Episode 33 - Michèle Audin

Evelyn Lamb: Hello and welcome to My Favorite Theorem, a math podcast where we ask mathematicians what their favorite theorem is. I’m one of your hosts, Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: I’m all right. It’s fall here, or hopefully getting to be fall soon.

KK: Never heard of it.

EL: Yeah. Florida doesn’t have that so much. But yeah, things are going well here. We had a major plumbing emergency earlier this month that is now solved.

KK: My big news is that I’m now the chair of the math department here at the university.

EL: Oh yes, that’s right.

KK: So my volume of email has increased substantially, but it’s an exciting time. We’re hiring more people, and I’m really looking forward to this new phase of my career. So good times.

EL: Great.

KK: But let’s talk about math.

EL: Yes, let’s talk about math. We’re very happy today to have Michèle Audin. Yeah, welcome, Michèle. Can you tell us a little bit about yourself?

Michèle Audin: Hello. I’m Michèle Audin. I used to be a mathematician. I’m retired now. But I was working on symplectic geometry, mainly, and I was interested also in the history of mathematics. More precisely, in the history of mathematicians.

EL: Yeah, and I came across you when I was reading about Kovalevskaya, and I just loved your book about her. It took me a little while to figure out what it was. It’s not a traditional biography. But I just loved it, and I was like, “I really want to talk to this person.”

MA: I wanted to write a book where there would be history and mathematics and literature also. Because she was a mathematician, but she was also a novelist. She wrote novels and things like that. I thought her mathematics were very beautiful. I love her mathematics very much. But she was a very complete kind of person, so I wanted to have a book like that.

KK: So now I need to read this.

EL: Yeah. The English title is Remembering Sofya Kovalevskaya. Is that right?

MA: Yeah.

KK: I’ll look this up.

EL: Yeah. So, what is your favorite theorem?

MA: My favorite theorem today is Stokes’ formula.

EL: Oh, great!

KK: Oh, Stokes’ theorem. Great.

EL: Can you tell our listeners a little bit about it?

MA: Okay, so, it’s a theorem, okay. Why I love this theorem: Usually when you are a mathematician, you are forced to face the question, what is it useful for? Usually I’ll try to explain that I’m doing very pure mathematics and maybe it will be useful someday, but I don’t know when and for what. And this theorem is quite the opposite in some sense. It just appeared at the beginning of the 19th century as a theorem on hydrodynamics and electrostatics, things like that. It was very applied mathematics at the very beginning. After one century, the theorem became a very abstract thing, the basis of abstract mathematics, like algebraic topology and things like that. So this just inverts the movement of what we are thinking usually about applied and pure mathematics. So that’s the reason why I like this theorem. Also the fact that it has many different aspects. I mean, it’s a formula, but you have a lot of different ways to write it with integrals, so that’s nice. It’s like a character in a novel.

KK: Yeah, so the general version, of course, is that the integral of, what, d-omega over the manifold is the same as the integral of omega over the boundary. But that’s not how we teach it to students.

MA: Yeah, sure. That’s how it became at the very end of the story. But at the very beginning of the story, it was not like that. It was three integrals of a very complicated thing, equal to something with a different number of integrals. There are a lot of derivatives and integrals. It’s quite complicated. At the very end, it became something very abstract and very beautiful.
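[Ed. note: in modern notation, the abstract version Kevin stated above is

\[ \int_M d\omega = \int_{\partial M} \omega, \]

and the classical 19th-century statements discussed next are its low-dimensional special cases:

Green’s theorem: \( \oint_{\partial D} P\,dx + Q\,dy = \iint_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dA \)

Kelvin–Stokes theorem: \( \oint_{\partial S} \mathbf{F} \cdot d\mathbf{r} = \iint_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S} \)

Divergence (Gauss–Ostrogradsky) theorem: \( \iint_{\partial V} \mathbf{F} \cdot d\mathbf{S} = \iiint_V (\nabla \cdot \mathbf{F})\, dV \)]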

KK: So I don’t know that I know my history. When we teach this to calculus students anymore, we show them Green’s theorem, and there are two versions of Green’s theorem that we show them, even though they’re the same. Then we show them something we call Stokes’ theorem, which is about surface integrals and then the integral around the boundary. And then there’s Gauss’s divergence theorem, which relates a triple integral to a surface integral. The fact that Gauss’s name is attached to that is probably false, right? Did Gauss do it first?

MA: Gauss had this theorem about the flux—do you say flux?

KK: Yeah.

MA: The flux of the electric—there are charges inside the surface, and you have the flux of the electric field. This was a theorem of Gauss at the very beginning. That was the first occurrence of the Stokes’ formula. Then there was this Ostrogradsky formula, which is related to water flowing from somewhere. So he just replaced the electric charges by water.

KK: Sort of the same difference, right? Electricity, water, whatever.

MA: Yes, it’s how you come to abstraction.

KK: That’s right.

MA: Then there was the Green theorem, then there is Stokes’ formula, which Stokes never proved. There was this very beautiful history. And then in the 20th century, it became the basis for de Rham theory. That’s very interesting, and moreover there were very interesting people working on that in the various countries in Europe. At the time, mathematics was made in Europe, I’m sorry about that.

KK: Well, that’s how it was.

MA: And so there are many interesting mathematicians, many characters, different characters. So it’s like a novel. The main character is the formula, and the others are the mathematicians.

EL: Yeah. And so who are some of your favorite mathematicians from that story? Anyone that stands out to you?

MA: Okay, there are two of them: Ostrogradsky and Green. Do you know who Green was?

EL: I don't know about him as a person really.

MA: Yeah, really? Do you know, Kevin? No.

KK: No, I don't.

MA: Okay. So nobody knows, by the way. So he was just the son of a baker in Nottingham. And this baker became very rich and decided to buy a mill and then to put his son to be the miller. The son was Green. Nobody knows where he learned anything. He spent one year in primary school in Nottingham, and that’s it. And he was a member of some kind of, you know, there are books…it’s not a library, but morally it’s a library. Okay. And that’s it. And then appears a book, which is called, let me remember how it is called. It’s called An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism. And this appears in 1828.

EL: And this is just out of nowhere?

MA: Out of nowhere. And then the professors in Cambridge say, “Okay, it’s impossible. We have to bring that guy here.” So they take the miller from his mill and they put him in the University of Cambridge. So he was about, I don’t know, 30 or 40. And of course, it was not very convenient for the son of a baker to be a student with the sons of the gentlemen of England.

KK: Sure.

MA: Okay. So he didn’t stay there. He left, and then he died and nobody knew about that. There was this book, and that’s it.

KK: So he was he was 13 or 14 years old when he wrote this? [Ed. note: Kevin and Evelyn had misheard Dr. Audin. Green was about 35 when he wrote it. The joys of international video call reception!]

MA: Yeah. And then he died, and nobody knew except that—

KK: Wow.

MA: Wow. And then appears a guy called Thomson, Lord Kelvin later. This was a very young guy, and he decided to go to Paris to speak with French mathematicians like Cauchy and Liouville. And then it was a very long trip, and he took with him a few books to read during the journey. And among these books was this Green book, and he was completely excited about that. And he arrived in Paris and decided to speak of this Green theorem and this work to everybody in Paris. There are letters and lots of documentation about that. And then this is how the Green formula appeared in mathematics.

EL: Interesting! Yeah, I didn't know about that story at all. Thanks.

KK: It’s fascinating.

MA: Nobody knows. Yeah, that's very interesting.

KK: Isn’t it the case that what we know as Stokes’ theorem was set as an exam problem at Cambridge?

MA: Yeah, exactly. So it began with a letter of Lord Kelvin to Stokes. They were very friendly, about the same age, both doing mathematics and physics, but they were not in the same part of the world, so they were writing letters. And once Thomson, Kelvin, sent a letter to Stokes speaking of mathematics, and at the very end a postscript where he said: You know, this formula should be very interesting. And he writes something which is what we now know as the Stokes theorem.

And then the guy Stokes, he had to make a problem for an exam, and he gave this as an examination. You know, it was in Cambridge, they have to be very strong.

KK: Sure.

MA: And this is why it’s called the Stokes’ formula.

EL: Wow.

KK: Wow. Yeah, I sort of knew that story. I didn’t know exactly how it came to be. I knew somewhere in the back of my mind that it had been set as an exam problem.

MA: It’s written in a book of Maxwell.

KK: Okay.

EL: And so the second person you mentioned, I forget the name,

MA: Ostrogradsky. Well, I don’t know how to pronounce it in Russian, and even in English, but Ostrogradsky, something like that. So he was a student in mathematics in Ukraine, which was part of Russia at that time, by the way. And he was passing his exams, and among the examination topics there was religion. So he didn’t go for that, so he was expelled from the university, and he decided to go to Paris. So it was in 1820, something like that. He went to Paris. He arrived there, and he had no exams, and he knew nobody, and he made connections with a lot of people, especially with Cauchy, who was not a very nice guy, but he was very nice to Ostrogradsky.

And then he came back to Russia and he was the director of all the professors teaching mathematics in military schools in Russia. So it was quite important. And he wrote books about differential calculus—what we call differential calculus in France but you call calculus in the U.S. He wrote a book like that, and for instance, because we were speaking of Kovalevskaya: when she was a child, the walls of her bedroom were papered with the sheets of the course of Ostrogradsky, and she read that when she was a little girl. She was very good at calculus.

This is another story, I’m sorry.

KK: No, this is the best part.

MA: And so, next question.

KK: Okay, so now I’ve got to know: What does one pair with Stokes’ theorem?

MA: Ah, a novel, of course.

EL: Of course!

KK: A novel. Which one?

MA: Okay, I wrote one, so I’m doing my own advertisement.

EL: Yeah, I was hoping we could talk about this. So yeah, tell us more about this novel.

MA: Okay, this is called Stokes’ Formula, a novel—La formule de Stokes, roman. I mean, the word “novel” is in the title. In this book I tell lots of stories about the mathematicians, but also about the formula itself, the theorem itself. How to say that? It’s not written the way the historians of mathematics like, or want you to write. There are people speaking and dialogues and things like that. For instance, at the end of the book there is the first meeting of the Bourbaki mathematicians, the Bourbaki group. They are in a restaurant, and they are having small talk, like you have in a restaurant. There are six of them, and they order the food and they discuss mathematics. It looks like just small talk, but actually everything they say comes from the Bourbaki archives.

EL: Oh wow.

MA: Well, this is a way to write. And also this is a book. How to say that? I decided it would be very boring if the history of Stokes’ formula was told from a chronological point of view, so it doesn’t start at the beginning, and it does not end at the end of the story. Every chapter’s title is a date: first of January, second of January, and they are ordered according to the dates. So it starts, for instance, with the first of January, and then you have the first of February, and so on, until the end, which is in December, of course. But it’s not during the same year.

EL: Right.

MA: Well, the first of January is in 1862, and the fifth of January is in 1857, and so on. I was very, very fortunate, I was very happy, that the very end of the story is in December, because the first Bourbaki meeting was in December, and I wanted to have the end there. Okay, so there are different stories, and they are told on different dates, but not using the chronology. And also in the book I explain what the formula means. You are comparing things inside the volume with what happens on the surface of the volume. I tried to explain the mathematics.

Also, in every chapter there is a formula, a different formula. I think it’s very important to show that formulas can be beautiful. And some are more beautiful than others. And the reader can just skip the formula, or look at it and see that it’s beautiful, even without understanding it completely.

There were different constraints I used to write the book, and one of them was to have a formula, exactly one formula in every chapter.

EL: Yeah, and one of the reasons we wanted to talk to you—not just that I read your book about Kovalevskaya and kind of fell in love with it—but also because since leaving math academia, you’ve been doing a lot more literature, including being part of the Oulipo group, right, in France?

MA: Yes. You want me to explain what it is?

EL: Yeah, I don't really know what it is, so it'd be great if you could tell us a little more about that.

MA: Okay. It’s a group—for mathematicians, I should say it’s a set—of writers and a few mathematicians. It was founded in 1960 by Raymond Queneau and François Le Lionnais. The idea is to find constraints to write some literary texts. For instance, the most famous may be the novel by Georges Perec, La Disparition. It was translated into English with the title A Void. It’s a rather long novel which doesn’t use the letter e. In French, it is really very difficult.

EL: Yeah.

MA: In English also, but in French even more.

EL: Oh, wow.

MA: Because you cannot use the feminine, for instance.

EL: Oh, right. That is kind of a problem.

MA: Okay, so some of the constraints have a mathematical background. For instance, this is not the case for La Disparition, but it is the case for some other constraints, like, I don’t know, using permutations or graph theory to construct a text.

KK: I actually know a little about this. I taught a class in mathematics and literature a few years ago, and I did talk about Oulipo. We did some of these—there are these generators on the internet where, one rule is, you pick a number, say five, and you look at every noun and replace it by the one that is five entries later in the dictionary, for example. And there are websites where you feed it text, and it’s a bit imperfect because it doesn’t always classify things as nouns properly, but it’s an interesting exercise. Or there was another one with sonnets. So you would create sonnets. Sonnets have 14 lines, but you would do it sort of as an Exquisite Corpse, where you would write all these different versions of the lines, and then you could recombine them to get a really large number, I forget now how many. So, yeah, things like that, right?

MA: Yeah, this is Cent mille milliards de poèmes, which is 10 to the 14.

KK: That’s right, yeah. So 10 different sonnets. But yeah, it’s really really interesting.

MA: The first example you gave, which is called in French “S plus sept,” S plus seven: you start from a substantive, a noun, and you take the seventh one following it in a dictionary.

KK: That’s right.

MA: It depends on the dictionary you use, of course.
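[Ed. note: for the curious, here is a toy Python sketch of this kind of substitution. It is a hypothetical illustration, not one of the generators Kevin mentions: it treats every word it finds in a small alphabetized word list as a noun, which is exactly the imperfection he describes, and, as Michèle says, the output depends on the dictionary you feed it.]

def s_plus_n(text, nouns, shift=7):
    """Replace each word found in an alphabetized noun list with the
    word `shift` entries later in that list; leave other words alone."""
    index = {word: i for i, word in enumerate(nouns)}
    out = []
    for token in text.split():
        i = index.get(token.lower())
        if i is not None and i + shift < len(nouns):
            out.append(nouns[i + shift])
        else:
            out.append(token)
    return " ".join(out)

# A tiny stand-in dictionary; a real run would load a full word list.
nouns = ["apple", "arrow", "badge", "baker", "beach", "bell", "bird",
         "board", "boat", "book", "bottle", "bowl", "box", "boy"]
print(s_plus_n("the baker reads a book", nouns))
# -> "the bottle reads a book" ("book" has no 7th successor in this tiny list)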

KK: Sure.

EL: Right.

MA: So that's what they did at the beginning, but now they're all different.

KK: Sure.

EL: Yeah, it's a really neat creative exercise to try to do that kind of constraint writing.

MA: That kind of constraint, the calendar constraint I used in this book, is based on books by Michelle Grangaud, who is a poet from the Oulipo also, and she wrote calendars, which were books of poetry. That’s where the idea comes from.

EL: Yeah, and I assume this, your novel has been translated into English?

MA: Not yet.

EL: Oh, okay.

MA: Somebody told me she would do it, and she started, and I have no news now. I don’t know if she was thinking of a publisher or not. If she can do something, I will be very grateful.

EL: Yeah, so it’s a good reason to brush up your French, then, to read this novel.

And where can people find—is there writing work of yours that people can find on a website or something that has it all together?

MA: Okay, there is a website of the Oulipo, first of all, oulipo.net or something like that. Very easy to find.

KK: We’ll find it.

MA: Also, I have a webpage myself, but what I write is usually on the Oulipo site. I have also a site, a history site. It’s about history but not about mathematics. It’s about the Paris Commune in 1871. It has nothing to do with mathematics, but this is one of the things I am working on.

EL: Okay. Yeah, we'll share that with with people so they can find out more of this stuff.

MA: Thank you.

KK: Alright, this has been great fun. I learned a lot today. This is the best part of doing this podcast, actually, that Evelyn and I really learn all kinds of cool stuff and talk to interesting people. So we really appreciate you taking the time to talk to us today, and thanks for persevering through the technical difficulties.

MA: Yes. So we are finished? Okay. Goodbye.

EL: Bye.

Episode 32 - Anil Venkatesh

Evelyn Lamb: Hello, and welcome to My Favorite Theorem, a math podcast where we ask mathematicians to tell us about their favorite theorems. I'm one of your hosts, Evelyn Lamb. I'm a freelance math and science writer in Salt Lake City, Utah. And this is your other host,

Kevin Knudson: Hi, I'm Kevin Knudson, professor of mathematics at the University of Florida. How you doing, Evelyn?

EL: I’m all right. I had a really weird dream last night where I couldn't read numbers. And I was like, trying to find the page numbers in this book. And I kept having to ask someone, "Oh, is this 370?" Because it looked like 311 to me. For some reason those are two of the numbers that like somehow, yeah, those numbers don't look the same. But yeah, it was so weird. I woke up, and I opened a book. And I was like, "Okay, good. I can read numbers. Life is ok." But yeah, it was a bit disorienting.

KK: That's weird. I’ve never had anything like that.

EL: So how about you?

KK: Well, I don't know. I was in California earlier this week, so I'm trying to readjust to Florida after what was really nice in California. It’s just gruesomely hot here and gross. But anyway, enough about that. Yeah.

EL: Yeah. So today, we're very happy to have Anil Venkatesh joining us. Hi, Anil, can you tell us a little bit about yourself?

Anil Venkatesh: Hi, Evelyn. Hi, Kevin. Yes, I am an applied mathematician. I teach at a school called Ferris State University in Michigan. And I am also a musician, I play soccer, and I’m the lead Content Developer for a commercial video game.

EL: Oh, wow. And how I ran across your name is through the music connection. Because you sometimes give talks at the Joint Math Meetings and things like that. And I think I remember seeing one of your talks there. But I didn't know about the game developing. What game is that?

AV: It's called Star Sonata. And I'll plug it maybe at the end of the episode. But it actually relates because the theorem I'm going to talk about, well, I ran across it in my development work, actually.

EL: Oh, cool. So let's get right to it.

AV: Okay. Well, I'm going to talk about the Shapley value, which is due to Lloyd Shapley. The paper came out in 1953, and there's a theorem in that paper. It did not come to be known as the Shapley theorem, because that's a different theorem. But it's an amazing theorem, and I think the reason the theorem didn't gain that much recognition is that the value that it proved something about is what really took off.

So should I tell you a little bit about what the Shapley value is, and why it's cool?

KK: Yeah, let’s have it.

AV: Well, so actually, I picked up this book that came out in ’88, so quite a long time after the Shapley value was originally introduced. And this book is amazing. It's got like 15 chapters. And each chapter is a paper by some mathematician or economist talking about how they use the Shapley value. So it's just this thing that really caught on in a bunch of different disciplines. But it is an econ result, which is why it took me a while to actually track it down once I came up with the math behind it.

EL: Right.

AV: So putting this into context, in 1953 people were thinking a lot about diplomacy, they were thinking about the Cold War, or the ensuing Cold War. And so here's a great application of the Shapley value. So you have the United Nations. Its Security Council has five permanent members who can veto resolutions and then 10 rotating members. So for a resolution to pass, I don't know if this is exactly how it works now, but at least when the paper was written, you needed nine out of 15 to vote in favor.

KK: That’s still correct.

AV: And of those nine, you needed all five of the permanent members. So you couldn’t have any of those vetoes. So you might ask, How powerful is it to have the veto? Can we quantify the negotiating strength of possessing a veto in this committee?

KK: Okay.

EL: Okay.

AV: Okay. And yes, you can with the Shapley value, and it comes down to, well, do you want to hazard a guess? Like, how many times better is it to have a veto?

KK: Like a million.

AV: It's a lot better. You know, I didn't really have a frame of reference for guessing. It's about 100.

EL: Yeah, I don't know… Oh, how much?

AV: A hundred.

KK: I was only off by four orders of magnitude! That’s pretty good.

AV: Yeah.

EL: Yeah, not bad.

AV: So the way the Shapley value carries this out is you imagine out of 100 percent, let's apportion pieces of that to each of the 15 members according to how much power they have in the committee.

KK: Okay.

AV: And so if it was 20% to each of the permanent members, there wouldn't be any left for the remaining 10 voting members, right? In actuality, it's 19.6% to each of the five permanent members.

KK: Okay.

EL: Wow.

AV: And then that last sliver gets apportioned 10 ways to the rotating members. And that's how we come up with roughly 100 times more powerful with the veto.

EL: Okay.

AV: I will tell you how this value is computed, and I'll tell you about the theorem. But I'll give you one more example, which I thought was pretty neat and timely. So in the US, laws get made when the House of Representatives and the Senate both vote with the majority in favor of the bill, and then the President does not veto that bill.

KK: Yes.

AV: Or if the president vetoes, then we need a two-thirds majority in both houses to override that veto. So you could ask, well, if you think of it just as the House, the Senate and the President, how much of the negotiating power gets apportioned to each of those three bodies when it comes to finally creating a law? And if you apply the Shapley value, you get ratios of 5:5:2, which means the president alone has a one-sixth say in the creation of a law.

EL: Okay. Yeah, when you said that I was thinking, I mean, if you do the Security Council one, the people with vetoes had almost one fifth each. So I was thinking maybe, with the president being one third of the things that could veto, it would be about a third for the president, but that seemed too high.

AV: Yes. So if it was not possible to override the veto, then it would be a little bigger, right?

EL: Right, right. Okay.

AV: Yes. Now, if you actually break this down on the individual basis, you might think, okay, well, the House gets 5 out of 12 of the power, but there are so many people in the House, so each individual person in the House doesn't have as much power, right?

KK: Yes

AV: When you break it down that way, going individual representative, individual senator, and president, the ratio goes like 2 to 9 to 350.

EL: Okay.

AV: So the President actually has way more power than any one individual lawmaker.

KK: Well, that makes sense, right?

AV: Yes, it does. And so, yeah. The great thing about the Shapley value is that it's not telling you things you don't know exactly, but it's quantifying things. So we know precisely what the balance of power is. Of course, you've got to ask, “Okay, so this sounds like a fun trick. But how is it done anyway?”

EL: Yeah.

AV: The principle behind the Shapley value is beautiful in its simplicity. The idea is this—and actually when I tell you this, it's going to remind you of a lemma that's already been on this podcast.

EL: Okay.

AV: More than one, actually; this is just a very standard kind of technique. So imagine all the possible orderings of voters. So suppose they come in one at a time and cast their vote. Under how many of these arrangements is a particular person casting the pivotal vote? The more arrangements in which Person A casts the pivotal vote, the more power Person A is allotted.

EL: Okay.

AV: That's it. So we actually just take an average over all possible orderings of votes and basically count up how many of those orderings involve a particular person casting the pivotal vote, and that's how we derive this breakdown of power.
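[Ed. note: a minimal Python sketch, not from the episode, of exactly this computation for the Security Council game described earlier. Instead of looping over all 15! orderings, it uses the equivalent standard weighting of each coalition by the number of orderings in which that coalition votes before the player in question.]

import math
from itertools import combinations

N = 15
PERMANENT = set(range(5))  # players 0-4 stand in for the permanent members

def wins(coalition):
    """A coalition passes a resolution with at least 9 votes,
    including all 5 permanent members."""
    return len(coalition) >= 9 and PERMANENT <= coalition

def shapley(player):
    """Probability that `player` casts the pivotal vote in a uniformly
    random ordering: each coalition S of the other players is weighted
    by |S|! (N-|S|-1)! / N!, the fraction of orderings in which exactly
    S precedes the player."""
    others = [p for p in range(N) if p != player]
    value = 0.0
    for k in range(len(others) + 1):
        weight = math.factorial(k) * math.factorial(N - k - 1) / math.factorial(N)
        for subset in combinations(others, k):
            s = set(subset)
            if wins(s | {player}) and not wins(s):  # player is pivotal here
                value += weight
    return value

print(f"permanent member: {shapley(0):.4f}")  # about 0.1963, i.e. 19.6%
print(f"rotating member:  {shapley(5):.4f}")  # about 0.0019, roughly 100 times less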

EL: So this is a lot like having everyone at a vertex and looking at symmetries of this object, which is kind of reminding me of Mohammed Omar's episode about Burnside’s lemma. I assume that's the one that you were thinking about.

AV: Yes, that’s the one I was thinking about.

EL: But you said another one as well.

AV: The other one hasn’t actually been on this podcast yet. And I could have talked about this one instead. But the Cohen-Lenstra heuristics for the frequency of ideal class groups of imaginary quadratic extensions also involve an idea like this. Now, this one gets a little deeper, but essentially, if you dig into the Shapley value, you notice that the bigger the group is, the less power each person has in it. And yeah, so there are various other twists you can explore using the Shapley value. So in the Cohen-Lenstra heuristics, you essentially divide by the automorphisms of a group; you weight things inversely by the number of automorphisms they have. Anyway, that one also came to mind because you take sort of an average across all the groups of the same size. So, I'm not claiming that there's some kind of categorical equivalence between the Cohen-Lenstra heuristics and the Shapley value, but this idea of averaging over an entire space comes up in a bunch of different branches of mathematics.

KK: Sure.

EL: Yeah. Very cool. So, we've got the Shapley value now, and what is the theorem?

AV: The theorem is what makes it all really pop; the theorem is why the Shapley value is so ubiquitous. There is no other logical apportionment of the 100 percent than the Shapley value’s algorithm.

EL: Okay.

AV: There is no other sensible way to quantify the power of a person in the committee.

EL: Interesting.

KK: What’s the definition of sensible?

AV: I’ll give it to you, and when you hear it—this is how weak the assumptions are that already give you this theorem, and that's why it's amazing.

KK: Sure.

AV: Efficiency: you must apportion all one hundred percent.

KK: Okay.

AV: Of course. Symmetry: if you rename the people but you don't change their voting rules, the Shapley value is not affected by that kind of game.

KK: Sure.

AV: Null player: if a person has no voting power at all, they get zero percent.

KK: All right.

AV: Obviously. And finally, additivity. That one takes a little bit more thinking, but it's nothing crazy. It's just saying, like, if there are two different votes happening, then your power in the total situation is the sum of your power in the one vote and your power in the other vote. If there's more than one game being played, basically, the Shapley value is additive over those games.

KK: That's the weirdest one, but yeah, okay.

AV: Yeah, I looked at it. I thought a little bit about what to say. And then honestly, if you dig into it, you realize it's just, like, not saying anything amazing. You have to think about this: the Shapley value is a function, right? So we're working in the space of functions, and weird things can happen there. So this is just asserting you don't have any really wild and woolly functions. We're not considering that.

EL: Okay.

AV: So you just have these assumptions. And then there's only one such value. And the way they prove it is by construction. They basically write down a basis of functions, and they write down a formula using that basis, and there can only be one because it's from a basis, and then they prove that formula has the properties desired. It's a really short paper, it's like a 10-page paper with four references. It's amazing.
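[Ed. note: in the standard notation, the unique value satisfying these four axioms gives player \(i\) in a game \(v\) on the player set \(N\), with \(n = |N|\),

\[ \phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \bigl( v(S \cup \{i\}) - v(S) \bigr), \]

which is exactly the average of player \(i\)’s marginal contribution over all orderings described above.]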

EL: You said this is the 1953 paper by Shapley?

AV: Yes, by Shapley.

EL: Yeah, was there another author too, or just?

AV: No, Shapley collaborated with many people on related projects, but the original paper was just by him.

EL: Yeah. So I assume people have maybe looked at Shapley values of individual voters, like in the US or in an individual state or local election. We're recording this in election season, a little bit before the midterm elections.

KK: Yeah, can’t end soon enough here.

EL: Yeah, I guess. Oh, I guess actually, that wouldn't be that interesting, because it would just be, I mean, within a state or something. But I guess, the Shapley value of someone in one state versus another state might be a fairly interesting question.

AV: Oh, yes. But even the Shapley value for one person in a certain district or another district, this gets into gerrymandering, for sure.

KK: Right.

AV: I don't know to what extent people have thought about the Shapley value applied in this way. I imagine they have, although I haven't personally seen it mentioned, or anything that looks like it in the gerrymandering math groups that have been doing the work.

KK: No, I mean, I've been working with them a little bit, too. I mean, not really. And yeah, of course, it sort of gets to things like, you know, the Senate is sort of fundamentally undemocratic.

EL: Right.

KK: I mean, the individual senators kind of have a lot of power. But you know, the voters in Wyoming have a lot more—you know, their vote counts more than a voter’s in, say, Florida.

EL: Right? Or the voter in Utah versus the voter in Florida.

AV: I'm thinking about within a specific state, if you look at the different districts. I mean, I read a little bit about this. And I see that they're trying to resolve kind of the tension between the ability to cast a pivotal vote and the ability to be grouped with people who are like-minded. I don't know, it seems like, I wonder whether there's some extent to which they're reinventing the wheel, and we already have a way to quantify the ability to cast a pivotal vote. There's only one way to do it.

EL: Interesting.

AV: I don't know. Yeah, I'm not super informed on that. But it feels like it would apply.

KK: Yeah. So what drew you to this? I mean, okay. So fun fact: Anil and I actually had the same PhD advisor, albeit a couple of decades apart, and neither of us works in this area, really. So what drew you to this?

AV: Well, that's why I mentioned my game development background. So this game, Star Sonata, is one of those massively multiplayer online role-playing games. It actually was created back in 2004, when World of Warcraft had just started. And basically the genre of game had just been created. So that's why the game started the way it did. But it's kind of just an indie game that stuck around and had its loyal followers since then.

And I also played the game myself, but several years ago, I just kind of got involved in the development side. I think initially, they wanted—Well, I was kind of upset as a player, I felt they’d put some stuff in the game that didn't work that well. So I said, “Listen, why don't you just bring me on as a volunteer, and I'll do quality assurance for you.” But after some time, I started finding a niche for myself in the development team, because I have these quantitative skills that no one else on the team really had that background in. So a little later, I also noticed that I actually had pretty decent managing skills. So here I am, I'm now basically managing the developers of the game.

And one of my colleagues there asked me an interesting question. And he was kind of wrestling with it in a spreadsheet, and he didn't know how to do it. So the question is this: suppose you're going to let the player have, like, six pieces of equipment, and each piece of equipment, let's say, increases their power in the game by some percentage. Power could be, like, you know, your ability to kill monsters or something.

EL: Yeah.

AV: So the thing is, each piece of equipment multiplicatively increases your power. So your overall power is given by some product, let's say (1+a)(1+b)(1+c), and so on, one letter for each piece of equipment. So you write down this product, and you have to use the distributive property to work out the final answer. And it looks like 1 plus the sum of those letters plus a bunch of cross-terms.
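[Ed. note: for three pieces of equipment the expansion reads

\[ (1+a)(1+b)(1+c) = 1 + (a + b + c) + (ab + bc + ca) + abc, \]

with the elementary symmetric functions as the cross-terms Kevin points out next.]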

KK: So symmetric functions, right?

AV: Yes, exactly. So his question was, “Okay, now that we're carrying all six of these pieces of equipment, how much of that total power is due to each piece of equipment?”

EL: Okay.

AV: How much did each item contribute to the overall power of the player? The reason we want to know this is if we create a new piece of equipment the player can obtain, and we put that in, and then suddenly we discover that everyone in the game is just using that, that's not good game design. It's boring, right? We want there to be some variety. So we need a way to quantify ahead of time whether that will happen, whether a new thing in the game is going to just become the only thing anyone cares about, and they'll eschew all alternatives. So he asked me, basically, how can I quantify whether this will happen? And I thought about it. And as you can tell, what this is asking about is the Shapley value in a special case where all the actors contribute multiplicatively to the total. And I didn't know that at the time because I'd never learned about the Shapley value. I didn't really learn much econ.

KK: Sure.

AV: So I just derived it, as it turns out, independently, in this special case. And it works out to a very beautiful formula involving essentially the harmonic means of all those letters. So reciprocals of sums of reciprocals. The idea there—and I mean, I can give a real simple example—like, suppose you have two items. One of them increases your power by 20%, and one increases it by 30%. So your overall power is 1.2 times 1.3. So what does that come to? 1.56. So of that 56% increase, 20% goes to the one item, 30 goes to the other, but 6% is left over. And how should that be apportioned?

EL: Right.

AV: Well, if you think about it, you might think, “Well, okay, the 30 percent should get the lion's share.” And maybe so, maybe so. But then there's a competing idea: because that 30% was pretty big, the 20 percent’s effect is amplified, right? So there's not an immediately obvious way to split it. But you can kind of do it in a principled fashion. So once I wrote this down, you know, I gave it to my colleague, he implemented it, and it improved our ability to make the game fun. But then I also started wondering, look, this is nice and all, but someone must have thought of this before, you know? So I don't actually remember now how I came across it, whether I just found it or somebody sent it to me. But one way or another, I found the Shapley value on Wikipedia. I read about it, and I immediately recognized it as the generalization of what I'd done. So, yeah.
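[Ed. note: a small Python sketch, hypothetical rather than Star Sonata’s actual code, of the split described here, applied to the 20%/30% example: it averages each item’s marginal multiplicative contribution over all orderings of the items.]

import math
from itertools import permutations

def multiplicative_shapley(bonuses):
    """Split the total multiplicative gain by averaging each item's
    marginal contribution over all n! orderings of the items."""
    n = len(bonuses)
    shares = [0.0] * n
    for order in permutations(range(n)):
        running = 1.0
        for item in order:
            boosted = running * (1.0 + bonuses[item])
            shares[item] += boosted - running  # marginal gain in this ordering
            running = boosted
    return [share / math.factorial(n) for share in shares]

# Each item gets credit for amplifying the other, so the leftover 6%
# splits evenly in this two-item case: [0.23, 0.33], totaling 0.56.
print(multiplicative_shapley([0.2, 0.3]))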

EL: Oh, yeah. Well, and this seems like the kind of thing that would come up in a lot of different settings, too. A friend of mine one time was talking about a problem where, you know, they had sold more units and also increased the price, or something. And, you know, how do you allocate the value of the increased unit sales versus the increased price, or something? It might be slightly different; the Shapley value might not apply completely there.

AV: No, it does.

EL: Okay.

AV: Yes, that’s called the Aumann-Shapley pricing rule.

EL: Okay, yeah.

AV: Yeah. So, questions of fair division and cost allocation are definitely applications of the Shapley value. So, yeah.

EL: Neat. Thanks.

KK: Very cool. The other fun part of this podcast is that we ask our guests to pair their theorem with something. What have you chosen to pair this with?

AV: Well, like many of your guests, I really struggled with this question.

KK: Good.

AV: And the first thing I thought of, which won't be my choice, was a pie, because you have to, you know, fairly divide the pie. I told this to one of my friends, and I explained what the Shapley value was, and she was like, “No, that's a terrible idea, because you want to divide the pie equally.” But the Shapley value is this prescription for dividing unequally, but according to some other principle. So it won't be a pie. So I actually decided this morning: I'm going to pair it with a nice restaurant you go to with your friends, but then they don't let you split the bill.

KK: Ah.

EL: Okay. Yeah, so you have to figure out what numbers to write on the back of the receipt for them to charge your credit cards. Or for the added challenge, you could decide, like, given the available cash in each person's wallet, can you do it?

AV: Oh, don't even get me started.

KK: This is the problem, right? Nobody has cash. So when you're trying to figure out how to split the bill…People think that mathematicians are really good at this kind of thing, and in my experience, when you go to a seminar dinner or whatever, nobody can figure out how to split the bill.

AV: If I'm out with a bunch of people, and we have to split a bill, let it not be mathematicians, that’s what I say. Let it be anyone else.

KK: Yeah, because some people want to be completely exact: each person ordered a certain thing and it cost so much and you pay that, and then you divide the tip proportionally, all this stuff. Whereas I'm more, you know, especially the older I get, the less I care about five or ten dollars one way or the other.

AV: Yeah, well, I find it's good if I go out with a bunch of people who are kind of scared of math, because then they just let me do it. You know, I become the benevolent dictator of the situation.

KK: That’s happened to me too, yeah.

EL: So, I don't remember what city Ferris State is in.

AV: Well, it's in a town called Big Rapids, which is a little ways from Grand Rapids, which is a little bit more well-known.

EL: Slightly grander. So, yeah, you're the slightly lesser rapids?

AV: So, there are at least five rapids in Michigan, like five different places named something rapids.

KK: Sure.

EL: So do you have a Big Rapids restaurant in mind for this pairing?

AV: You know, they're all really nice about splitting the bills there. So I was thinking something maybe in New York City or Boston.

KK: College towns are pretty good about this. In fact, they'll let you hand them five cards, and they'll just deal with it.

AV: Yeah, totally.

KK: Yeah, yeah, very nice. So your rapids are big but Grand Rapids’ rapids are grander.

AV: They’re much grander. Don’t get me started about Elk Rapids. I don't know how to compare that to the other two.

KK: Elk Rapids?

EL: Yeah, Big, Elk, and Grand, not clear what order those go in. [I guess Iowa’s got the Cedar Rapids.]

AV: Yes. I don't remember the other two rapids, but I know I identified them at some point.

EL: Well, thank you so much for joining us.

AV: Thanks for inviting me. It was great.

EL: Yeah, I learned something new today for sure.

KK: Math and a civics lesson, right?

EL: Yes. Everybody go vote. Although this episode will already be out. [Ed. note: Evelyn said this backwards. Voting occurred before the episode, not vice versa.] But get ready to vote in the next election!

KK: Yeah, well, it's never-ending, right? I mean, as soon as one election's over, they start talking about the next one. Thanks, Anil.

EL: All right, bye.

AV: Thank you.

[outro]

Episode 31 - Yen Duong

Evelyn Lamb: Welcome to My Favorite Theorem. I'm your host Evelyn Lamb. I'm a freelance math and science writer based in Salt Lake City. Today I am by myself because I'm on location. I am in Washington, DC right now for the Science Writers conference. That's the conference for the National Association of Science Writers, and I'm really happy to be joined by Yen Duong, who is also a science writer with a math background. So yeah, can you tell us a little bit about yourself?

Yen Duong: Yeah, so I am in Charlotte, North Carolina, and I work part time for North Carolina Health News. And the rest of my time, I am a freelance math and science writer like you.

EL: Yeah.

YD: And I just finished the AAAS Mass Media Fellowship this summer, and before that I got my Ph.D. at UIC in geometric group theory.

EL: Yeah, and the AAAS fellowship is the way I started doing science writing as well. When you come to conferences like these, you find out a lot of people who are more senior in the field have also gone through it. So it's really great. The application deadline, I believe, is in January. So we'll try to air this at a time when people can look into that and apply for it. But yeah, it's a great program that brings grad students in science, you know, math and other sciences, into newsrooms to learn a little bit about how the news gets made and how to report on science for a broader audience. So it was a great experience for me. It sounds like it was a great experience for you.

YD: Yeah, it's fantastic. It's 10 weeks, I think this coming year, the stipend will be $6,000. So that's great. It is paid. And for me, at least it jump started the rest of my career as a math and science writer.

EL: Yeah, definitely. And it’s nice to hear that it's being paid a little more. I lived in New York City for less than that. And that was difficult. Okay, so do you want to tell us about your favorite theorem?

YD: I've been listening to this podcast for a while. And I was like, okay, I'll do a really fancy one to be really impressive, and people will think I'm fancy. But I decided not to do that, because I'm not that fancy, and I think it's silly to be that pretentious. So I'm going with one of the first theorems I learned as an undergrad, which is from Ramsey theory: that the Ramsey number R(3,3) equals six.

EL: Okay, great. So, yeah, tell us what a Ramsey number is.

YD: Okay, so this is from graph theory. And to explain the idea of saying R(3,3)=6, I'll just do the whole spiel.

EL: Yeah, yeah. And please use your hands a lot. It's really helpful for the podcast medium when you’re—Yeah, I know. Like Ramsey theory. I’m, like, moving my hands all around to show you what everything is.

YD: I will attempt to not grab pen and paper and start drawing things. Luckily, we don't have any available right now. Yeah. So the idea is that, let's say that you are trying to put together a committee of three people. And you either want all three people to pairwise know each other and have worked together before, or you want all three people to be relative strangers. What you don't want is one person in the middle that everyone talks to, while the other two people don't talk to each other. That's a bad committee. Yeah. So the question is, how many people do you need to look at to guarantee that you can find such a committee?

EL: Right, so how big is your pool going to be of people you're choosing?

YD: Exactly. So like, if I look at three people? Well, that's not great, because it's me, you, and someone in the next room. And there you go. We don't have a good committee. And if I look at 100 people, okay, I'm pretty sure I can find this with 100 people. So what Ramsey theory does is use graph theory to answer this question. And like I said, the giveaway was that the number is 6. And something that I really love about this theorem is that you can teach it to literal children—I think I taught it to 10-year-olds this summer.

EL: Nice.

YD: And it's just a really nice basic introduction to, in my opinion, the fun parts of math. These kids are like, “Ugh, I have to memorize equations, and I hate doing this.” And then I start drawing pictures and explaining the pigeonhole principle, and they're like, “Oh, I get it. I can do this.” And I'm like, “Yes, you can! Everyone can do math!”

EL: Yay. Yeah. So the proof for that is kind of like, you take a hexagon, right? Or the vertices of a hexagon, and try to build—what do you do to denote whether you have friends or strangers?

YD: So graph theory is when you have vertices, which are dots, and edges, which are lines in between the dots, and you use them to describe data and information systems. So in this case, we can make each person a dot, so we'll put six dots on a piece of paper. I do not have paper. I am using my hands. So we'll have six dots on a piece of paper, and we'll draw a blue line for friends, and we can draw a red line for strangers. So now our question becomes, how many dots do I need to make either a red triangle or a blue triangle? So if you have six dots, let's look at one person, and that person will be me. And I look out at this crowd of five people. So for at least three of those people, I will have the same color line going to them. They might all be strangers, so I'll have five red lines, or one might be a stranger and four friends—one red and four blue—but in that case, I have at least three blue ones. So I can just assume the repeated color is blue. So we'll just say, “Okay, I've got three blue lines going out.” So now I look at those three friends of mine. And I look at the relationships that they have with each other. This is really hard without pen and paper.

EL: Yeah, but luckily, our listeners have all gotten out their pens, two colors of pens, and they are drawing this at home. So it's fine.

YD: Excellent. Good job, listeners! So now you've got your three dots. And you've got three blue lines coming out of them to one common dot. So you've got four dots on your piece of paper. So if between any two of those three dots I draw a blue line, we've got our blue triangle, and we're done. We've got our committee.

EL: Yeah.

YD: Therefore, if I want to avoid that and make this a proof, I'd better draw red lines. Yeah, I should draw red lines. So now I've got three dots, and I've got red lines between all three of them. And there's my committee. So that's it. That's the entire proof. You can do it on a podcast in a few minutes. You can teach it to 10-year-olds. You can teach it to 60-year-olds. And I love it because it's like the gateway drug of mathematics proofs.
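[Ed. note: for listeners who would rather let a computer double-check the pen-and-paper argument, here is a brute-force Python sketch, not from the episode, that R(3,3) = 6: some red/blue coloring of the 10 edges of the complete graph K5 avoids a one-color triangle, but every coloring of the 15 edges of K6 contains one.]

from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j), with i < j, to 'red' or 'blue'."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_has_triangle(n):
    """Check all 2^(n choose 2) red/blue colorings of K_n."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product(("red", "blue"), repeat=len(edges))
    )

print(every_coloring_has_triangle(5))  # False: five people are not enough
print(every_coloring_has_triangle(6))  # True: six people always suffice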

EL: Yeah, it's really fun. And yeah, you can just sit down at home and do this. And—spoiler alert: to do this for four, to get a committee of four people, it's a little harder to sit down at home and do, right? Do you—I should have looked it up.

YD: Oh, the Erdos quote, right? Is that what you're talking about?

EL: Well, you can do four. Yeah, there's an Erdos quote about, I think, getting to six. Or five.

YD: So the Erdos quote is, paraphrased: if aliens come to the earth, and they tell us that they're going to destroy us unless we calculate R(5,5), then we should get all of the greatest minds in the world together and try to calculate it and solve it. But if the aliens say that we should try to compute R(6,6), then we should just try to destroy the aliens first.

EL: Yeah, so I think R(4,4) is something like 18. Like, it's doable, I mean, by a computer, I think, not by a person, unless you really like drawing very large graphs. But yeah, it's kind of amazing. The Ramsey numbers just grow so fast. And we've been saying R(3,3) or R(4,4), having the same number twice in those. There are also Ramsey numbers, right, where it's not symmetric.

YD: Like R(2,3) or R(2,4). Okay, so maybe two is not the greatest number for this. But yeah, you can do things where you say, oh, I'm going to have either a complete—so I'll either have a triangle of red, or I'll have four dots in blue, and they'll all be connected to each other with blue lines, a complete graph on four dots or however many dots.

EL: Yeah. So they don't have to be the same number. Although, you know, usually the same number is sort of a nicer one to look at. So how did you learn this theorem?

YD: Let's see. So I learned this through—I’ll just tag another great program—Budapest semesters in mathematics.

EL: Nice

YD: From a combinatorics professor. So BSM is when college students in the U.S. and Canada can go to Budapest for a semester and learn math from people there and hang out with all these other math students. It's a nice study abroad program for math. So that's when I first learned it. But since then, I think I've taught it to hundreds of people. I tell it to people in coffee shops, I break it out at cocktail parties. It's just my “math is fun, I promise!” little theorem. I think I've blogged about it.

EL: So watch out. If you're in a room with Yen, you will likely be told about this theorem.

YD: Yeah, that's my cocktail party theorem, that and Cantor’s diagonalization.

EL: Yeah, well, and cocktail parties are a place where people often, like, describe this theorem. Like, if you're having a party, and you want to make sure that any [ed. note: Evelyn stated this wrong; there shouldn’t have been an “any”] three people are mutual acquaintances, or mutual strangers. Although the committee one actually makes a lot more sense, because who thinks through a cocktail party that way? It's just a little contrived, like, “Oh, I must make sure the graph theory of my cocktail party is correct.” Like, I know a lot of mathematicians, and I go to a lot of their parties, but even I have never been to a party where someone did that. So on this podcast, we also like to ask you to pair your theorem with something. And what have you chosen for R(3,3)?

YD: I thought really hard about it, by the way.

EL: Yes. This is a hard part.

YD: Yeah. So I decided on broccoli with cheese sauce.

EL: Okay. Tell us why.

YD: Because it is the gateway vegetable, just like this theorem is the gateway theorem.

EL: Okay.

YD: Yeah. Like, my kids sometimes eat broccoli with cheese sauce. And it's sort of like trying to introduce them to the wonderful world of Brussels sprouts and carrots and delicious things. I feel like the cheese sauce is sort of this veneer of applicability that I threw on with the committee thing.

EL: Oh, very nice. Yeah.

YD: Even with the situation of the committee, like no one has ever tried to make a committee of three people who’ve all worked together or three people who didn’t. But, you know, it makes it more palatable than just plain broccoli.

EL: Yeah, okay. Well, and honestly, I could kind of see that, right. Because, like, it can be really that third wheel feeling when you’re hanging out with two people who know each other better than, you know either of them or something. Yeah. So actually, I feel, yeah, if you were making a committee for something, I could see why you might want to do this. I feel like a lot of people are not so thoughtful about making their committees that they would actually be like, “Will the social dynamics of this committee be conducive to…?”

YD: This is why my husband and I don't host cocktail parties, because my way of doing it is like, let's just invite everyone we know. And he's like, no, but what if someone feels left out? And then he gets stuck in the graph theory of our cocktail party and then it doesn't happen.

EL: And he's not even a mathematician, right?

YD: Yeah.

EL: Should have been, turns out.

YD: Yes, that's true. Stupid computers.

EL: Yeah. So when you make broccoli with cheese sauce, how do you make it. Are you a broccoli steamer? Do you roast it?

YD: Definitely, if it's going to have cheese sauce on it, you've got to steam it. But generally, we're more roasters, because I prefer it roasted with garlic and olive oil.

EL: Okay.

YD: So delicious. Broccoli with cheese sauce is really a last resort. It's like, man, the kids have not eaten anything green in like a week

EL: They need a vitamin.

YD: Let’s give them some broccoli.

EL: So one of our favorite recipes is roasted broccoli with this raisin vinaigrette thing. You put vinegar and raisins, and maybe some garlic, and a couple other things in a blender.

YD: Wait, so you blend the raisins?

EL: Yeah, you make a gloppy sauce out of the raisins and everything. And I don't think you plump them first or anything. I mean, usually I kind of get in a hurry, and I'll put all the ingredients in, and then go do something else, and then come back. So maybe they plump a little from the vinegar. But yeah, it makes like a pasty kind of thing. It kind of looks like olive tapenade. And I have actually accidentally mistaken the leftover sauce in the fridge for olive tapenade and been a bit disappointed. You know, if you're expecting olives, and you're eating raisins instead, you're just not as happy. But yeah, it's a really good recipe, if you want to expand your broccoli horizons. Maybe not as kid-friendly.

YD: Actually, my kids do love raisins. So maybe if I put raisins on top of broccoli, they would like it more.

EL: Yeah, I think there's some cumin in it too, or something. And we're talking about recipes because both of us like to cook a lot. And in fact Yen's blog is called Baking and Math. And it's not, like, baking with math. Like, there's baking, and there's math.

YD: Yeah, it’s a disjoint union. It doesn’t make that much sense, but I'm still a big fan of it. And it's actually how we met.

EL: Yes.

YD: Yeah. Because you found me on the internet.

EL: Yeah, I found you on the internet. And it was when I was writing for the AMS Blog on Math Blogs. And I was like, this is a cool blog. And yeah, then we became internet friends. And then I realized a couple of years later, like, I feel like I know this person, but we've never actually met. We met at Cornell, at the Cornell Topology Festival, and I was like, “Wow, you're tall!” I just realized I always think people are either shorter than I think or taller than I think unless they're exactly my height, because I think my—

YD: You expect everyone to be your height?

EL: Yeah, my default, the blank-slate version, is like, “Oh, this person is the same height as I am.” So yeah, I was like, oh, you're taller than I am. And I expected you to be exactly my height because I have no imagination.

YD: I’m trying to think if I was surprised by, maybe, no, I don't think you had blue hair, maybe you did? No.

EL: No, I probably had blond hair at that point, yeah.

YD: I remember we did acro yoga when we first met. That's a good thing to do when you first meet someone.

EL: Yeah.

YD: It was very scary. It wasn't a leap of faith, exactly, but so is meeting a stranger on the internet.

EL: Yeah. But luckily we’re both great people.

YD: Yeah. I also signed up for that conference because you tweeted that you were going to go, and I thought, “Oh, I might as well sign up, and then I can meet you.”

EL: I should have asked for a commission from the festival, although they probably paid for your travel, so it'd be like a reverse commission. So people can find your writing at your blog, Baking and Math. They can find you on Twitter; you're yenergy. And where can they find your science and health writing?

YD: So I post a lot of my clips on my professional website, that's yenduong.com, and then I also write for North Carolina Health News, which is exactly what it sounds like: North Carolina health news.

EL: Yeah, I'm sure a lot of people are. I read them, and I'm not in North Carolina, but I have a body, so I am interested in health news.

YD: Yeah.

EL: So thanks a lot for joining me.

YD: Thanks for having me. It was super fun. Fun fact for podcast listeners: Evelyn and I did not know where to look during this conversation. We couldn’t tell, should we look at each other or at the recording device?

EL: Yeah, so we did some of both. All right. Bye.

YD: Bye.

Episode 30 - Katie Steckles

Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m your host Evelyn Lamb, and I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: I’m all right. I had a lovely walk today. And there are, there’s a family of quail that is living in our bushes outside and they were parading around today, and I think they're going to have babies soon. And that's very wonderful.

KK: Speaking of babies, today is my son's birthday.

EL: Who’s not a baby anymore.

KK: He’s 19. Yeah, so still not the fun birthday, right? That's another two years out.

EL: Yes, in this country.

KK: In this country, yes. But our guest, however, doesn't understand this, right?

EL: Yes. Today we are very happy to have Katie Steckles from Manchester, England, United Kingdom. So hi, Katie. Can you tell us a little about yourself?

Katie Steckles: Hi. Well yeah I'm a mathematician, I guess. So I did a PhD in maths and I finished about seven years ago. And now my job is to work in public engagement. So I do events and do talks about maths and do workshops and talk about maths on YouTube and on the TV and on the radio and basically anywhere.

KK: That sounds awesome.

EL: Yeah, you’re all over the place.

KK: Yeah, that sounds like great fun, like no grading papers, right?

KS: A minimal amount of, yeah, I don’t think I’ve had to grade anything, no.

EL: Yeah, and you have some great YouTube videos. We’ll probably talk more about some of them later. Yeah. And I stayed at your apartment a few years ago, or your flat, in Manchester. Quite lovely. And yeah, it's great to have you on here and to talk with you again. So what is your favorite theorem?

KS: Okay, my favorite theorem is what's called the fold and cut theorem, which is a really, really nice piece of maths which, like the best bits of maths, is named exactly what it is. So it's about folding bits of paper and cutting them. So I first encountered this a couple years ago when I was trying to cut out a square. And I realize that's not a very difficult task, but I had a square drawn on a piece of paper and I needed to cut out just the square, and I also needed the outside bit of paper to still be intact as well. So I realized I wasn't going to be able to just cut in from the edge. So I realized that if I folded up the bit of paper I could cut the square out without kind of cutting in from the side, and then I realized that if I folded it enough I could do that in one cut, one straight line would cut out the whole square. And I thought, “That’s kind of cool. I like that, that’s a nice little bit of maths.” And I showed this to another friend who’s also a mathematician, and he was basically like, “Isn't there a theorem about this?” I thought, “Ooh, maybe there is,” and I looked, and the fold and cut theorem basically says that for any figure with straight line edges, you can always fold a piece of paper with that figure drawn on it so that you can cut out the whole thing with one cut, even if it's got more than one bit to it or a hole in it or anything like that. It's always possible with one cut, in theory.

EL: Yeah. So you discovered a special case of this theorem before even knowing this was a thing to mathematically investigate.

KS: Yeah, well, I was cutting out a square for maths reasons, because that's everything I do. But I was actually trying to make a flexagon at the time, which, as I'm sure you've all been there, was just because I needed this square hole. And I thought it was such a satisfying thing to see that it was possible in one cut. And my maths brain just suddenly went, “How can I extend this? Can I generalize this to other shapes?”

KK: Sure.

KS: And it was just a nice kind of extension of that.

EL: Yeah. So I have a question for you. Did you, was your approach to go for the, like diagonal folds, or the folds that are parallel to the sides?

KS: Yeah, this is the thing. There are actually kind of two ways to do a square. So you can do, like, a vertical and a horizontal fold, and then you get something that needs two cuts, and then you can make one more diagonal fold and end up with the thing that you can do in one cut. But you can actually do it in just two folds, if you do two diagonal folds; it's just a longer cut. I don't know what the payoff is there. It depends on how much time you want to spend cutting, I don't know.

EL: Okay.

KK: I was thinking as you were describing this: I know about this theorem, but I've never actually done it in practice, never really tried. As soon as you said the square, I started thinking, “Okay, what would I do here?” You know, and I immediately thought to sort of fold along the diagonals. But in general, though, if you have some, you know, 75-sided figure, is there an algorithm for this?

KS: It’s pretty horrible, depending on how horrible the thing is. Like, simple things are nice, symmetrical things are really nice, because you just fold the whole thing in half and then just do the half of it. And so there are algorithms. So the proof is done by Eric Demaine and Martin Demaine. And they've essentially got, I think, at least two different algorithms for generating the fold pattern given a particular shape. So I think one of them is based around what they call the straight skeleton, which is, if you can imagine, you can shrink the shape in a very sort of linear way, so you shrink all of the edges down but keep them parallel to where they originally were, and you’ll eventually get to kind of a skeleton shape in the middle of the shape, and that's sort of the basis of constructing all the fold lines. And it sort of seems quite intuitive, because if you think about, for example, the square, all your folds are going to need to either be bisecting an angle or perpendicular to a straight edge. Because if it bisects the angle, it puts one side of the shape on top of the other one. And if you go perpendicular to the edge, it’s going to put the edge straight on top of the edge. And I always kind of think about it in terms of putting lines on top of where the lines are, because that's essentially what you're doing. If you've got a thin enough bit of paper and a thick enough line, you can actually physically see it happening. So it's beautiful. And then the other method they have involves disks in each corner of the shape, I think, and you expand the disks until they're as big as they can be and touch the other disks. And that then gives you a structure to generate a fold pattern. But they have got algorithms. I haven't yet managed to find a simple enough implementation where you can just upload the picture to a website and it will tell you the whole pattern, which is a shame, because I've come across some really difficult shapes that I would really like to be able to fold but haven't quite been able to do by hand. I'm just going, “Ah, I could just put some maths on this and throw it in a computer program!” But I actually asked Eric Demaine, because I was in email contact with him about this. And then the thing that happened was, there’s a TV show in the UK called Blue Peter. Their logo is like a giant boat that’s called the Blue Peter. It's a big ship with about 20 sails on it. And they said I could talk about this nice piece of maths, and maybe even try to cut out their logo with one cut. And I said to myself, “Goodness me!” Because it's all curves as well, so I’d have to approximate it all by straight lines and then work out how to cut this whole thing. So I emailed Eric Demaine, and I sent him the picture and asked him, “Do you have a program that you can use to just, you know, take a figure, even if I send you the shape of the edge or whatever?” And in his reply, he was like, “Wow, well, that looks... no.”
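For anyone reading along who wants to see that bisector observation concretely, here is a tiny sketch (ours, not Katie's or the Demaines' actual algorithm): folding the unit square along the diagonal y = x bisects the corner angle at the origin, and the fold, which is just a reflection, carries the bottom edge exactly onto the left edge, so one straight cut through the stacked edges severs both at once.

```python
# Folding along the diagonal y = x is a reflection that swaps coordinates.
def fold_along_diagonal(point):
    x, y = point
    return (y, x)

# Points on the bottom edge of the unit square land on the left edge,
# so after this fold one straight cut goes through both edges at once.
bottom_edge = [(0.0, 0.0), (0.25, 0.0), (0.5, 0.0), (1.0, 0.0)]
print([fold_along_diagonal(p) for p in bottom_edge])
# -> [(0.0, 0.0), (0.0, 0.25), (0.0, 0.5), (0.0, 1.0)], i.e. the left edge
```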

I just love the fact that they asked me to do something that not even the mathematician who proved that it's possible for any shape was prepared to admit would be easy. And so yeah, I'm not sure if there is one. I mean, I would love it. I’m not enough of a coder to be able to implement that kind of thing myself. I would love it if there was a way to, you know, put in a shape or word or picture and come up with a fold pattern. Yeah, I don't know if anyone's done that yet.

KK: Well, this is how mathematicians are, right? We just proved that a solution exists, you know, and then we walk away.

EL: And so I seem to remember you've done a video about this theorem. And one of the things you did in it was make a whole alphabet, making all of the letters out of one-cut shapes.

KS: Yeah, well, this was, I guess, my kind of Everest in terms of this theorem. This is one of the reasons why I love it so much, because I put so much time into this as a thing. So essentially, in the paper that Demaine and Demaine have written about this, they've got a little intro bit where they talk about applications of this theorem and times when it's been used. So I think it's maybe Harry Houdini who used to do a five-pointed star with one cut as part of his actual magic show. It's really impressive, and when people watch me do it, they go, “Wow, how do you do that?” Such a lovely little demo. They also mentioned in there that they heard of someone who could cut out any letter of the alphabet, and I saw that and thought, “Wow, that would be a really nice thing to be able to do!” You know, that would impress people, because it's kind of like, if you can do any shape, then the proof of that should be whatever shape you tell me, I can do. And of course, a mathematician would know that 26 things is not infinity things, but it's still quite a lot of things. It's an impressive demo. So I thought I would try and work that out. And I literally had to sit down and kind of draw out the shapes and work out where all the bits went and how to fold them. And some are easy, some are nice ones to start off with, like I and C and L. As long as you’ve got a square sort of version of it, they're pretty easy to imagine what you’d do. And then they get more difficult. So S is horrible, because there’s no reflection symmetry at all. It's just rotation symmetry, and you can't make any use of that at all. R is quite difficult, but not if you know how to do P, and P is quite difficult, but not if you know how to do F. And so it all kind of builds gradually. And I worked out all of these patterns, and in fact, it was one of the reasons I was in communication with Eric Demaine. Because he'd seen the video and he said, “As well as being mathematicians, we collect fonts, like we just love different fonts, typefaces, and we wondered if you could send us your fold patterns for your letters so that we can make a font out of them.”

EL: Oh wow.

KS: And I thought that was really nice, so they've got a list on their website of different fonts, and they’ve now got a fold-and-cut font which I’m credited for as well.

KK: Oh nice.

KS: So yeah, the video I did with Brady was for his channel Numberphile, which is, as I understand it, a hugely popular maths channel. I've done about five or six videos on there, and I've genuinely been recognized in the street.

EL: Oh wow. That’s amazing.

KS: I walked into a shop and the guy was like, “Are you Katie Steckles?” I said, “Yes?” Like, the customer service has gone way up in this place. And he said, “No, I’ve just been watching your video on YouTube.” It’s like, oh, okay. So that was nice. So Brady asked me to come and do a few videos, and that was one of the things I wanted to talk about. I said, “What do you want me to do? I mean, do you want me to spell out Numberphile or your name or whatever?” Brady, who’s Australian, said, “No, do the whole alphabet.” His exact words were, “If you're going to be a bear, be a grizzly.” A very Australian thing to say; he was basically saying, let's do the whole alphabet, it will be great. I think at that point it was early enough that I wasn't 100 percent sure I would get them all right, but the kind of thing that he has about his videos is that they always write maths down on brown paper, so he had this big pile of brown paper there, and he cut it all into pieces for me, one for each letter. And it was such a wonderful kind of way to nod to that tradition of using brown paper. But I just sat there folding them all, and he filmed the whole thing, and he put it in as a time lapse, and then I cut each one, one cut on each bit of paper, and opened them all up, and they all worked, so it was good. But it was this very long day crouched over a little table cutting out all of these letters. But people genuinely come and ask me about it because of that video, so that's quite nice.

EL: Yeah, well I think after I watched that video, I tried to do—I didn’t. H was my kryptonite. I was trying to fold that, and I just at some point gave up. Like I kept having these long spindles coming out of the middle bar that I couldn't seem to get rid of.

KS: I think somewhere I have a photograph of all of my early attempts at the S. It’s just ridiculous. Like it's just a Frankenstein's monster parade of villains, just horrific shapes that don't even look like an S, and like how did I get this?

But it kind of gave me a learning process, and I think it was maybe just a few weeks of solidly playing around with things. I think I had one night in a hotel room while I was away working, so there was no one else around. I just spent the whole evening folding bits of paper. I don't know what the maid who cleaned the room the next day thought. The bin was full of bits of cut-up paper. I've got a big stack of scrap paper at home, like old printouts and things I don't need, that I use for practicing the alphabet, because I go through a lot of paper when I’m practicing.

KK: This is a really fun theorem. So you know, another thing we like to do on this podcast is ask our guests to pair their theorem with something. So what have we chosen to pair the fold-and-cut theorem with?

KS: Wow. So I know that you often pair things with foodstuffs, so I'm going to suggest that I would pair this with my husband's chili and cheddar waffles.

EL: Okay.

KS: And I’ll tell you why. So my reasoning is that I kind of feel like this is a really nice example, as a theorem, of kind of the way that maths works and the way that theorems work. So my husband's chili is a recipe that he's been working on for years. He comes from a family where they do a lot of cooking, and it was natural for him when he moved out to just have his own kind of recipes. His chili recipe is so good that we've taken his chili to parties and people have asked for the recipe. And I'm just like, there isn't one. It's not written down anywhere. It's just in his head. He has this recipe. And he's obviously worked really hard on it and achieved this brilliant thing. And kind of the ability to do the alphabet, the ability to kind of make things using this theorem, for me, is my equivalent of that. It's my special skill I can show off to people with. Because, you know, I've put in that time and I've solved the problem. And one of my favorite things about maths is that it gives you that problem-solving kind of brain, in that you will just keep working at something, you keep practicing until you get there. And then the reason why I’ve paired it with cheddar waffles is (a) because that is a delicious combo.

EL: That sounds amazing.

KK: Yeah.

KS: Yeah. As soon as we got a waffle maker, our first thought was, “What can we put with this chili that will make it even better?” And I just found the recipe for cheddar waffles on the internet, because we don't do that many waffles; we don’t really know how to make them. And the fact that you can go online and just find a recipe for something is a really nice kind of aspect of modern life.

One of the things about maths I appreciate is that once you prove a theorem, it kind of goes into a toolbox, and other people can then, you know, look at that theorem and use it in whatever they're doing, and you kind of build your maths out of bits of things that other people have proved and bits of things that you're proving. It's sort of a nice analogy for that, I guess. So those are the two things about it. Now that we've got the fold-and-cut theorem, nobody needs to prove it again, and anyone can use it.

EL: Yeah. And I guess if it were a perfect analogy, in some ways, maybe the chili recipe is sort of like these algorithms for making them, they’re really—well maybe that’s not good because the algorithms seem really complicated and difficult. Here, it's more that the recipe is hidden in your husband's brain.

KK: Well, a lot of algorithms feel that way.

KS: It really is quite complex. He gets some things out of the cupboards that I've never seen before, and they all go back in again afterwards. There’s a lot to it that people don’t realize.

KK: It’s a black box. My chili recipe is a black box, too. I can't tell you what's in it. I mean, it’s probably not as good as your husband’s, though.

KS: It’s got roasted vegetables in it. Yeah, that's one of the main secrets if anyone's trying to recreate it. But then just a whole lot of other spices that only he knows.

EL: My husband doesn't like soups with tomatoes in them very much. I mean, sometimes he does, but I don't do chili very much, so I don't have a good chili recipe. We have a friend who's allergic to onions, and that's a nice exercise in, can you cook or modify your recipe and still have it taste like what it’s supposed to be? Because without onions, yeah, there are a lot of things that don't work, and she must have a nightmare with it. Because a lot of packaged foods, they've got it.

KK: Sure.

KS: They’ve got onion powder or stuff.

EL: Every restaurant.

KS: We made chili without, and it kind of works. It kind of works without onions. It was great. I think there was a bit more aubergine that went in and some new spices, just to give it a bit more oniony flavor, but it still works.

EL: Oh, nice. Yeah, cooking without onions is tough. Does it extend to garlic—does it generalize to other things in the allium family?

KS: Yeah, it’s all alliums, so she can’t really have garlic either. She can get away with a little bit of garlic, but not any reasonable amount. Yeah, it must be completely horrible. Actually, it kind of reminds me of Eugenia Cheng; her first book was about maths and baking. But one of the really nice points that she makes about the analogy between recipes and maths, which we have apparently stumbled into, is that, you know, understanding something in a maths sense means that you can take bits of it out and replace them with other things. You've got a particular problem and you go, “Okay, well, do we need to make this assumption, do we need this particular constraint? What happens if we relax this and then put something else in?” And that's how you explore kind of where you go with things. And if you relax a constraint and then find the solution, that maybe tells you something about the solution to the constrained problem, and things like that. So, you know, tweaking a recipe helps you to understand the recipe a bit more. And as long as you know roughly what goes in there and you've got something that is, you know, recognizably a chili, then, you know, it doesn't matter what you've changed, I guess.

KK: Yeah, so we also give our guests a chance to plug anything they're working on. You want to plug videos, websites, anything?

KS: Oh, I’m always working on a million different things. I guess probably the nicest thing for people to have a look at would be the Aperiodical, which is a website where I blog with two of my colleagues. It's kind of a maths blog, but aimed at people who are already interested in maths, so it's one of the few things I do that is not an outreach project. It’s aimed at people who are already interested and want to find out what's going on, so we sometimes write, like, opinion pieces about things or, like, “Here’s a nice bit of maths I found,” and then sometimes we just write news. And there’s a surprising amount of maths news, it turns out. It's not just “They’ve discovered a new Mersenne prime again.” There are various other maths news stories that come up as well, so we write those up, and bits of competitions and puzzles and things as well, and it's at aperiodical.com. And we take submissions. So if anyone else wants to write an article and have it go out on a blog that’s seen by, you know, a couple of thousand people a day or whatever, they’re welcome to send us stuff, and we’ll have a look at it.

EL: Yeah, it's a lovely blog, and you also organize and host the math blog carnival that is, like, every month a round-up of math blog posts and stuff like that.

KS: We sort of inherited that from whoever was running it before, the Carnival of Mathematics. Every month someone who has a maths blog takes a turn to write a post, which is essentially just a round-up of blog posts that went out that month. And we have the submissions form, and all the kind of machinery behind it is now hosted at the Aperiodical and has been for a few years. So if you have a maths blog elsewhere, and you want an opportunity to put a post on your site that will be seen by a bunch of people, because there's a bunch of people who just read it every month, then get in touch, because we're always looking for hosts for future months. And essentially we just forward to your email address all the submissions that people put in during the month, and you can then write it up in kind of the first week of the next month.

EL: Yeah. And I always see something cool on there that I had missed during the month. So it's a nice resource.

KS: So one of the other non-outreach, I guess, maths things that I'm involved with is a thing called Maths Jam. Or in the U.S. the equivalent would be Math Jam. We do have both websites, basically. So I coordinate all the Maths Jams in the world. It's essentially a pub night for people who want to go and do maths in a pub with people. It's aimed at adults, because a lot of kids already get a chance to go to math club at school and do maths puzzles and things in their classrooms, but adults who have finished school, finished university, don't often get that chance. So we basically go to the pub once a month, or to a bar or restaurant, somewhere that will allow us to sit around and drink and do maths. And there are now, I think, getting on for a hundred Maths Jams in the world. So we've got about 30 or 40 in the UK. And then they’re popping up all over. We just picked up one in Brazil, we’ve got three in Italy now, three in Belgium, and there are a few in the U.S. But what I'm going to say is that I’m very sad that we don't have more, because I feel like it would be really nice if we had a whole load of U.S. jams. I think we've got more in Canada than we have in the USA, which is interesting given the relative population sizes.

EL: Right.

KS: I think Washington DC has just gone on hiatus because not enough people are coming along. So the organizer said, “I'm getting fed up of sitting in the pub on my own. No one else is coming. I'm just going to put it on hold for now.” So if you live somewhere in the U.S. and you want to meet with people and do maths in an evening, essentially, to start one you just need a couple of people that you know you can drag along with you to sit around in case no one else turns up. And we send out a sheet with some ideas of puzzles and things to do. And you can play games, chat about maths, and do whatever. People can bring stuff along. And all you need to do to organize it is choose a bar and send the email once a month. Those are the only requirements. And go to the pub once a month, but I think that's probably not a big ask if that's the kind of thing you're into. So if anyone is interested, you can email katie@mathsjam.com and I can send you all the details of what's involved. You can have a look on the website, mathsjam.com, or math-jam.com, if you want to see what there is already, what’s near you.

EL: Yeah, it'd be nice to have more in the U.S.

KS: Yeah, well, I get a lot out of it, even though it's kind of sort of my job. I always meet people and chat through things and share ideas, and people always go, “Oh, that reminds me of this other thing I saw,” and they show me something I've not seen before. And it's such a nice way to share things. But also just to know that everyone else in the room is totally sympathetic to maths and will be quite happy for you to chat on about some theorem or whatever and not think you’re weird. It’s quite nice.

EL: Well thanks a lot for joining us. I enjoyed talking about the fold-and-cut theorem. It makes me want to go back and pick up that alphabet again and try to conquer Mount H, which felled me the last time.

KS: I can send you a picture of my fold pattern for it, but I’m sure you would much rather work it out for yourself. It’s such a lovely puzzle. It's a really nice little challenge.

EL: Yeah, it’s fun.

Episode 29 - Mike Lawler

Kevin Knudson: Welcome to My Favorite Theorem. I’m Kevin Knudson, professor of mathematics at the University of Florida, and this is your other host.

EL: Hi. I’m Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah.

KK: How’s it going, Evelyn?

EL: How are you today?

KK: I’m okay. I’m a little sleepy. So before I came to Florida, I was at Mississippi State University, and I still have a lot of good friends and colleagues there, and that basketball game last night. I don’t know if you guys saw it, but that last minute shot, last second shot that Notre Dame hit to win was just a crusher. I’m feeling bad for my old friends. But other than that, everything’s great. Nice sunny day.

EL: Yeah, it’s gray here, so I’m a little, I always have trouble getting moving on gray mornings.

KK: But you’ve got that nice big cup of tea, so you’re in good shape.

EL: Yes.

KK: That’s right. So today we are pleased to welcome Mike Lawler. Mike, why don’t you introduce yourself and tell everyone about yourself?

ML: Hi. I’m Mike Lawler. I work in the reinsurance division for Berkshire Hathaway studying large reinsurance deals. And I also spend a lot of my spare time doing math activities for kids, actually mostly my own kids.

KK: Yeah.

EL: Yeah.

KK: Yours is one of my favorite sites on the internet, actually. I love watching how you explain really complicated stuff to your kids. How old are they now? They’re not terribly old.

ML: They’re in eighth grade and sixth grade.

KK: But you’ve been doing this for quite a while.

ML: We started. Boy, it could have been 2011, maybe before that.

KK: Wow, right.

ML: I think all three of us on the podcast today, and probably everybody listening, loves math.

KK: One hopes.

ML: And I think there’s a lot of really exciting math that kids are really interested in when they see. It’s fun finding things that are interesting to mathematicians and trying to figure out ways to share them with kids.

EL: Yeah. Well, I like that you always make videos of the things, so listening to your kids talking through what they’re thinking is really fun. Recently I watched one of the old ones, and I was like, “Oh my goodness! They’re just little babies there.” They’re so much bigger now. I don’t have kids of my own, so I don’t have that firsthand look at kids growing up the same way. They’re sweet kids, though.

ML: I have to say, one of the first, it wasn’t actually the first one we did, but it’s called Family Math 1, where we do the famous “How many times can you fold a piece of paper?” And, you know, they’re probably 4 and 6 at the time, or maybe 5 and 7, and yeah, it’s always fun to go back and watch that one.

EL: Yeah.

KK: My son is now 18, he’s off in college. He’s a musician, so when I see videos of him at 10, figuring out how to play this little toy accordion we got him, I kind of get a little weepy. You know.

ML: It’s funny, I was picking him up somewhere the other day, and I confused him with a 20-year-old, my older son, and I just thought to myself: how did this happen?

KK: So, all right. Enough talking about kids, I guess. So, Mike, we asked you on to talk about your favorite theorem. So what is it?

ML: Well, it’s not quite a theorem, but it’s something that’s been very influential to me. Not in sharing math with kids, but in my own work. It comes from a 1995 paper by a professor named Zvi Bodie at BU. And he was studying finance, and continues to study finance. And he published a paper showing that the cost of insurance for long holdings in the stock market actually increases with time. Specifically, if you want to buy insurance to guarantee your investments at least earn the risk-free rate, that cost of insurance goes up over time. And it just shocked me when I was just learning about finance, actually when I was just in grad school. And this paper has had a profound influence on me over the last 20 years. So that’s what I want to talk about today.

KK: Okay. I know hardly any of those words. I have my retirement accounts and all that, but like most good quantitatively-minded people, I just ignore them.

ML: Well, let’s take a simple example. Let’s just take actually the most simple example. Say you wanted to invest $100 in the stock market, and you thought, because you’ve read or you’ve heard that the stock market gives you good returns, you thought, “Well, 10 years from now, I think I’ll probably have at least $150 in that account.” And you said, “Well, what I want to do is go out and buy some insurance that guarantees me that at least I’ll have that amount of money in the account.” That’s the problem. That’s the math problem that Bodie studied.

KK: Right. So how does one price that insurance policy, I guess? So right, on the insurance side, how do they price that correctly, and on the consumer side, how do you know you’re getting a worthwhile insurance policy, I guess.

ML: Yeah, well this is kind of the fun of applied mathematics. So there’s a lot of theory behind this, and I think, like a lot of good theories, it’s not named after the people who originally discovered it. So I think that’s an important part of any theory. But then when you understand the theory, and you actually go into the financial markets, you have to start to ask yourself, “What parts of the theory apply here, and which ones don’t?” So the theory itself goes back to the early 1900s with a French mathematician and his Ph.D. thesis. His last name is Bachelier, and I’m probably butchering that. But then people began to study random processes, and Norbert Wiener studied those. And eventually all of that math came into economics, I think in the late 60s, early 1970s, and something called the Black-Scholes formula came to exist. The Black-Scholes formula is what people use to price this kind of insurance, sometimes called options. So that’s been around the financial markets since at least the early 1970s, so let’s call it 50 years now. And if you’re a consumer, I think you’d better be careful.
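For listeners who want to see the formula in action, here is a minimal sketch of the standard Black-Scholes price of a European put, which is exactly this kind of insurance: the right to sell at a guaranteed price. The rate and volatility numbers below are hypothetical and are ours, not Mike's or Bodie's.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_put(s0, strike, r, sigma, t):
    """Black-Scholes price of a European put option.

    s0: stock price today, strike: the guaranteed sale price,
    r: risk-free rate, sigma: volatility, t: years to expiry.
    """
    cdf = NormalDist().cdf
    d1 = (log(s0 / strike) + (r + sigma**2 / 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return strike * exp(-r * t) * cdf(-d2) - s0 * cdf(-d1)

# Kevin's example with made-up parameters: insure a $100 investment
# so that it is worth at least $150 in 10 years.
print(round(black_scholes_put(100, 150, 0.03, 0.20, 10), 2))
```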

EL: Well I find, I don’t know a lot about financial math, but I’ve tried to read a few books about the financial crash, actually one of which you suggested to me, I think, All the Devils Are Here. And I find, even with my math background, it’s very confusing what they’re pricing and how they’re calculating these, how they’re batching all of these things. It just really seems like a black box that you’re just kind of hoping what’s in the box isn’t going to eat you.

ML: That’s a pretty good description. Yeah, Bethany McLean’s book All the Devils Are Here is absolutely phenomenal, and Roger Lowenstein’s book, called When Genius Failed, is also an absolutely phenomenal book. You are absolutely right. The math is very heavy, and a lot of times, especially when you talk about the financial crisis, the math formulas get misused a little bit, and maybe are applied in situations where they might not necessarily apply.

KK: Really? Wall Street does that?

ML: So you really have to be careful. I think if you pull the original Black-Scholes paper, I think there are 7 or 8 assumptions that go into it. As long as these 7 or 8 things are true, then we can apply this theory. In theory we can apply the theory.

KK: Right.

ML: So when you go into the financial markets, a lot of times if you have that checklist of 7 things with you, you’re going to find maybe not all 7 are true. In fact, a lot of times, maybe you’re going to find not a single one of those things is true. And that is I think a problem that a lot of mathematicians have when they come into the markets, and they just think the theory applies directly, if you will.

KK: Right, and we’ve all taught enough students to know they’re not very good at checking assumptions, right? So if you have to check off a list of 6 or 7 things, then after the first couple, you’re like, “Eh, I think it’s fine.”

ML: Right. Maybe that seventh one really matters.

KK: Right.

EL: Yeah.

ML: Or maybe you’re in a situation where the theory sort of applies 95% of the time, but now you’re in that 5% situation where it really doesn’t apply.

KK: So should I buy investment insurance? I mean, I’ve never directly done such a thing.

ML: Well…

KK: I don’t know if it’s an option for me since I just have 401Ks, essentially.

ML: Well, it’s probably not a great idea to give investment advice over a podcast.

KK: Right, yeah, yeah.

ML: But from a mathematical point of view, the really interesting thing about Bodie’s paper is that Black-Scholes is indeed a very complicated mathematical idea, but the thing that Bodie found was a really natural question to ask about pricing this kind of insurance, ensuring that your portfolio would grow at the risk-free rate. In that situation, and you can see it in Bodie’s paper, the math simplifies tremendously. And I think that is a common theme across mathematics. When someone finds exactly the right way to look at a problem, all of a sudden the problem simplifies. And I’m sure you can probably give me 3 or 4 examples in your own fields where that is the case.

KK: Sure. Well, I’m not going to, but yeah.
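To see the simplification Mike is describing, set the strike to grow at the risk-free rate, so the guarantee is K = S0·e^(rT), as in Bodie's setup. One can check that the rate then cancels out of the Black-Scholes put price entirely, leaving S0 times (2Φ(σ√T/2) − 1), and that expression rises toward S0 as the horizon T grows. Here is a sketch with made-up numbers (ours, not a reproduction of Bodie's paper):

```python
from math import sqrt
from statistics import NormalDist

phi = NormalDist().cdf

def insurance_cost(s0, sigma, t):
    # Black-Scholes put struck at s0 * exp(r * t): the risk-free rate
    # cancels, leaving s0 * (2 * Phi(sigma * sqrt(t) / 2) - 1).
    return s0 * (2 * phi(sigma * sqrt(t) / 2) - 1)

# The cost of guaranteeing the risk-free return rises with the horizon.
for years in (1, 5, 10, 20, 40):
    print(years, round(insurance_cost(100, 0.20, years), 2))
```

The output climbs steadily with the number of years, which is the counterintuitive result Mike describes: insuring a longer horizon costs more, not less.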

EL: So, when you told us that this was the theorem, or quasi-theorem, you were going to talk about, it got me wondering how much—I’ve been trying to think about how to phrase this question—how much your natural tendencies as mathematicians actually carry over into finance. How much are you able to think about your work in finance and insurance as math questions, and how much do you really have to shift how you’re thinking about things to this more realistic point of view?

ML: I think it’s a great question, because, you know, the assumptions and a lot of times the mathematical simplifications that allow you to solve these differential equations that stand behind the Black-Scholes theorem, and generally stochastic processes, you know, that doesn’t translate perfectly to the real world. And you have to start asking questions like, “If this estimate is wrong, does it miss high? Does it miss low?” “In the 5% of the times it doesn’t work, do I lose all my money?”

EL: Right.

ML: I can tell you, as an undergrad I was also a physics major, and I spent a lot of time in the physics lab, and there’s not one single person who was ever in lab with me who misses me. I was a mathematician in the labs. But doing some of these physics experiments really teaches you that applying the theory directly, even in a lab situation, is very difficult.

KK: Right.

EL: Right. And your Ph.D. was in pure math, right?

ML: Right, it was sort of mathematical physics. In the late 90s, people were really excited about the Yang-Mills equations.

KK: Mirror symmetry.

ML: Work that Seiberg and Witten were doing. So I was interested in that.

EL: So your background is different from what you’re doing now.

ML: Oh, totally. You know, I, it’s kind of a hard story for me to tell, but I really loved math from the time I was in fifth grade all the way up through about my third year of graduate school.

EL: Yeah, I think that could be a painful story.

ML: I don’t know why, I really don’t know why, I just kind of lost interest in math then. I finished my Ph.D., and I even took an appointment at the University of Minnesota, but I just lost interest, and it was an odd feeling because from about fifth grade until—what grade is your third year of graduate school?

KK: Nineteenth.

ML: Nineteenth grade. I really got out of bed every morning thinking about math, and I sort of drifted away from it. But my kids have brought me back into it, so I’m actually really happy about that.

KK: Well that’s great. So, what have you chosen to pair with your quasi-theorem, we’re calling it?

ML: Well, you know, this paper of Bodie’s opened a new world for me, and for the last 20 years I’ve been studying it and learning more about it and all these different things, so I got to thinking about a journey. I have books on my table right now related to this paper. So the journey I want to highlight—and I think a lot of people outside of math can understand this—is an athletic journey. I’m going to bring up a woman named Anna Nazarov, who represents the United States on the national ultimate frisbee team, which is a sport I’ve been around. And four years ago, she made it almost to being on the national team and got cut at the last minute and wrote this very powerful essay about her feelings about getting cut, and then turned around and worked hard and improved and won three world and national championships in the last four years as a result of that work.

KK: Wow.

ML: Yeah, you know, it’s hard to compare world championships to just your plain old work. I think people in math understand that you kind of roll up your sleeves and over a long period of time you come to understand mathematics, or you come to understand in this case how certain mathematics applies, and so I want to pair this with that kind of athletic journey, which I think, to the general public, people understand a little bit better.

EL: Yeah, so I played ultimate very recreationally in grad school. There was a math department pickup ultimate game every week, and playing with other math grad students is my speed in ultimate. I really miss it. You, I can tell, follow ultimate, and I often read the links you post about ultimate frisbee, and I’m like, oh, I kind of miss doing that. But a few years ago, I happened to be in Vancouver at the same time that they were doing the world ultimate championships there and got to see a couple of games, and it’s really fun, and it’s been fun to follow the much-higher-level-than-math-grad-student ultimate playing thing through the things you’ve posted.

ML: Yeah, it’s neat to follow an amateur sport, or not as well-known a sport because the players work so hard, and they spend so much of their own money to travel all over the world. You know, I think a lot of people do that with math. Despite the topic of today’s conversation, most people aren’t going into math because of the money.

KK: Well this has been great fun. Thanks for joining us, Mike. Is there anything you’d like to plug? We always want to give our guests a chance to plug things. We already kind of plugged your website.

EL: We’ll put links to your blog in the show notes there, and your Twitter. But yeah, if there’s anything else you want to plug here, this is the time for it.

ML: No, that’s fine. If you want to follow Mike’s Math Page, it’s a lot of fun sharing math with kids. And like I said, I sort of lost interest in math in grad school, but sharing math with kids now is what gets me out of bed in the mornings.

KK: Great.

EL: Yeah.

KK: All right. Well, thanks again, Mike.

ML: Thank you.

Episode 28 - Chawne Kimber

Kevin Knudson: Welcome to My Favorite Theorem. I’m your cohost Kevin Knudson, professor of mathematics at the University of Florida. I am joined by cohost number 2.

Evelyn Lamb: I am Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City. So how are you?

KK: I’m okay. And by the way, I did not mean to indicate that you are number 2 in this.

EL: Only alphabetically.

KK: That’s right. Yeah. Things are great. How are things in Salt Lake?

EL: Pretty good. I had a fantastic weekend. Basically spent the whole thing reading and singing, so yeah, it was great.

KK: Good for you.

EL: Yeah.

KK: I didn’t do much. I mopped the floors.

EL: That’s good too. My floors are dirty.

KK: That’s okay. Dirty floors, clean…something. So today we are pleased to have Chawne Kimber on the show. Chawne, do you want to introduce yourself?

Chawne Kimber: Sure. Hi, I’m a professor at Lafayette College. I got my Ph.D. a long time ago at the University of Florida.

KK: Go Gators!

CK: Yay, woo-hoo. I work in lattice-ordered groups.

KK: Lattice-ordered groups, very cool. I should probably know what those are, but maybe we’ll find out what they are today. So yeah, let’s get into it. What’s your favorite theorem, Chawne?

CK: Okay, so maybe you don’t like this, but it’s a suite of theorems.

KK: Even better.

EL: Go for it.

CK: So, right, a lattice-ordered group is a group, to begin with, in which any two elements have a sup and an inf, so that gives you your lattice order. They’re torsion-free, so once you get past the countable ones, they’re enormous groups to work with. So my favorite theorems are the representation theorems that allow you to prove stuff, because the groups get unwieldy due to their size.

EL: Oh cool. One of my favorite classes in grad school was a representation class. I mean, I had a lot of trouble with it. It was just representations of finite groups, and those were still really out there, but it was a lot of fun. Really algebraic thinking.

CK: Well actually these representations allow you to translate problems from algebra to topology, so it’s pretty cool. The classical theorem is by Hahn in 1909. He proved the special case that any totally ordered Archimedean group can be embedded as a subgroup of the reals, and it kind of makes sense that you should be able to do that.

KK: Sure.

CK: And then he said that any ordered abelian group, so not necessarily lattice-ordered, can be embedded in what’s called a lexicographical product of the reals. So we could get into what that is, but those are called Hahn groups. They’re just huge products of the reals that are ordered in dictionary order that only live on well-ordered sets. So this is actually a theorem, but then there’s a conjecture that the theorem is actually equivalent to the axiom of choice.
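As a toy illustration of the dictionary order (our example, not one from the episode): pairs of integers under coordinatewise addition, compared lexicographically, form an ordered abelian group that is not Archimedean, because no multiple of (0, 1) ever gets past (1, 0).

```python
# Python tuples already compare in dictionary (lexicographic) order.
# (0, 1) is positive, yet n * (0, 1) = (0, n) stays below (1, 0) for
# every n, so this ordered abelian group is not Archimedean.
for n in (1, 100, 10**9):
    print(n, (0, n) < (1, 0))  # prints True every time
```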

KK: Wow.

EL: Oh wow.

CK: Right?

EL: Can we maybe back up a little bit? For me, I really like concrete examples, so maybe can you talk a little bit about a concrete example of one of these Archimedean groups? I don’t know how concrete the concrete examples are.

CK: No, they’re just really weird ways of hacking at the reals, basically, so they’re just subgroups of the reals. Think of your favorite ones, and there you go, those are the ones that are Archimedean. And as soon as you add two dimensions of ordering, it’s even more complex, right? So the classical example that I work with would be rings of continuous functions on a topological space, and then you can build really cool examples because we all understand continuous functions. So C(X), real-valued continuous functions on a Tychonoff space, so T-3 1/2, whatever.

KK: Metric space.

CK: The axioms are there so you have enough continuous functions. So Gillman and Jerison in the 1950s capitalized on a theorem from the 1930s by Gelfand and Kolmogorov that said that the maximal ideals of C(X), if you take them in the hull-kernel topology, are isomorphic to the Stone-Čech compactification of the space that you’re working on. And so if you have a compact space to begin with, then your space is isomorphic to your maximal ideals. So then, just build your favorite—so C(X) is lattice-ordered if you take the pointwise ordering, and then since the reals have a natural order on them, you pick up your sups and infs pretty easily. So there you’re starting to touch some interesting examples of these groups. Have I convinced you, Evelyn?
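To make the pointwise ordering concrete, a small sketch of ours: the sup and inf in C(X) are just pointwise max and min, and the max or min of two continuous functions is again continuous.

```python
# Pointwise lattice operations on C([0, 1]): sup is max, inf is min.
f = lambda x: x
g = lambda x: 1 - x
sup_fg = lambda x: max(f(x), g(x))  # a continuous V shape, valley at 1/2
inf_fg = lambda x: min(f(x), g(x))  # a continuous tent, peak at 1/2

print([sup_fg(k / 4) for k in range(5)])  # [1.0, 0.75, 0.5, 0.75, 1.0]
print([inf_fg(k / 4) for k in range(5)])  # [0.0, 0.25, 0.5, 0.25, 0.0]
```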

EL: Yeah, yeah.

CK: Okay, good. So they’re huge. You have to have some complexity in order to be able to prove anything interesting about them. So then the Hahn embedding is pretty obvious. You just take the images of the functions. There’s too much structure in a ring like that, so maybe you want to look at just an ordered group to get back to the Hahn environment. So how can you mimic Hahn in view of Gelfand-Kolmogorov? So can we get continuous functions as the representation of an ordered group? Because the lex products that Hahn was working with are intractable in a strong way. And so then you have to start finding units, because you have to be able to define something called a maximal sub-object, so you want it to be maximal with respect to missing out on some kind of unit. And so then we get into a whole series of different embedding theorems that are trying to get you closer to being able to deal with the conjecture I mentioned before, that Hahn’s embedding theorem is equivalent to the axiom of choice.

EL: Yeah, I’m really fascinated by this conjecture. It kind of seems like it comes out of nowhere. Maybe we can say what the axiom of choice is and then, is there a way you can kind of explain how these might be related?

CK: Yes and no.

KK: Let’s start with the axiom of choice.

CK: Yeah, so the axiom of choice is equivalent to Zorn’s lemma, which says that maximal objects exist. So that’s the way that I deal with it. It allows me to say that maximal ideals exist, and if they didn’t exist, these theorems wouldn’t exist. You use this everywhere in order to prove Hahn’s theorem, so that’s why it’s assumed to be possibly equivalent. This isn’t the part that I work on. I’m not a logician.

KK: So many things are equivalent to the axiom of choice. For example, the Tychonoff product theorem, which is that the product of compact spaces is compact. That’s actually equivalent to the axiom of choice, which seems a bit odd. I was actually reading last night, so Eugenia Cheng has this book Beyond Infinity, her most recent book, good bedtime reading. I learned something last night about the axiom of choice, which is that you need the axiom of choice to prove that a countable collection of countable sets has a countable union. If the sets come with an order, then fine, but imagine pairs of socks, an infinite collection of pairs of socks: is that countable? Are the socks countable? It’s an interesting question, these weird slippery things with the axiom of choice and logic. They make my head hurt a little bit.

CK: Mine too.

EL: So yeah, you’re saying that looking at the axiom of choice from the Zorn’s lemma point of view, that’s where these maximal objects are coming in in the Hahn conjecture, right?

CK: Absolutely.

KK: That makes sense.

CK: That’s kind of why I drew the parallel with this theorem about C(X), these maximal ideals being equivalent to the space you’re on. Pretty cool.

KK: Right. Because even to get maximal ideals in an arbitrary ring, you really need Zorn’s lemma.

CK: Right. And there’s a whole enterprise of people working to see how far you can peel that back. I did take a small foray into trying to understand gradations of the axiom of choice, and that hurts your head, definitely.

KK: Right, countable axiom of choice, all these different flavors.

CK: The Boolean prime ideal theorem, right.

KK: Yeah, okay.

EL: So what drew you to these theorems, or what makes you really excited about them?

CK: Well, you know, as a super newbie mathematician back in the day, I was super excited to see that these disparate fields of algebra and topology that everyone had told me were totally different could be connected in a dictionary way. So a characteristic of a ring is equivalent to a characteristic of a topological space. So all kinds of problems can be stated in these two different realms. They seem like different questions, but they turn out to be equivalent. So if you just know the way to cross the bridge, then you can answer either question depending on which realm gives you the easier approach to the theorem.

KK: I like that interplay too. I’m a topologist, but I’m a very algebraic one for exactly that reason. I think there are so many interesting ideas out there where you really need the other discipline to solve it, or looking through that lens makes it a lot clearer somehow.

EL: And was this in graduate school that you saw these, or as a new professor?

CK: Definitely grad school. I was working on my master’s.

KK: So I wonder, what does one pair with this suite of theorems?

CK: It’s a very hard question, actually.

KK: That’s typical. Most people find this the more difficult part of the show.

CK: Yeah. I think that if you were to ask my Ph.D. advisor Jorge Martinez what he would pair, he is very much a wine lover and an opera lover, so it would be both. You’d probably see him taking a flask into Lincoln Center while thinking about theorems. He loved to go to Tuscany, so I assume that’s where you get chianti. I don’t know, I could be lying.

KK: You do, yeah.

CK: Yeah, so let’s go with a good chianti, although that might make me sound like Hannibal Lecter.

KK: No fava beans.

CK: So we’ve got a chianti, and maybe a good opera because it’s got to be both with him. It’s hard for me to say. So he comes up to New York to do an opera orgy, just watching two operas per day until he falls down. I sometimes join him for that, and the last one I went to was Così fan tutte, and so let’s go with that because that’s the one I remember.

EL: If I remember correctly—it’s been a while since I saw or listened to that opera—there are pairs of couples who end up in different configurations, and it’s one of these “I’ll trick you into falling in love with the other couple’s person” that almost seems like the pairs being topology and algebra, and switching back and forth. I don’t know, maybe I’m putting ideas in your mind here.

CK: Or sort of the graph of the different couplings, the ordered graph could be the underlying object here. You never know.

EL: An homage to your advisor here with this pairing.

CK: Yeah, let’s do that.

EL: Well I must admit I was kind of hoping that you might pair one of your own quilt creations here. So I actually ran into you through a quilting blog you have called completely cauchy. Do you mind talking to us a little bit about how you started quilting and what you do there because it’s so cool.

CK: Yeah. Of course I chose that name because Cauchy is my favorite mathematician, and, nerd that I am, of course my quilt blog would be named after a dead mathematician. So I am a little mortified that when you google “Cauchy complete,” as many students do, mine is actually the first entry that comes up on Google.

KK: Excellent.

CK: I don’t know what that means, but okay. So yeah, when I applied for tenure, which is kind of a hazing process no matter where you are, no matter how good of a faculty member you are, I really wanted to have control, and you don’t have control at that point. And so I started sewing for fun, late at night, at 1 am, after everything kind of felt done for the day. I never imagined that I’d be doing what I’m doing today, which is using quilting to confront issues of social justice in the United States, and they’ve been picked up by museums and other venues. It’s this whole side hustle out there that I kept quiet for a long, long time. And then once I got promoted to full professor I came out of the closet.

KK: Were you concerned that having a side hustle, so to speak, would compromise your career? Because it shouldn’t.

CK: Yeah, I think something so gender-specific as quilting, something you associate with grandmas. At the end of the day, I must say half of my quilts have four-letter words on them, you know, the more interesting four-letter words, so as soon as the guys I work with saw them, they were totally on board with this enterprise, so I didn’t really need to be in the closet, but I didn’t want anybody to ever say, “Oh, she should have proved one more theorem instead of making that quilt.”

EL: Yeah.

KK: It’s unfortunate that we feel that way, right? I think that’s true of all mathematicians, but I imagine it’s worse for women, this idea that you have to work twice as hard to prove you’re half as good or something like that?

CK: Do we need to mention I’m also a black woman? So that’s actually how I was raised, you need to do three times as much to be seen as half as good, and that’s the way that I’ve lived my life, and it’s not sustainable in any way.

KK: No, absolutely not.

EL: But yeah, they are really cool quilts, so everyone should look at completely cauchy, and that’s spelled c-a-u-c-h-y, after the mathematician Cauchy. I actually have another mathematician friend who had a cat named Cauchy. I think the cat has passed away. Yeah, and I actually sew as well. I’ve somehow never had the patience for quilting. It just feels somehow like too little. I like the more immediate gratification of making a whole panel of a skirt or something. You do really intricate little piecing there, which I admire very much, and I’m glad people like you do it so I don’t have to.

KK: Sure, but Evelyn, you don’t have to make it little.

CK: You don’t.

KK: I’m sure you’ve seen these Gee’s Bend quilts, right? They’re really nice big pieces, and that can have a very dramatic effect too. But yeah, the intricate work is really remarkable. My wife has done a little quilting, and she always gets tired of it because of the fine stuff, but then she’s a book artist. She sets lead type in her printing press by hand, and that’s fine, but piecing together little pieces of cloth somehow doesn’t work.

CK: It seems more futile: you take a big piece of fabric and cut it into small pieces so that you can sew it back together. That is kind of dumb when you think about it.

KK: Well, but I don’t know, you’ve got this whole Banach-Tarski thing, maybe.

EL: Bring it back around to the axiom of choice again.

CK: You guys are good at this.

KK: It’s not our first podcast. Well this has been great fun. Anything else you want to promote?

CK: No, I’m good.

KK: Thanks for joining us, Chawne. This has really been interesting, and we appreciate you being on.

CK: Great. Thank you.

EL: Thanks.

Episode 27 - James Tanton

Kevin Knudson: Welcome to My Favorite Theorem. I’m one of your hosts, Kevin Knudson, professor of mathematics at the University of Florida, and here is your jet-lagged other host.

Evelyn Lamb: Hi, I’m Evelyn Lamb, a freelance math and science writer in Salt Lake City. I’m doing pretty well right now, but in a few hours when it’s about 5 pm here, I think I will be suffering a bit. I just got back from Europe yesterday.

KK: I’m sure you will, but that’s the whole trick, right? Just keep staying up. In a couple of weeks, I’m off to that part of the world. I’ll be jet-lagged for one of the ones we have coming up right after that. That should be fun. I’ll feel your pain soon enough, I’m sure.

EL: Yeah.

KK: So today we are pleased to welcome James Tanton. James, why don’t you introduce yourself and tell everyone about yourself?

James Tanton: Hello. First of all, thank you for having me. This is such a delight. So who am I? I’m James Tanton. I’m the mathematician-at-large for the Mathematical Association of America, which is a title I’m very proud of. It’s a title I never want to give up, because who doesn’t want to be a mathematician-at-large, wreaking havoc wherever one steps? But my life is basically doing outreach in the world and promoting joyous thinking and doing of mathematics. I guess my background is somewhat strange. You can probably tell I have an accent. I grew up in Australia and came to the US 30 years ago for my Ph.D., which was grand, and I liked it so much that here I am 30 years later. My career has been kind of strange. I was in the university world for close to 10 years, and then I decided I was really interested in the state of mathematics education at all levels, and I decided to become a high school teacher. So I did that for 10 years. Now my life is actually working with teachers and college professors all across the globe, usually talking about, let’s make the mathematics our kids experience, whatever level they’re at, really, truly joyous and uplifting.

EL: Yeah, I’ve wondered what “mathematician-at-large” entails. I’ve seen that as your title. It sounds like a pretty fun gig.

JT: So I was the MAA mathematician-in-residence for a good long while. They were very kind to offer me that position. But then, I’m married to a very famous geophysicist, and my life is really to follow her career. She was off to a position at ASU, and so off we moved to Phoenix four years ago. So I said to the folks at the MAA, “Well, thanks very much. I guess I’m not your mathematician-in-residence anymore,” and they said, “Why don’t you be our mathematician-at-large?” That’s how that title came up, and of course I so beautifully, graciously said yes because that’s spectacular.

KK: Yeah, that sounds like a Michael Pearson idea, that he would just go, “No, no, we really want to keep you.”

JT: It’s so flattering. I’m so honored. It’s great because, you know, actually, it’s the work I was going to do in any case. I feel compelled to bring joyous mathematics to the world.

KK: Right. Okay, so this podcast is about theorems. So why don’t you tell us what your favorite theorem is?

JT: Okay, well first of all, I don’t actually have a theorem, even though I think it should be elevated to the status of one. I want to talk about Sperner’s lemma. So a lemma means, like, an auxiliary result, a result people use to get to other big ideas, but you know what? I think it’s charming in and of itself. So Sperner’s lemma. This was invented slash discovered back in the 1920s by a German mathematician by the name of Emanuel Sperner, who was playing with some combinatorial thinking in Euclidean geometry and came up with this little result.

Let me describe to you in one way first, not really the way he did it, because then I can actually explain a proof as well of the result. Imagine you have a big rubber ball, just a nice clean rubber surface, and you’ve got a marker. I’m going to suggest you just put dots all over the surface of the rubber ball, lots of dots all over the place. Once you’ve done that to your satisfaction, start connecting pairs of dots with little line segments. They’ll be little circular arcs, and make triangles, so three dots together make a triangle. Do that all over the surface of the sphere. Grand. So now you’ve got a triangulated sphere, a surface of a sphere completely covered with triangles. Each triangle for sure has only three dots on it, one in each corner, so no dots in the middle of the edges, please. All right? That’s step one.

Step two, just for kicks, go around and label some of those dots with the letter A, randomly, and some other dots with the letter B, randomly, and some other dots with the letter C (why not?), until each dot has a label of some kind, A, B, or C. And then admire what you’ve done. I claim that if you look at the various triangles you have, you’ll see some labeled BBB, and some labeled BCC, and some labeled BBA, and whatever, but if you find one triangle that is fully labeled ABC, I bet, in fact I know you are guaranteed, that by looking further you’ll find another triangle that’s labeled ABC. Sperner’s lemma says that on the surface of a sphere, if there’s one fully labeled triangle, there’s guaranteed to be at least one other.
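
[The guarantee here is really a parity fact: on a closed surface, the number of fully labeled ABC triangles is always even. Below is a minimal brute-force check of that on the smallest handy triangulated sphere, an octahedron; the face list, vertex indexing, and function name are illustrative choices of ours, not anything from the episode.]

```python
import itertools

# Faces of an octahedron, a small triangulation of the sphere.
# Vertices 0..5 are indexed arbitrarily (a hypothetical choice).
FACES = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
         (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]

def fully_labeled(labels):
    """Count faces whose three corners carry all of A, B, and C."""
    return sum(1 for face in FACES
               if {labels[v] for v in face} == {"A", "B", "C"})

# Try every labeling of the six vertices: the count of ABC faces is
# always even, so one ABC triangle forces at least a second one.
for labels in itertools.product("ABC", repeat=6):
    assert fully_labeled(labels) % 2 == 0
print("No labeling of the octahedron has an odd number of ABC faces.")
```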

EL: Interesting! I don’t think I knew that, or at least I don’t know that formulation of Sperner’s lemma.

JT: And the reason I said it that way, I can now actually describe to you why that is true because doing it on the surface of a sphere is a bit easier than doing it on a plane. Would you like to hear my little proof?

KK: Let’s hear it.

EL: Sure!

JT: Of course the answer to that has to be yes, I know. So imagine that these are really chambers. Each triangle is a room in a floor design on the surface of a sphere. So you’re in a room, an ABC room. You’ve got three walls around you: an AB wall, a BC wall, and an AC wall. Great. I’m going to imagine that some of these walls are actually doors. I’m going to say that any wall that has an AB label on it is actually a door you can walk through. So you’re in an ABC room, and you currently have one door you can walk through. So walk through it! That will take you to another triangle room. This triangle room has at least one AB wall, the one you just walked through, and the third vertex will have to be an A, B, or C. If it’s a C, you’re kind of stuck because there are no other AB doors to walk through, in which case you just found another ABC room. Woo-hoo, done!

EL: Right.

JT: If it’s either A or B, then it gives you a second AB door to walk through, so walk through it. In fact, just keep walking through every AB door you come to. Either you’ll get stuck, and in fact the only place you can possibly get stuck is if there’s exactly one AB door, in which case it was an ABC triangle, and you found an ABC triangle. Or it has another door to walk through, and you keep going. Since there’s only a finite number of triangles, you can’t keep going on indefinitely. You must eventually get stuck. You must get stuck in an ABC room. So if you start in one ABC room, you’ll be sure to be led to another.

EL: Oh, okay, and you can’t go back into the room you started in.

KK: That was my question, yeah.

JT: Could you possibly return to a room you’ve previously visited? Yes, there’s a subtlety there. Let’s argue our way through that. Suppose you do revisit a room, and think of the first room you re-enter. You must have gone through one of its AB doors to get in. But if you’ve been in that room before, you already used its AB doors to go into and out of it, so the door you just came through has been used twice. That means the room you just came from was itself a previously revisited room. So if you think this is the first room you’ve visited twice, you’re wrong: it was actually the previous room that you first visited twice. Oh no, actually it was the one before that. There can be no first room that you visit twice. And the only way out of that paradox is that there can be no room that you visit twice.

EL: Okay.

JT: That’s the mind-bendy part right there.

EL: I feel like I need a balloon right now and a bunch of markers.

JT: You know, it’s actually fun to do it, it really is. But balloons are awkward. In fact, the usual way that Sperner’s lemma is presented, I’ll even not do it in the usual way. Sperner did it on a triangle. I’ll do it on any polygon. This time, this we can actually do with markers, and it’s really fun to actually do it. So draw a great big polygon on a page and then triangulate it. Fill its interior with dots and then fill in edges so you’ve got all these triangles filling up the polygon. And then randomly label the dots A, B, or C in a random, haphazard way. Make sure that you have an odd number of AB doors on the outside edge of that polygon. If you do that, no matter what you do, you cannot escape creating somewhere on the interior a fully labeled ABC triangle. The reason is, you just do this thing. Walk from the outside of the polygon through an AB door, an outside AB door, go along on a journey. If you get stuck, bingo! You’re on an ABC triangle. Or you might be led out another AB door back to the big space again. But if you have an odd number of AB doors on the outside, you’re guaranteed to have at least one of those doors not leading outside, meaning you’ve been stuck on the inside. It’s guaranteed to lead to an ABC triangle in the middle of the polygon.
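
[The polygon version can also be checked by brute force on a tiny example. The sketch below, with a square split into four triangles around a center vertex, is our own illustrative setup: whenever the boundary carries an odd number of AB edges, an interior ABC triangle appears.]

```python
import itertools

# A square (corners 0..3) triangulated around a center vertex 4.
TRIS = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
BOUNDARY = [(0, 1), (1, 2), (2, 3), (3, 0)]

def ab_doors(labels):
    """Count AB edges on the outside of the polygon."""
    return sum(1 for u, v in BOUNDARY if {labels[u], labels[v]} == {"A", "B"})

def abc_triangles(labels):
    """Count fully labeled ABC triangles."""
    return sum(1 for t in TRIS if {labels[v] for v in t} == {"A", "B", "C"})

# Every labeling with an odd number of outside AB doors has at least
# one fully labeled triangle, exactly as the lemma promises.
for labels in itertools.product("ABC", repeat=5):
    if ab_doors(labels) % 2 == 1:
        assert abc_triangles(labels) >= 1
print("Odd AB boundary count always forces an interior ABC triangle.")
```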

EL: Okay, and this does require that you use all three—is it a requirement that you use all three letters, or does the odd number of things…I don’t know if my question makes sense yet.

JT: There are no rules on what you do, except on the outside: please give me an odd number of AB doors.

EL: Okay.

JT: And there’s nothing special about the letters A and B. You could do an odd number of BC doors or an odd number of AC doors.

EL: Right.

JT: What you do on the interior is up to you. Label them all A, I dare you, and you’ll still find an ABC triangle.

EL: Okay.

JT: Isn’t that crazy?

KK: Okay, so why did Sperner care?

JT: Why did Sperner care? Well he was just playing around with this geometry, but then people realized, as one of your previous guests mentioned, the fabulous Francis Su, that this leads to some topological results, for example, the Brouwer fixed-point theorem, which people care about, and you should listen to his podcast because he explains the Brouwer fixed-point theorem beautifully.

EL: Yes, and he did actually mention to us in emails and stuff that he is actually quite fond of Sperner’s lemma also, so I’m sure he’ll be happy to listen to this episode.

JT: In some ways, Sperner’s lemma is kind of special because people knew Brouwer’s fixed-point theorem before Sperner, but they had very abstract nonconstructive proofs of the theorem. Fixed points exist when you crumple a piece of paper and throw it on top of itself, but you can know that and not know how to find them. Sperner’s lemma, if you think about it, is giving you a way to possibly find those ABC triangles. Just start on the outside and follow paths in. So it gives you a kind of hope of finding where those fixed points might actually lie, so it’s a very constructive type of thinking about a topological result that is proved abstractly.

One thing that Francis Su did not mention is the hairy ball theorem, which I think is a lovely little application of Sperner’s lemma, and which goes back to spheres. Spheres are how I was first thinking about Sperner’s lemma. So I don’t know if you know the hairy ball theorem. Take a tennis ball with its little fur, little hairs at, ideally, every point of the sphere. That’s not really possible, but we can imagine in our mind’s eye a hairy ball. Now try to comb those hairs flat, tangent to the surface all the way around, well, maybe with a little angle, something like that. As long as you don’t do anything crazy, so that it’s a nice, smooth, continuous vector field on the surface of the sphere, nearby hairs pointing in roughly the same direction, nothing abrupt going on, then you are forced to have a cowlick, that is, one hair that sticks straight up. That is, you are forced to have a tangent vector that is actually the zero vector. You can actually prove that with Sperner’s lemma.

EL: Wow.

JT: Yeah, and the way you do that is: choose one point, like the North Pole, and imagine a little magnet there, and you can imagine the field lines it makes, the magnetic field of a dipole, sorry, I have to think back to my physics days. So you’ve got these natural lines associated with that magnet all over the sphere. Now triangulate the sphere: just draw lots of little triangles all over it. At each vertex of the triangulation you have two directions, the direction the hair is pointing and the direction of the magnetic field, and you compare them. Basically you’ve got 360 degrees of possible differences of direction between those two things. If the difference is in the first 0-120 degrees of counterclockwise motion, label that vertex A. If it’s between the 120 and 240 mark, label it B. If it’s between the 240 and 360 mark, label it C. So there is a way to label the triangulation based on the directions of the hairs on the surface of the sphere. Bingo! You can arrange things at the pole so that there’s an ABC triangle there, and then Sperner’s lemma says there has to be some other ABC triangle somewhere on the sphere. That is, there’s a little small region where you’ve got three hairs trying to point in three very different directions. Now do finer and finer triangulations. The only way out of that predicament is that there’s got to be one hair trying to point in three directions at the same time, that is, the zero vector.
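
[As a tiny sketch, the labeling rule described here just asks which 120-degree sector the angle difference lands in; the function name below is our own hypothetical choice.]

```python
def sector_label(angle_deg):
    """Label a vertex by which 120-degree sector contains the angle
    between the hair's direction and the reference field's direction."""
    a = angle_deg % 360.0          # normalize to [0, 360)
    if a < 120.0:
        return "A"
    elif a < 240.0:
        return "B"
    return "C"

# e.g., a hair 200 degrees counterclockwise from the field gets label B
print(sector_label(200.0))   # B
```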

KK: That’s very cool.

EL: Yeah.

JT: I just love these. These things feel so tangible. I just want to play with them with my hands and make it happen. And you can to some degree. Try to comb a fuzzy ball. You have a hard time. Or look at a guinea pig. They’re basically fuzzy balls, and they always have a cowlick. Always.

KK: Are there higher-dimensional generalizations of this? This feels very much two-dimensional, but I feel there’s an Euler characteristic lurking there somewhere.

JT: Absolutely you can do this in higher dimensions. This works in any dimension. For example, to make it three-dimensional, take a polyhedron and fill it with tetrahedra. If there’s an odd number of ABC faces on the outside, then there’s guaranteed to be some ABCD tetrahedron in the middle. And so on in higher dimensions. And people of course play with all sorts of variations. For example, I’ll go back to two dimensions for a moment, back to triangles. If three different people create their own labeling scheme on the same triangulation, then there’s guaranteed to be one triangle where, if you use the first person’s label for the first vertex, the second person’s label for the second vertex, and the third person’s label for the third vertex, those labels are all different: an ABC triangle in this sort of mixed labeling scheme. They call these permutation versions of Sperner’s lemma and so forth. Just mind-bendy, and in higher dimensions too.

EL: So was this a love at first sight kind of theorem for you? What were your early experiences with it?

JT: So when did I first encounter it? I guess when I studied the Brouwer fixed-point theorem, and when I saw this lemma in and of itself—and I saw it in the light of proving Brouwer’s fixed-point theorem—it just appealed to me. It felt hands-on, which I kind of love. It felt immediately accessible. I could do it and experience it and play with it. And it seemed quirky. I liked the quirky. For some reason it just appealed to me, so yes, it appealed to all my sensibilities. And I also have this thing I’ve discovered about me and my life, which is that I like this notion that I’m nothing in the universe, that the universe has these dictates. For example, if there’s one ABC triangle, there’s got to be another one. I mean, that’s a fact. It’s a universal fact that despite my humanness I can do nothing about. ABC triangles just exist. And things like the “rope around the earth” puzzle: if you take a rope and wrap it around the Equator, then add 10 feet to the rope and re-wrap it, you’ve got 19 inches of space all the way around. What I love about that puzzle: if you do it on Mars, adding 10 feet to a rope around its Equator, it’s 19 inches of space. Do it for Jupiter: it’s 19 inches of space. Do it for a planet the size of a pea: it’s 19 inches of space. You cannot escape 19 inches. That sort of thing appeals to me. What can I say?
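
[For the record, the 19 inches comes out of one line of algebra: the extra clearance is the extra rope length divided by 2π, and the planet’s radius cancels out entirely.]

```latex
\Delta r \;=\; \frac{C + 10\,\mathrm{ft}}{2\pi} \;-\; \frac{C}{2\pi}
\;=\; \frac{10\,\mathrm{ft}}{2\pi}
\;\approx\; 1.59\,\mathrm{ft}
\;\approx\; 19\ \text{inches},
```

independent of the circumference C, whether the Equator belongs to Earth, Mars, Jupiter, or a pea.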

KK: So you are a physicist?

JT: Don’t tell anyone. My first degree was actually in theoretical physics.

KK: So the other fun thing we do on this podcast is we ask our guest to pair their theorem, or lemma in this case, with something. So what have you chosen to pair Sperner’s lemma with?

JT: You know, I’m going with a good old Aussie pavlova.

EL: Excellent.

JT: And I’ve probably offended all the people from New Zealand because they claim it’s their pavlova. But Australians say it’s theirs, and I’ll go with that since I’m an Aussie. And why that, you might ask?

EL: Well, first can we say what a pavlova is, in case our listeners don’t know? I only learned what this was a couple years ago, so I’m just assuming. I was one of the lucky people who learned about it by making one, which was delicious, so yeah.

JT: First of all, it’s the most delectable dessert devised by mankind, or invented, or discovered. I’m not sure if desserts are invented or discovered. That’s a good question there. So it’s a great big mound of meringue, you just build this huge blob of meringue, and you bake it for two hours and let it sit in the oven overnight so it becomes this hard, hard outer shell with a soft, gooey meringue center, and you just slather it with whipped cream and your favorite fresh fruits. And my favorite fruits for a pavlova are actually mango and blueberries together. That’s my dessert. But you know, come to think of it, there are actually two reasons. I happen to know it was invented in the 1920s, the same time Sperner came up with this lemma, which is kind of nice. But any time I bake one—I bake these things, I really enjoy baking desserts—it kind of reminds me of a triangulated sphere because you’ve got this mound of meringue, and you bring it out of the oven, and it’s got this crust that’s all cracked up, and it kind of looks like a triangulation of a polyhedron of some kind. So it has that parallel I really like. So pavlovas bring as much joy to my life as these quirky Sperner’s lemma type results, so that’s my pairing.

EL: So they’re not that hard to make, actually. I went to this Australia-themed potluck party a couple years ago, and I decided to bring one because I was looking for Australian foods. I was pretty intimidated when I saw the pictures, but I found a recipe that looked good, and it worked the first time, more or less. I think you can handle it.

JT: It is a showstopper, but it’s so easy to make. Don’t tell anyone, it’s ridiculously easy, and it looks spectacular.

KK: Yeah, meringues look like something difficult, but really, you just have to be patient and whip the whites into shape, and then that’s it. It works.

JT: Then you’re done. It kind of works. You can’t overcook it. You can undercook it, but then it’s just a goopy delicious mess.

KK: Right. So we also like to give our guests a chance to plug various things. I’m sure you’re excited to talk about the Global Math Project.

JT: Of course I’m going to talk about the Global Math Project. Oh my goodness. You know, when I mention I’m kind of a man on a mission to bring joyous, uplifting mathematics to the world, I’m kind of trying to live up to those words, which is kind of scary. But let me just say something marvelous, really marvelous and humbling, happened last October. We brought a particular piece of mathematics to the world, a team of seven of us, the Global Math Project team, not knowing what was going to happen. It was all volunteer, grassroots, next to no funding, we’re terrible at raising funding, it turns out. But it really was believing that teachers, given the opportunity to have a real joyous, genuine, human conversation about mathematics with their students, one that’s actually classroom-relevant, showing that classroom mathematics is a portal to the same mystery, delight, intrigue, and wonder, will take it. Teachers are our best advocates across the globe for espousing beautiful, joyous, uplifting mathematics. So we presented a piece of mathematics called Exploding Dots, and we invited teachers all around the globe to do that, to have just some experience on this topic with their students during Global Math Week last October, and they did. We had teachers from 170 different countries and territories, all of their own accord, reach out to about 1.77 million students just in this one week. Phenomenal. And this is school-relevant mathematics. So we’re doing it again! Why not?

EL: Oh, great!

JT: So this year, 10/10, we chose that date because it’s a universal date. No matter how you read it, it’s the tenth of October. We’re going to go up to 10 million students with the same story of Exploding Dots. So I invite you, please look up Global Math Project, go Google Exploding Dots. See what we’re bringing to the world. And of its own accord, in the last number of months, it’s now reached 4.6 million students across the globe, so 10 million students sounds outlandish, but you know what? We might actually do this. And it’s just letting the mathematics, the true, joyous mathematics, simply shine for itself, just getting out of its way. And you know what? It happens. Math can speak for itself. Welcome to Global Math Project.

EL: Yeah. We’ll include that in the show notes, for sure.

KK: In fact, this is June that we’re taping this, recording this. Taping? I’m dating myself. We’re recording this in June. So just this weekend Jim Propp had a very nice essay on his Mathematical Enchantments blog about this, about Exploding Dots. I’d seen some things about it, so I knew a little about it. It’s really very lovely, and as you say relevant.

JT: I’m glad you mentioned Jim Propp. I was about to give a shout-out to him as well because he wrote a beautiful piece, and it’s this Mathematical Enchantments blog piece for June 2018. Worth having a look at. Absolutely. What I love about this, it really shows, I mean, Exploding Dots is the story of place value, as simple as that. But it really connects to how you write numbers, what you’re experiencing in the early grades. It explains, if you think of it in one particular way, all the grade school algorithms one learns; it goes through all of high school polynomial algebra, which is just a repeat of grade five, but no one tends to tell people that. Why stop at finite things? Go to infinite things, go to infinite series and so forth, and start getting quirky. Not just playing with 10-1 machines with base 10 and 2-1 machines with base 2, start playing with 3-2 machines and discover base 1 1/2, start playing with 2-negative 1 machines and discover base -2, and you get to unsolved research questions. So here’s this one simple little story, just playing with dots in boxes, literally—like me playing with dots on a sphere; I seem to be obsessed with dots in my life—takes you on a journey from K through 5 through 8 through 12 to 16, and on, all in one astounding fell swoop. This is mathematics for you. Think deeply about elementary ideas, and well, it’s a portal to a universe of wonder.
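
[To make the machine idea concrete, here is a minimal sketch under our reading of the convention, which is an assumption: in an “n-m machine,” n dots in any box explode into m dots one box to the left, so the 2-1 machine writes base 2 and the 3-2 machine writes base 3/2. The function name is hypothetical.]

```python
def explode(dots, n=2, m=1):
    """Run an 'n-m' exploding-dots machine: n dots in a box explode
    into m dots one box to the left. The result reads off the input
    number in base n/m, most significant box first."""
    boxes = [dots]                 # rightmost box holds the input dots
    i = 0
    while i < len(boxes):          # explosions only propagate leftward
        while boxes[i] >= n:
            boxes[i] -= n
            if i + 1 == len(boxes):
                boxes.append(0)
            boxes[i + 1] += m
        i += 1
    return boxes[::-1]

print(explode(13))         # [1, 1, 0, 1]: thirteen in base 2
print(explode(10, 3, 2))   # [2, 1, 0, 1]: ten in base 3/2
```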

KK: That’s why we’re all here, right?

JT: Indeed. So let’s help the world see that together, from the young’uns all the way up.

KK: All right. Well, this has been great fun. I knew Sperner’s lemma, sort of in the abstract, but I never really thought of it too closely. So I’m glad that I can now prove it, so thank you for that.

EL: Yeah, I’m going to sit down and make sure you’re not pulling my leg about that. I think the odd number of AB’s is key here.

JT: Absolutely. The odd number of AB outside edges is key.

EL: Right.

JT: Because you could walk through the door and out a door, so pairs of them could cancel each other out. So play with them.

EL: I bet I will once I start drawing. I’ve been restraining myself from going over here to the side and drawing this whole time.

JT: Well you know, sketches work so well on a podcast.

KK: That’s right.

JT: Absolutely, play. That’s all mathematics should be, an invitation to play. Go for it.

EL: Yeah, thanks a lot for being here.

JT: My pleasure. Thank you so much.

[outro]

Episode 26 - Erika Camacho

Evelyn Lamb: Hello and welcome to My Favorite Theorem, a podcast where we ask mathematicians to tell us about their favorite theorems. I’m your host Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. This is your other host.

Kevin Knudson: I’m Kevin Knudson, a professor of mathematics at the University of Florida. How’s it going?

EL: Great. I’m excited about a new project I’m working on that is appropriate to plug at the beginning of this, so I will. So I’ve been working on another podcast that will be coming out in the fall, may already be out by the time this episode is out. It’s with the folks at Lathisms, that’s L-A-T-H-I-S-M-S, which is a project to increase visibility and recognition of Hispanic and Latinx mathematicians. And our guest today is going to be a guest on that podcast too, so I’m very excited to introduce our guest, who is Erika Camacho. Hi, Erika. Can you tell us a little bit about yourself?

Erika Camacho: Sure. So I’m an associate professor at Arizona State University. I’m a professor of applied mathematics, and my concentration is mathematical physiology, mainly focusing on the retina, modeling the retina and the deterioration of photoreceptors. And I’m at the west campus of Arizona State University, which is both a research- and student-focused institution, so it’s kind of like a hybrid between what you would call more of a research place and a liberal arts education.

EL: Cool.

KK: Very nice. Which city is that in?

EC: We’re in Glendale, the west valley of Arizona, Phoenix greater area.

EL: I was in Arizona not too long ago, and the time zone is always interesting there because it’s exactly south of Utah, but I was there after Utah and most of the country went to daylight saving time, and most of Arizona doesn’t observe that, so it was kind of fun. I also went through part of the Navajo Nation there that does observe daylight saving time, so I changed time zones multiple times just driving straight north, which was kind of a fun thing.

EC: It is very confusing. Let’s say you have an event that you’re going to, say somewhere in the Navajo Nation, and you don’t realize that you might miss some of your event because of the time change. You’re just driving, and you cross the border where it changes to a different time zone. It takes a while to get adjusted to. I missed a flight one time for the same reason. I was not aware that Arizona didn’t observe daylight saving. Now I’m aware.

KK: I actually have a theory that someone could run a presidential campaign, and their sole platform is that they would get rid of daylight saving time, and they would win in a landslide.

EL: I mean, people have won on less.

KK: Clearly.

EL: So Erika, we invited you here not to chat about time zones or presidents but to chat about theorems. So what is your favorite theorem?

EC: Before I say my favorite theorem, like I said, I am an applied mathematician. So I focus on modeling. And in modeling, there’s a lot of complexity, a lot of different layers and levels to what you’re trying to model. Many of the systems you develop as you create these models tend to be nonlinear. Many times I’m looking at how different processes change over time, and many of the processes I work with are continuous. So I work with differential equations, and they tend to be nonlinear. That’s where the complexity comes in, trying to analyze nonlinear systems. For the physiological systems that relate to the retina and retinal degeneration, one of the things we’re really asking is: what happens in the long run? How is it that photoreceptors degenerate over time, and can we do something to stop the progression of blindness or the progression of certain diseases that cause the photoreceptors to degenerate? So we’re really asking, what are the long-term solutions of the system, and how do they evolve over time? We’re looking for steady states. We’re looking for their stability, and for the changes in the processes or mechanisms that govern those systems, usually encoded in the parameters, that end up actually leading to a change in the stability of the equilibria. Those changes could take the system to another equilibrium that is now stable: in physiological terms, to another pathological state, or to another state where we could hopefully apply a few strategies to prevent blindness. So that’s the setting of where I come from, and when you asked me this question, what is my favorite theorem, it was hard, because as applied mathematicians we utilize different theories. And all the theory is useful, and depending on what the question is, the mathematics that is utilized is very different.

So I thought, “What is the theorem that is utilized the most in the case where we’re looking at nonlinear systems and we’re trying to analyze them?” And one of the most powerful theorems out there, one that has almost become addicting because you use it all the time, is the Hartman-Grobman theorem. I say addicting because it’s a very powerful theorem. It allows us to take a nonlinear system and, in certain cases, analyze it and get an accurate depiction of what’s happening around an equilibrium point: what the qualitative behavior of the system is, what the solutions of the system look like, and what their stability is. And because you’re looking at, in most cases, a continuous system, you can map it out and kind of piece it together.

EL: So it’s been a long time since I took any differential equations, I’m a little embarrassed to say, or since I did any differential equations.

KK: Me too.

EL: So can you tell us a little more about the setting of this theorem?

EC: So the Hartman theorem, like I said, is a theorem that allows us to study dynamical systems in continuous time. It’s very powerful because it gives us an accurate portrayal of the flow, the solutions of the nonlinear system, in a neighborhood around a fixed point, the equilibrium, the steady state. So I’m going to be using fixed point, equilibrium, and steady state interchangeably. The cases where it does help are the cases where, at the equilibrium we’re looking at, the eigenvalues of the linearized system actually have nonzero real part. In other words, we’re looking at hyperbolic equilibrium points. That’s when we can actually apply this theorem.

KK: Okay.

EC: Otherwise, we cannot apply this theorem. So what do we do in the cases where we can? The standard technique is: you look at your nonlinear system, you linearize it through a process, and you’re able to shift your equilibrium point to the origin. Now you’re considering the linearized system, and the Jacobian you obtain through the linearization has eigenvalues with nonzero real part. Then you’re able to apply the Hartman theorem, which tells you that there is a homeomorphism taking the flow and the solutions of the nonlinear system, locally, to those of the linear system. And now everything that you would normally be able to analyze in a linear system, you’re able to do locally for the nonlinear system. So that’s where the power comes in. Like I said, the gist of it is that the solutions of the nonlinear system can actually be approximated by a linear system, but only in a neighborhood of the equilibrium point, and only in the case where we have hyperbolic fixed points. But that is very powerful because it allows us to really get a handle on what’s going on locally in a neighborhood of the steady state. For us, we’re looking at, say, how certain diseases progress in the long run. Where are we heading? Where is the patient heading, in terms of blindness? And it really allows us to move in that direction in terms of understanding what is going on. And like I said, it’s powerful not just because it’s telling us about the stability: the qualitative structure of your solutions locally is the same in the linear case and the nonlinear case because of this topological equivalence.
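
[A minimal sketch of that standard recipe, using a generic toy system of our own choosing rather than any of the retina models: find the equilibria, evaluate the Jacobian there, and check that every eigenvalue has nonzero real part before trusting the linearization.]

```python
import sympy as sp

x, y = sp.symbols("x y")
# Toy illustrative system (our choice, not a retina model):
#   x' = x - x*y,   y' = -y + x*y
f = sp.Matrix([x - x*y, -y + x*y])

J = f.jacobian([x, y])                       # Jacobian of the vector field
for eq in sp.solve(list(f), [x, y], dict=True):
    A = J.subs(eq)                           # linearization at this equilibrium
    eigs = list(A.eigenvals().keys())
    hyperbolic = all(sp.re(ev) != 0 for ev in eigs)
    print(eq, eigs, "hyperbolic:", hyperbolic)

# At (0, 0) the eigenvalues are 1 and -1: hyperbolic, so Hartman-Grobman
# applies and the saddle picture of the linearization is faithful locally.
# At (1, 1) they are +i and -i: not hyperbolic, and the theorem is silent.
```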

KK: That’s pretty remarkable. But I guess the neighborhood might be pretty small, right?

EC: Right. The neighborhood is small.

KK: Sure.

EC: In nonlinear systems, you have plenty of different equilibrium points, each with its own neighborhood, right? But again, remember that your solutions in the phase space are changing continuously, so you are able to kind of piece together what is going on, more or less. For sure you know what’s going on in the long-term behavior, you know what’s going on around each neighborhood, and for given initial conditions, which is really key in applications, because sometimes we’re asking what happens for different initial conditions. What are the steady states? What do the solutions look like in the long run? What is going on for different initial conditions?

KK: So if you’re modeling the retina, how many equations are we talking? How big are these systems?

EC: Well, that’s the thing. In the very most simplified case, where you’re able to divide the photoreceptors into the rods and cones, then you have two populations.

KK: Okay.

EC: And in one of the cases we’re looking at the flow of nutrients, so we are also considering the retinal pigment epithelium cells, which is another population, so you have three equations in that case. So that’s a more simplistic situation, but it’s a situation where we have been able to really get a sense of what’s going on in terms of degeneration in these two classes of photoreceptors when they undergo a mutation. So one of the diseases I work on is retinitis pigmentosa. The reason that is a very complicated case that we haven’t been able to really get a handle on, to come up with better therapies and better ways of stopping degeneration of the photoreceptors, and in fact there is no cure for stopping photoreceptors from degenerating, is that the mutation happens in the rods. The rods are the ones that are ill. Yet the cones, which are perfectly healthy, also die. Trying to understand how the rods communicate with the cones in a way that ends up also killing them is an important part. And with a very simplistic model for the undiseased case, we were able to show mathematically, before this link was discovered biologically, that the rods produce a protein, called the rod-derived cone viability factor, that helps the cones survive. Just by analyzing the equilibria, looking at different things in the long run and at the invariant spaces, and using the basic biology of what happens to the rods and the cones, we realized that the communication had to be a one-way interaction from the rods to the cones. So that’s one of the models we have. And then once we had that handled, we were able to introduce the disease and look at a four-dimensional system.
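
[Purely as a hypothetical toy, not the published model, here is the flavor of a one-way rod-to-cone coupling: the rod population decays on its own once diseased, and the cones persist only while rod-supplied support exceeds their maintenance cost. Every name and parameter value below is invented for illustration.]

```python
from scipy.integrate import solve_ivp

def toy_rhs(t, y, delta=0.1, mu=0.1, k=1.0):
    """Toy sketch only: R = rods, C = cones. Diseased rods decay at
    rate delta; cones grow or die according to rod support k*R minus
    a maintenance cost mu, so cone loss lags rod loss."""
    R, C = y
    return [-delta * R, C * (k * R - mu)]

sol = solve_ivp(toy_rhs, (0, 200), [1.0, 1.0])
print(sol.y[:, -1])  # rods near zero; the unsupported cones collapse later
```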

Now we’re looking internally at the metabolic process inside the cones, because there’s a metabolic process there. The rods produce this protein. How is that protein taken up by the cones, and what does it do once it’s inside the cones? For that we really need to look inside the metabolic processes and the kinetics of the cones and also the rods. There, if you’re just considering the cones, you’re looking at 11 or 12 differential equations.

KK: Wow.

EL: Wow.

EC: With many parameters. So at that point we’re going to a much higher dimension. And that’s where we currently are. But that has given us a lot of insight, not just into how the rods help the cones but into how other processes are being influenced and affected. And again, the Hartman-Grobman theorem applies to autonomous systems, where time is not explicit in the equations.

EL: Okay.

KK: This is fascinating.

EL: Math gets this kind of rap for being really hard, but then you think, like, math is so much simpler than this biological system. Your rods being sick make your cones die!

EC: But I think the mathematics is essential. There’s a big cost in taking certain experiments to the lab just to be able to understand what is going on. There’s a cost, there’s a time dependence, and math bypasses that once you have a mathematical model that is able to predict things. That’s why you start with things that are already known. Many times the first set of models that I create are models that show what we already know. They’re not giving any new insight. It’s just to show that the foundation is solid and we can build on it; now we can introduce some new things and ask questions about things we don’t know. Once we are able to do that, it really guides us, or at least indicates, what kinds of lab studies and experiments should be run and what kinds of things should be focused on. And that’s one of the things we do. For example, one of the collaborators I work with is the Vision Institute of Paris, the director there, and the director of genetics as well. And we have this collaboration where I think working together has really helped guide their experiments and their understanding of where they should be looking, just as they helped me really understand what types of systems we need to consider and what things we can neglect, that we don’t have to really focus on. And that’s the thing: mathematics is really powerful to have in a biological system, I think.

EL: Yeah.

EC: And my favorite theorem can be used to gain insight into photoreceptor degeneration in very complicated systems. Another thing that’s really powerful about the Hartman-Grobman theorem is that you don’t have to find a solution to the nonlinear system to get an understanding of what’s going on and to get insight into the qualitative behavior of those solutions. And I think that is really powerful. Do you have any questions?

EL: So, I mean, a lot. But something I always think is interesting about applied mathematicians is that often they end up working in really different application areas. So did you start out looking at retinas and that kind of biological system, or did you start out somewhere else in applied math and gradually move your way over there?

EC: So when I started in applied math, what I really liked was dynamical systems. Yes, the first project I worked on as a graduate student was actually looking at the cornea and how different light intensities affect the developing cornea. And for that I really had to learn about the physiology of the eye and the physiology of the retina. So I did that in graduate school, and then initially, once I was out of graduate school, in my postdoc I was working on how different fanatic groups get formed.

EL: Oh wow, really different application.

EC: Which was in Los Alamos. I was looking at what sources of power allow groups that can become terrorists, for example, to really become strong. What are the competing forces? So it was more a sociological application, but again using dynamical systems to try to understand it. Later on I moved to a more general area of math biology, looking at other systems and diseases, but then I came back to the retina through an undergraduate project in an REU. Usually the way I work with undergraduates is I make them be the ones who ask the question, who select the application. And I tell them, you have to go learn all about it because you have to come and teach me. And then from there I’m going to help you formulate the questions that can be put into a mathematical equation and modeled somehow. And they were very interested, they wanted to do something with a PDE. And they thought, well, something with the retina, and that retinitis pigmentosa would be perfect for modeling with a PDE and analyzing it that way. Then as I learned more about the disease, the interesting thing is that the rods are the ones that are sick, but when the cones begin to die, there is no spatial dependence anymore. They don’t die in a way where you can see a spatial dependence. It’s really more random, and it’s more dependent on the fact that there is this protein that is not being synthesized by the rods anymore. Many times what happens is there is a first wave of death of photoreceptors where the rods die, and when most of them are gone, when 90 percent of them are gone, then the cones begin to die. And yes, there are all these other things, you can think about waves and their velocities, but there is not this spatial dependence. Initially there is, but that’s when only the rods are dying. When we are really interested in asking why the cones die, that’s not the case anymore.

KK: It’s sort of a uniformly distributed death pattern, as it were? What I love about this is, you know, here’s a problem that basically a second-year calculus student can understand in some sense. You have two populations. We teach them this all the time. You have two populations, and they’re interacting in some way. What’s the long-term behavior? But there are still so many sophisticated questions you can ask and complicated systems there. Yeah, I can see why your undergrads were interested in this, because they understood it immediately, at least that it could be applied. And then they brought this to you, and now you’re hopefully going to cure RP, right?

EC: Right, well another thing is that you can understand, and you can use math that is not very high-level to start to get your hands dirty. And for example, now that we’re looking at this multi-level layer where you’re looking at the molecular level and also at the cellular level, then you’re really asking about multi-scale questions and how can we better analyze the system when we have multiple scales, right? And then there are sometimes questions about delay. So the more focused and the more detailed the model becomes, the more difficult the mathematics becomes.

KK: Sure.

EC: And then there are also questions where, without the biology, there’s a lot of interesting mathematics going on that you could analyze. We did a project like that with a collaborator where the parameter space was not really relevant biologically, but the mathematics was very interesting. We had all this different behavior: not just equilibrium points, but periodic solutions, tori, all of this, and what is going on? A lot of it happened in a very small region, and it became more of a mathematical kind of analysis rather than just a biological one.

EL: Yeah, very cool. So another part of this program is that we like to ask our guest to pair their theorem with something, you know, food, beverage, music, art, anything like that. So have you chosen a pairing for the Hartman-Grobman theorem?

EC: I thought about it a lot, because like I said it’s such a powerful theorem, and I go back to the idea that it’s addicting. I think anyone who’s worked on dynamical systems in the nonlinear case in continuous time definitely utilizes this theorem. It comes to a point where we are using it automatically. So I thought, what is something that I consider very addicting, yet looks very simple, right? It’s elegant but simple. But once you have it, it’s addicting. And I could not think of anything else but the Tennessee whiskey cake. Have you ever had it?

KK: No.

EL: No, but it sounds dangerous.

EC: It is delicious. It’s funny, I don’t like whiskey, and I had it when I went to San Antonio to give a talk one time. I was like, well, okay, everyone wanted it, so I decided to go with it. I usually pick chocolate because that is my favorite.

EL: Yeah, that’s my go-to.

EC: I love chocolate. So I said, well, let me try it. It was the most delicious thing. Now I want to be able to bake it, make it. I had a piece, and I want more.

KK: So describe this cake a little bit. Obviously I get that it has bourbon in it.

EC: The way it’s served, it comes warm with vanilla ice cream. It has nuts, and it has this kind of butterscotch or sometimes chocolate sauce over it. And it’s very moist. It has those different layers. I also think, right, in terms of complexity, it has these different layers: in order to get a sense of the power of it, you have to go through all the layers and have all of them in the same bite. And I feel like that with the Hartman theorem, right? The power of it is really to apply it to something that has nonlinearity, that is really complex, something where you know you might not be able to get a handle on the solutions analytically, but you still want to be able to say what is going on, what is the behavior, where are we heading? To somehow be able to infer what the solutions are through a different means, to be able to go around, and it gives you that kind of ability.

KK: And this is where whiskey helps.

EC: Well the whiskey’s the addicting part, right?

EL: So have you made this cake at all, or do you usually order it when you’re out?

EC: Usually I order it when I’m out. But I want to make it. So my mom’s birthday is coming up on August 3rd, and I’m going to try to make it. I was telling my husband, “We’re going to have to make it throughout the next few days because I’m pretty sure we’re going to go through a few trials.”

KK: Absolutely.

EC: I can never get it right.

EL: Even the mistakes will be rewarding, just like math.

KK: And again the whiskey helps.

EC: But that’s an interesting question. I thought, what can I pair it with? And the only thing I could think of was something that’s addicting, something that has multiple layers where all of them have to be taken at once, where you’re able to look at all of them at once.

EL: This fits perfectly.

KK: Sounds great. Well I’ve learned a lot today. I’ve never thought about modeling the eye through populations of rods and cones, but now that you say it, I guess sure, of course. And now I have to look up Tennessee whiskey cake.

EL: Yeah, it’s really good. You should try it.

KK: I’m going to go do that.

EL: It’s almost lunch here, so you’re definitely making me hungry.

KK: Well thanks a lot for joining us.

EL: Thanks a lot for being here.

EC: Well thank you so much for having me here. I really enjoyed it.

Episode 25 - Holly Krieger

EL: Hello and welcome to My Favorite Theorem, a podcast where we ask mathematicians what their favorite theorem is. I’m your cohost Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. This is your other cohost.

KK: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: I’m all right. How about you?

KK: Okay. So one of our former guests, who I won’t name, was giving a big lecture here at the colloquium series this week, so I got to meet that person in person.

EL: Oh, excellent.

KK: So I might even have a better picture for the webpage, for the post to say, hey, our hosts and guests can actually be in the same place at the same time.

EL: Yeah, that would be exciting. And one of these days, maybe you and I will meet in person, which I’m pretty sure we have not yet.

KK: Maybe. I know we haven’t. I keep threatening to come to Salt Lake City, but I don’t think Salt Lake can handle me. I have actually been there once. Wonderful town. It’s a great city.

EL: I like it. So today we are very glad to have Holly Krieger on the show. So Holly, would you like to tell us a little bit about yourself?

Holly Krieger: Sure, I’d be happy to. Thanks for having me, first of all. So I am a lecturer at the University of Cambridge. I’m also a fellow at one of the constituent colleges of Cambridge, Murray Edwards College, and the kind of math I’m most interested in is complex dynamics and number theory. So I do a lot of studying of the Mandelbrot set and the arithmetic properties of these kinds of things and related questions.

EL: And I see you and I have the same poster of the Mandelbrot set. Mine is not actually hanging up yet. You have been better at getting the full experience by hanging it up, but I see that poster behind you.

HK: That’s right, the Mandelmap. It’s amazing, this poster. I just found it on Kickstarter, and then I sent it to a bunch of mathematician friends, so occasionally I will go to visit someone mathematically, and they have the same poster in their office. It’s very satisfying.

EL: Well, we have invited you here to ask you what your favorite theorem is. So what’s your favorite theorem?

HK: So here’s the thing: I shouldn’t be on this podcast because I don’t have a favorite theorem.

KK: No, no, no.

HK: I don’t have a favorite theorem, it’s true. Somehow I’m too much of a commitment-phobe, like I have a new favorite theorem every week or something like that. I can tell you this week’s favorite theorem.

EL: That’s good enough.

KK: That’s fine. Ours have probably changed too. Evelyn and I in Episode 0 stated our favorite theorems, and I’m pretty sure Evelyn might have changed her mind by now.

EL: Yeah, well, one of our other guests, Jeanne Clelland, made a pretty good case for the Gauss-Bonnet theorem.

KK: She really did.

EL: I think my allegiance has shifted.

HK: Maybe you can do a podcast retrospective, every 20 episodes or something, what are the hosts’ favorite theorems today?

KK: That’s a good idea, actually. Good.

HK: So, my favorite theorem for this week. I love this theorem because it is both mathematically sort of really heavy-hitting and also because it has this sort of delicious anti-establishment backstory to it. My favorite theorem this week is Brouwer’s fixed-point theorem.

KK: Nice.

HK: Maybe I should talk about it mathematically first, maybe the statement?

EL: Yeah.

HK: Okay. So I think the easiest way to state this is the way Brouwer would have thought about it, which is if you take a closed ball in Euclidean space, so you can think about an interval in the real line, that’s a closed ball in one-dimensional Euclidean space, or you can think about a disc in two-dimensional space, or what we normally think of as a ball in three-dimensional space, and in higher dimensions you don’t think about it because our brains don’t work that way. So if you take a closed ball in Euclidean space, and you take a continuous function from that closed ball to itself, that continuous function has to have a fixed point. In other words, a point that’s taken to itself by the function.

So that’s the statement of the theorem. Even just avoiding the word continuous, you can still state this theorem, which is that if you take a closed ball and morph it around and stretch it out and do crazy things to it, as long as you’re not tearing it apart, you’ll have a fixed point of your function.

KK: Or if you stir a cup of coffee, right?

HK: That’s right, so there’s this anecdote about what Brouwer was thinking about—I have no idea if this is accurate.

KK: Apocryphal stories are the best.

HK: Reading about him biographically, I almost feel like coffee would be too exciting for Brouwer. So I’m not actually sure about the accuracy of this story. So the story goes that he was stirring his coffee, and he noticed that there seemed to be a point at every point in time, a point where the coffee wasn’t moving despite the fact that he was stirring this thing. So that actually leads to one of the reasons I like this in terms of real-world applications. It’s a good—well, depending on who you hang out with, it’s a good—cocktail party theorem because if you’re making yourself a cocktail and you throw all the ingredients into your shaker and you start stirring them up, well, when you’re done stirring it, as long as you haven’t done anything crazy like disconnected the liquid inside of the shaker, then you’ve got to have some point in the liquid that’s returned to its original spot. And I think that’s a fun version of the coffee anecdote.

EL: But the cocktail would definitely be too exciting for Brouwer.

HK: I would be really surprised. He was a vegetarian, not that you can’t be a fun vegetarian. He was a vegetarian, and he was sort of a health nut in general, and that was back in a time, he proved this theorem in the early 1900s, when I don’t think that behavior was quite so common.

KK: It was more, like, on a commune. You’d go to some weird, well I shouldn’t say weird, you’d go to some rural place and hang out with other like-minded people.

HK: That’s right.

KK: And live this healthful lifestyle. You would eschew meat and sugar and all that stuff.

HK: Right, exactly. So the other way I like to describe this in terms of the real world, and I think this is a common way Brouwer himself actually described this, is that if you take a map, so take a map of somewhere that’s rectangularly shaped. You can either think the map itself is a rectangle, so whatever it pictures is a rectangle, or you can think of Colorado or something like that. If you take a map, and you’re in the place that’s indicated by the map, then there’s somewhere on the map that is precisely in the same point on the map as it is in the place. Namely, where you are. But you can get more specific than that. So those are two sort of nice ways to visualize this theorem.

One of the reasons I like it is that it basically touches every subfield of mathematics. It has implications for differential equations and almost any sort of applied mathematics that you might be interested in, things like the existence of equilibrium states, all the way over to its generalizations, which touch on number theory and dynamical systems through the Lefschetz fixed-point theorem and trace formulas and that kind of thing. So mathematically speaking, it’s sort of the precursor to the entire study of fixed-point theorems, which is maybe an underappreciated spine running through all of mathematics.

KK: Since you’re interested in dynamics, I can see why you might really be interested in this theorem.

HK: Yeah, that’s right. It comes up particularly in almost any kind of study of dynamical systems, where you’re interested in iteration, this comes up.

EL: I like to ask our guests if this was a love at first sight theorem or if it’s grown on you over time.

HK: That’s a good question. It’s definitely grown. I think when you first meet this thing, I mean let’s think about it a little bit. In one dimension, how do you think about this theorem? You think, well, I’ve got a map from, say, the unit interval to itself, right, which is a continuous map. I can draw its graph. And this is the statement essentially that that graph has to intersect the line y=x between 0 and 1.

KK: So it’s a consequence of the Intermediate Value Theorem.
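
[For anyone sketching along, the intermediate-value step can be made explicit: for a continuous f from [0,1] to itself, set g(x) = f(x) − x.]

```latex
g(0) = f(0) - 0 \ \ge\ 0, \qquad g(1) = f(1) - 1 \ \le\ 0,
```

so by the Intermediate Value Theorem there is some c in [0,1] with g(c) = 0, that is, f(c) = c: the graph of f crosses the line y = x.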

HK: That’s right. This is one of those deals where we always tell the calc students, “Tilt your head,” and they always look at us like we’re crazy, but then they all do it and it works. I find this appealing because it’s an intersection-theoretic way to think about it, which is the direction of the generalizations that I’m interested in. But you don’t realize the scope of this perspective, viewing this as intersection, and how it leads you into algebraic geometry versions of this theorem. You don’t realize that at first. Same with the applications to Banach spaces and to equilibrium states: you don’t realize those at first. So understanding the breadth of this theorem is not something that happens right away. The other thing, and really why I like this theorem, is the backstory. Can I tell you about the backstory?

KK: Absolutely.

HK: So Brouwer, you can already tell I kind of don’t like him, right? So Brouwer was a Dutch mathematician, and he was essentially the founder of a school of mathematical philosophy known as intuitionism. What these people think, or perhaps thought—I don’t know who among us is one of them at this point—what these people think is that essentially mathematics is a result of the creator of mathematics, that there is no mathematics independent of the person who is creating the mathematics. So weird consequences of this are things like not believing in the law of the excluded middle. So they think a thing is only true if you can prove it and only not true if you can provide a counterexample. So something that is an open problem, for example, they consider to be a counterexample, or whatever you want to say, to the law of the excluded middle. So it’s in some sense a time-dependent mathematical philosophy. It’s not that everything is either true or not in the system, but true or not or not yet.

EL: That’s interesting. I don’t know very much about this part of math history. I’ve sort of heard of the fact that you don’t have to necessarily accept the law of the excluded middle, but I hadn’t heard people talk about this time-dependent aspect of it. I guess this is before we get into Cantor and Gödel, or more Gödel and Cohen’s, incompleteness theorems, which kind of seem like that would be a whole other wrench into things.

HK: That’s right. So this does predate Gödel, but it’s after Cantor. This was basically a knee-jerk reaction to Cantor. So the reason why I’m sort of anti-this philosophy is that I view Cantor as a true revolutionary in mathematics.

KK: Absolutely.

HK: Maybe I’ll have a chance to say a little bit about the connection between the Brouwer fixed-point theorem and some of what he did, but Cantor sat back, or took a step back, and said, “Here’s what the size of a set is, and I’m going to convince you that the real line and the real plane, this two-dimensional space, have the same size.” And everyone was so deeply unhappy with this that they founded schools of thought like intuitionism, essentially, which sort of forced you to exclude an argument like Cantor’s from being logically valid. And so anyone who was opposed to Cantor, I have a knee-jerk reaction to, and the reason I find this theorem so delicious, sort of appealing, is because it’s not constructivist. Brouwer’s fixed-point theorem doesn’t hand you the fixed point, which is what Brouwer says you should have to do if you’re actually proving something. He really believed, I mean, he worked on it from his thesis to his death, essentially, while he was active, he really believed in this philosophy of mathematics: that you cannot say “there exists a thing, but I can’t ever tell you what it is.” He thought you really had to hand over the mathematical object in order to convince somebody. And yet one of his most famous results fails to do exactly that. And the reason why is that his thesis advisor was like, “Hey, no one is going to listen to you unless you do some actual mathematics.” So he put aside the philosophy for a few years, proved some nice theorems in topology, in sort of the formalist approach, and went back to mathematical philosophy.

KK: I did not know any of this. This whole time-dependent mathematics, now I can’t stop thinking about Slaughterhouse-5, right, you’ve read Slaughterhouse-5? The Tralfamadorians would tell us, you know, that it’s already all there. It’s encased in amber. They can see it all, so they know what theorems we’re going to discover later.

HK: That’s right.

KK: So what’s your favorite proof of this theorem?

HK: So I think my favorite proof of this theorem is probably not Brouwer’s. It’s probably an algebraic topology proof, essentially.

KK: I thought you’d go with the iteration proof, but okay.

HK: No, I don’t think so, because what it’s really about to me is a topological statement about the nonexistence of retractions. So let’s just talk about the disc, let’s do the two-dimensional version. First of all, it’s a proof by contradiction, which already Brouwer is not on board with, but let’s do it anyways. If you had a function which was a continuous map of the closed unit disc to itself which had no fixed point, then you could define a new function which maps the closed disc to its boundary, the circle, in the following way. If you have a point inside the disc, you look at where its image is. It’s somewhere else, right, because there are no fixed points. So you can draw the ray from its image through that point in the plane. That ray will hit the unit circle exactly once. That’s the value you assign to the point in this new function. This will give you a new map, which maps the closed unit disc to its boundary, so this map is a retraction, which means it acts as the identity on the unit circle, and it maps the entire disc continuously onto the boundary circle. And such a thing can never exist.
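Here is a short numerical sketch of that ray construction (the sample map f below is a made-up illustration; by the theorem itself, any continuous self-map of the disc does have a fixed point somewhere, so the formula can only be evaluated at points where f(x) ≠ x):

```python
import numpy as np

def retraction(f, x):
    # The ray from f(x) through x consists of points x + t*d, d = x - f(x), t >= 0.
    # Solve ||x + t*d||^2 = 1: since ||x|| <= 1, the quadratic in t has exactly
    # one nonnegative root, which gives the exit point on the unit circle.
    x = np.asarray(x, dtype=float)
    d = x - f(x)
    a, b, c = d @ d, 2.0 * (x @ d), x @ x - 1.0
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return x + t * d

# Illustrative self-map of the disc (rotate, shrink, shift). It has a fixed
# point somewhere, so we evaluate the retraction away from it.
f = lambda x: 0.5 * np.array([-x[1], x[0]]) + np.array([0.3, 0.0])
r = retraction(f, np.array([0.2, -0.4]))
print(r, np.linalg.norm(r))  # the image lands on the unit circle (norm 1)
```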

KK: You’ve torn a hole in the disc.

HK: You’ve torn a hole in the disc. It’s really believable, I mean, rather than a rigorous proof, think about the interval. Take every point in the interval and assign it a value of either 0 or 1. You obviously have to tear it to do it. It’s totally clear in your head. With the disc, maybe it’s not quite so obvious. Usually the cleanest proof of the non-existence of a retraction like this goes through algebraic topology and understanding what the fundamental groups of these two objects are.

KK: That’s the proof I was thinking of, being a topologist.

HK: You thought maybe I’d be dynamical about it?

KK: Well, you could just pick a point and iterate, and since it’s a complete metric space, it converges to some point, and that thing has to be fixed. But that’s also not constructive, right?

HK: It’s also not constructive. But there are approximate construction versions.

KK: Right.

HK: One more thing I like about this theorem, in terms of its implications, is that it’s one more tool Brouwer used in proving the topological invariance of dimension, that dimension is a well-defined notion under homeomorphisms. In particular, you don’t have a homeomorphism (just stretching, continuous in both directions) from, like, R^n to R, from n-dimensional Euclidean space to the real line. This doesn’t sound earth-shattering to us now. I think we kind of take it for granted. But at the time, this was not so long after Cantor was like, “Oh, but there is actually an injection, right, from n-dimensional Euclidean space to the real line.” So it’s not that it was surprising, but it was sort of reassuring, I think, that if you impose continuity this kind of terrible behavior can’t happen.

KK: Right. In other words, you need additional structure to get your sense that the plane is bigger than the line.

HK: That’s right. Although even taking into account continuity, topology is weird sometimes. There are space-filling curves; in other words, there are surjective maps from the real line (well, let’s just stick to intervals), from the unit interval to any-dimensional box that you want. And that’s really counterintuitive to most people. So it’s not so obvious: you might think that the reverse, an injection of a large space into a small space, would be problematic. But thanks to Brouwer’s fixed-point theorem, it’s not.

KK: So what pairs well with Brouwer’s fixed-point theorem?

HK: Well, okay, it has to be a cocktail, right, because I chose the cocktail example and because cocktails are fun. And they’re anti-Brouwer, presumably, as we discussed. So for the overlap of the cocktail description and the map description that I gave of Brouwer’s fixed-point theorem, I’m going to go with a Manhattan.

EL: Okay.

KK: Is that your favorite cocktail?

HK: It’s one of my favorites. Also Manhattan is almost convex.

KK: Almost.

HK: Almost convex.

KK: So you’re a whiskey drinker?

HK: I am a whiskey drinker.

KK: All right. I don’t drink too much brown liquor because if I drink too much of it I’ll start fights.

HK: Fortunately being sort of small as a human has prevented me from starting too many fights. I just don’t think I would win.

EL: So in my household I am married to a dynamicist, so I’m a dynamicist-in-law, but I’m more of a geometer, and we have this joke that there are certain chores that I’m better at, like loading the dishwasher because I’m good at geometry and what shapes look like. My spouse is good at dynamics, and he is indeed our mixologist. So do you feel like your dynamical systems background gives you a key insight into making cocktails? It certainly seems to work with him.

HK: Definitely for the first cocktail. Subsequent cocktails, I don’t know.

KK: Well I’m going to happy hour tonight. Maybe I’ll get a Manhattan.

HK: Maybe you should talk about Brouwer’s fixed-point theorem.

KK: With my wife? Not so much.

HK: Doesn’t go over that well?

KK: Well, she would listen and understand, but she’s an artist. Cocktails and math, I don’t know, not so much for her.

HK: I don’t know, that just makes me think of, okay, wow, I’m really going to nerd it out. Do you guys ever watch Battlestar Galactica?

EL: I haven’t. It’s on my list.

KK: When I was a kid I watched the original.

HK: The new one. All right. This is for listeners who are BSG nerds. So there’s this drawing of this vortexy universe, this painting of the vortexy universe that features in the later, crappier seasons. Now that makes me think it’s kind of an illustration of Brouwer’s fixed-point theorem. So maybe you should tell your wife to try and paint Brouwer’s fixed-point theorem for you.

KK: Okay.

HK: Marital advice from me. Don’t take it.

KK: We’ve been married for almost 26 years. I think we’re okay. We’re hanging in all right. So we always like to give our guests a chance to plug anything they’ve been working on. You’ve been in a bunch of Numberphile videos, right?

HK: Yeah, that’s right, and there will be more in the future, so if anyone hasn’t checked out Numberphile, it’s this amazing YouTube channel where maths is essentially explained to the public. Mathematicians come, and they talk about some interesting piece of mathematics in what is really meant to be an accessible way. I’ve been a guest on there a couple of times, and it’s definitely worth checking out.

EL: Yeah, they’re great. Holly’s videos are great on there. I like Numberphile in general, but I have personally used your videos about the Mandelbrot set, the dynamics of it and stuff, when I’ve written about it, and some other related dynamical systems. They’ve helped me figure out some of the finer points that as not-a-dynamicist maybe don’t come completely naturally to me.

HK: Oh, that’s awesome.

EL: I’ve included them in a few of the posts I’ve done, like my post about the Mandelbrot set.

HK: That’s amazing. That’s good because I’ve used your blog a few times when I’ve tried to figure out things that people might be interested to know about mathematics and things that are accessible to write and talk about to people. So it goes both directions.

EL: Cool.

KK: It’s a mutual lovefest here.

EL: People can also find you on Twitter. I don’t remember actually what your handle is.

HK: It’s just my name, @hollykrieger.

EL: Thanks a lot for being on the show. It was a pleasure.

HK: Thanks so much for having me. It was great to talk to you guys.

KK: Thanks, Holly.

Episode 24 - Vidit Nanda

Kevin Knudson: Welcome to My Favorite Theorem! I’m your host Kevin Knudson, professor of mathematics at the University of Florida. And this is your other host.

Evelyn Lamb: Hi. I’m Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City. So how’s it going, Kevin?

KK: It’s okay. Classes are almost over. I’ve got grades for 600 students I still need to upload. But, you know, it’s an Excel nightmare, but once we get done with that, it’s okay. Then my son comes home for Christmas on Saturday.

EL: Oh great. I don’t miss grading. I miss some things about teaching, but I don’t miss grading.

KK: No.

EL: I don’t envy this time of the semester.

KK: Certainly not for a 600-student calculus class. But you know, I had a good time. It’s still fun. Anyway, today we are pleased to welcome Vidit Nanda. Vidit, why don’t you introduce yourself and tell everyone about you?

Vidit Nanda: Hello. My name is Vidit Nanda. I’m a research fellow at the University of Oxford and the amazing new Alan Turing Institute in London. This year I’m a member at the School of Mathematics at the Institute for Advanced Study in Princeton. I’m very happy to be here. Thank you both for doing this. This is a wonderful project, and I’m very happy to be a part of it today.

KK: Yeah, we’re having a good time.

EL: Can you tell us a little more about the Alan Turing Institute? I think I’ve heard a little bit about it, but I guess I didn’t even know it was that new. I thought I had just never heard of it before.

VN: Right. So about three years ago, and maybe longer because it takes time to set these things up, the UK decided they needed a national data science center, and what they did was they collected proposals from universities, and the original five universities that got together and contributed funds and professors and students to the Turing Institute were Oxford, Cambridge, Warwick, UCL, and Edinburgh. Now we have a space on what they call the first floor of the British Library, and what we would call the second floor of the British Library. Half of that floor is called the Alan Turing Institute, and it’s kind of crazy. You enter the British Library, and there’s this stack of books that kind of looks like wallpaper. It’s too beautiful, you know, but it is real. It’s behind glass. And then you turn to the right, and it’s Las Vegas, you know. There’s a startup-looking data science center with people dressed exactly the way you think they are, with the hoodies, you know. It’s sort of nuts. But there are two things I should tell everyone who’s listening about the Alan Turing Institute. The first one is that if you walk down a flight of steps, there’s a room called Treasures of the British Library. Turn left, and the first thing you see is a table with Da Vinci’s sketches right next to Michelangelo’s letters and the first printing of Shakespeare. Those are the first things you see. So if you’re ever thinking about cutting a corner in a paper you’re writing, you go down to that room, you feel bad about yourself for ten minutes, and you rush back up the stairs, inspired and ready to work hard.

KK: Yeah. This sounds very cool.

EL: Wow, that’s amazing.

VN: That’s the first table. There’s other stuff there.

KK: Yeah, I’m still waiting on my invitation to visit you, by the way.

VN: It’s coming. It would help if I’m there.

KK: Sure, once you’re back. So, Vidit, what’s your favorite theorem?

VN: Well, this will not be a surprise to the two of you since you cheated and you made me tell you this in advance. And this took some time. My favorite theorem is Banach’s fixed point theorem, also called the contraction mapping principle. And the reason it’s my favorite theorem is it’s about functions that take a space to itself, so for example, a polynomial in a single variable takes real numbers to real numbers. You can have functions in two dimensions taking values in two dimensions, and so on. And it gives you a criterion for when this function has a fixed point, which is a point that’s sent to itself by the function.

One of the reasons it’s my favorite theorem—well, there are several—but it’s the first theorem I ever discovered. For the kids in the audience, if there are any, we used to have calculators. I promise. They looked like your iPhone, but they were much stupider. And one of the most fun things you could do with them was mash the square root button like you were in a video game. This is what we had for entertainment.

KK: I used to do this too.

VN: Take a large number, and you mash the square root button, and you get 1. And it worked every time.

KK: Right.

VN: And this is Banach’s fixed-point theorem. That’s my proof of Banach’s fixed-point theorem.

KK: That’s great. What’s the actual statement, though? Let’s be less loose.

VN: Right. The actual statement requires a little bit more work than having an old, beat-up calculator. The setup is kind of simple. You have a complete metric space, and by metric space you mean a space where points have a well-defined distance subject to natural axioms for what a distance is, and complete means if you have a sequence of points that are getting close to each other, they actually have a limit. They stop somewhere. Suppose you have a function from such a complete metric space to itself so that when you apply the function to a pair of points, the images are strictly closer together than the original points were: the distance between f(x) and f(y) should be at most some constant less than 1 times the distance between x and y. If this is true, then the function has a unique fixed point, and the amazing part about this theorem, which I cannot stress highly enough, is that the way to find this fixed point is you start anywhere you want, pick any initial point and keep hitting f, this is mashing the square root button, and very quickly you converge to the actual fixed point. And when you hit the square root button there, nothing changes, you just stay at 1.
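A minimal Python sketch of that recipe (function names and tolerances are ours): on the interval [1, ∞) the square root satisfies |√x − √y| ≤ |x − y|/2, so it is a contraction there, and iterating it from any starting value converges to the unique fixed point, 1.

```python
import math

def banach_iterate(f, x0, tol=1e-12, max_iter=10_000):
    # Iterate x -> f(x). For a contraction on a complete metric space,
    # Banach's theorem says this converges to the unique fixed point
    # no matter where you start.
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Mashing the square root button on a calculator:
print(banach_iterate(math.sqrt, 1e6))  # -> 1.0
```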

KK: And it’s a unique fixed point?

VN: It’s a unique fixed point because wherever else you start, you reach that same place. So I’m an algebraic topologist by trade, and this is very much not an algebraic topology fixed-point theorem. The algebraic topology fixed-point theorem makes no assumptions on the function, like it should be bringing points closer together. It makes assumptions on the space where the function is taking its values. It says if the space is nice, maybe convex, maybe contractible, then there is a fixed point, no uniqueness and no recipe for converging to the fixed point.

KK: In fact, we recently had a guest who chose the Brouwer fixed-point theorem.

EL: Yeah.

VN: Yes, the Brouwer fixed-point theorem is one of my favorites, it’s one of the tools I use in my work a lot, but I always have this sort of analyst envy where their fixed-point theorem comes with a recipe for finding the actual fixed point.

KK: Right.

VN: Instead of an existence result.

KK: Yeah, we just wave our hands and say, “Yeah, yeah, yeah, if you didn’t have a fixed point there’d be some map on homology that couldn’t exist and blah blah blah.”

VN: Right. And that’s sort of neat but sort of unsatisfying if what you actually care about are the fixed points.

EL: Yeah, so in some ways I kind of ended up more of an analyst because of this. I was really attracted to algebra and that kind of thing, and I felt like at some point I just couldn’t do anything. I felt like in analysis, at least I could get a bound on something, even if it was a really ugly bound, I could at least come in with my hands and play around in the dirt and eventually come up with something. This is probably showing that somehow my brain is more likely to succeed at analysis or something because I know there are people who get to algebra and they can do things, but I just felt like at some point it was this beautiful but untouchable thing, and analysis wasn’t so pretty, and I didn’t mind going and mucking it up.

KK: I had the opposite point of view. I never liked analysis. All those epsilons and deltas, and maybe it was a function of that first advanced calculus course, where you have to get at the end the thing you’re looking for is less than epsilon, not 14epsilon+3epsilon^2. It had to be less than epsilon. I was like, man, come on, this thing is small! Who cares? So I liked the squishiness of topology. I think that’s why I went there.

VN: I think with those epsilon arguments, I don’t know about you guys, but I always ended up doing it twice. You do it the first time and get some hideous function of epsilon, and then you feed back whatever you got to the beginning of the argument, dividing by whatever is necessary, and then, when you submit your solution, it looks like you were a genius the whole time who knew to choose this very awkward thing initially, because you changed the argument.

KK: That’s mathematics, right? When you read a paper, it’s lovely. You don’t see all the ugly, horrifying reams of paper you used for the calculations to get it right, you know. I think that’s part of our problem as mathematicians from a PR point of view. We make it look so slick at the end, and people think, wait a minute, how did you do that? Like it’s magic.

VN: We’re very much writing for people next door in our buildings as opposed to people on the street. It helps sometimes, and it also bites us.

KK: This is where Evelyn’s so great, because she is writing for people on the street, and doing it very well.

EL: Well thank you. I didn’t intend this to come back around here, but I’ll take it. Anyway, getting back to our guest, so when did you first encounter this theorem, and was it something you were immediately really into, or did it take some more time?

VN: Actually, the first time I encountered this theorem in a semiformal setting, it just blazed by. I think this is where most people see it for the first time: in a differential equations course. One of the things that’s so neat about this theorem is that it’s what guarantees that when you write down y'(x) equals some hideous expression of x and y, this has a solution: why should it have a solution, how long should it have a solution for, when is the solution unique? And this requires the hideous thing on the right side to satisfy the contraction mapping property. The existence and uniqueness of solutions of ordinary differential equations is the slickest, most famous application of the Banach fixed-point theorem.
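For the curious, here is a rough numerical sketch of that application via Picard iteration (the grid size, iteration count, and test equation are arbitrary choices of ours). The initial value problem y'(x) = f(x, y), y(x0) = y0 is rewritten as the fixed-point equation y = y0 + ∫ f(t, y(t)) dt, and on a short enough interval that integral operator is a contraction, so iterating it converges to the solution:

```python
import numpy as np

def picard(f, x0, y0, x_end, n_grid=200, n_iter=30):
    # Picard iteration: y_{k+1}(x) = y0 + integral from x0 to x of f(t, y_k(t)) dt.
    # On a short enough interval the integral operator is a contraction, and
    # Banach's theorem gives convergence to the unique solution.
    # f must accept and return NumPy arrays.
    xs = np.linspace(x0, x_end, n_grid)
    y = np.full_like(xs, y0, dtype=float)
    for _ in range(n_iter):
        g = f(xs, y)
        # cumulative trapezoidal rule approximates the integral operator
        y = y0 + np.concatenate(
            ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(xs))))
    return xs, y

# y' = y with y(0) = 1 has exact solution e^x:
xs, y = picard(lambda x, y: y, 0.0, 1.0, 1.0)
print(y[-1], np.e)  # both ~2.718281828
```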

KK: I’d never thought about it.

VN: And the analyst nods while Kevin stares off into space, wondering why this should be the case.

KK: No, no, you had a better differential equations course than I did. In our first diffeq’s course, we wouldn’t bring this up. This is too high-powered, right?

VN: It was sort of mentioned, this was at Georgia Tech. It was mentioned that this property holds; there was no proof, even though the proof is not difficult. It’s not so bad if you understand Cauchy sequences, which not everyone in differential equations does. So we were not shown the proof, but there’s a contraction mapping principle. And then Wikipedia was in its infancy, so now I’m dating myself badly, but I did look it up then and then forgot about it. And then of course I saw it in graduate school all over the place.

KK: Hey, when I was in college, the internet didn’t exist.

VN: How did you get anything done?

KK: You went to the library.

EL: Did you use a card catalog?

KK: I’m a master of the card catalog.

EL: We had one at my elementary school library.

KK: Geez. So growing up in high school, we used to go to the main public library downtown where they had bound periodicals and so if you needed to do your report about, say, the assassination of John Kennedy, for example, you had to go and pull the old Newsweeks off the shelf from 1963. I don’t know, there’s something to that. There’s something to having to actually dig instead of just having it on your phone. But I don’t want to sound like an old curmudgeon either. The internet is great. Well, although wait a minute, the net neutrality vote is happening right now.

VN: It’s great while we speak. We don’t know what’s going to happen in 20 minutes.

KK: Maybe in the middle of this conversation we’re going to get throttled. So Vidit, part of the fun here is that we ask our guest to pair their theorem with something. So what have you chosen to pair the contraction theorem with?

VN: I’m certainly not going to suggest Plato like one of the recent guests. I have something very simple in mind. The reason I have something simple in mind is there’s an inevitability to this theorem, right? You will find the fixed point. So I wanted something inevitable and irresistible in some sense, so I want to pair it with pizza.

EL: Pizza is the best food. Hands down.

VN: Right. It is the best food, hands down. I’m imagining the sort of heathens’ way of eating pizza, right, you eat the edges and move in. I’ve seen people do this, and it’s sort of very disturbing to me. The edge is how you hold the damn thing in the first place. But if you imagine a pizza being eaten from the outside, that’s how I think of the contraction mapping, converging to the middle, the most delicious part of the pizza. I refuse to tell you what fraction of the last two weeks it took me to come up with this pairing. It’s disturbingly difficult.

KK: So you argue that the middle of the pizza is the most delicious part?

EL: Oh yeah.

KK: See, my dog would argue with you. She is obsessed with the crust. If we ever get a pizza, she’s just sitting there: “Wait, can I have the crust?”

EL: But the reason she gets the crust is because humans don’t find it the most delicious.

VN: If I want to eat bread, I’ll eat bread.

KK: I make my own pizza dough, so I make really good pizza crust. It’s worth eating. It’s not this vehicle. But you’re right. Yeah, sure.

EL: We’re going to press you now. What pizza toppings are we talking here? We really need specifics. It’s 9 am where I am, so I can’t have pizza now unless I made my own.

KK: You could. You can have it any time of the day.

EL: But I don’t think there’s a store open. I guess I could get a frozen pizza at the grocery store.

VN: Kevin would suggest having a quick-rise dough set up that, if you pour your yeast in it, it’ll be done in 20 minutes. I think, I’m not big into toppings, but it’s important to have good toppings. Maybe bufala mozzarella and a bit of basil, keep it simple. There’s going to be tomatoes in it, of course, some pizza sauce. But I don’t want to overload it with olives and peppers and sausage and all that.

EL: Okay. So you’re going simple. That’s what we do. We make our own pizza a lot, and a couple years ago we decided to just for fun buy the fancy canned tomatoes from Italy, the San Marzanos.

VN: The San Marzanos, yeah.

EL: Buy the good mozzarella. And since then, that’s all we do. We used to put a bunch of toppings on it all the time, and now it’s just, we don’t even make a sauce, we just squish the tomatoes onto the pizza. Then put the cheese on it, and then the basil, and it’s so good.

KK: I like to make, I assume you’ve both been to the Cheese Board in Berkeley?

EL: No, I haven’t. I hear about it all the time.

KK: It’s on Shattuck Ave in Berkeley, and they have the bakery. It’s a co-op. The bakery is scones—delicious scones, amazing scones—and bread and coffee and all that. And right next door is a pizza place, and they make one kind of pizza for the day, and that’s what you’re going to have. You’re going to have it because it’s delicious. Even the ones where you’re like “eh,” it’s amazing. The line goes down the block, and everybody’s in a good mood, there’s a jazz trio. Anyway, I got the cookbook, and that’s how I make my crust. There’s a sourdough crust, and then our favorite one is the zucchini-corn pizza.

EL: Really.

KK: It’s zucchinis, onions, and cheese, and then corn, and a little feta on top. And then you sprinkle some cilantro and a squeeze of lime juice.

VN: God, I’m so hungry right now.

KK: This is amazing. Yeah, it’s almost lunchtime. My wife and I are going to meet for lunch after this, so can we wrap this up?

EL: Hopefully you’re going to have pizza.

KK: We’re going to a new breakfast place, actually. I’ve got huevos rancheros on my mind.

VN: Excellent.

EL: That’s good too.

KK: Well this has been great fun, Vidit. Thanks for joining us.

VN: Thanks so much again for having me and for doing this. I’m looking forward to seeing who else you’ve managed to rope in to describe their favorite theorems.

KK: There are some good ones.

EL: We’re enjoying it.

KK: We’re having a good time.

VN: Wonderful. Thank you so much, and have fun.

EL: Nice talking to you.

KK: See you. Bye.

Episode 23 - Ingrid Daubechies

Evelyn Lamb: Hello and welcome to My Favorite Theorem. This is a podcast about math where we invite a mathematician in each episode to tell us about their favorite theorem. I’m one of your hosts, Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. I’m excited about this one.

EL: Yes, I’m very excited. I’m too excited to do any banter. We’re coming up on our one-year anniversary, and we are very honored today to have a special guest. She is a professor at Duke. She has gotten a MacArthur Fellowship, won many prizes. I was just reading her Wikipedia page, and there are too many to list. So we are very happy to have Ingrid Daubechies on the show. Hi, Ingrid. Can you tell us a little bit about yourself?

Ingrid Daubechies: Hi, Evelyn and Kevin. Sure. I have just come back from spending several months in Belgium, in Brussels. I had arranged to have a sabbatical there to be close to and help set up arrangements for my very elderly parents. But I also was involved in a number of fun things, like the annual contest for high school students, to encourage them to major in mathematics once they get to college. And this is the year I turn 64, and because 64 is a much more special number than 60 for mathematicians, my students had arranged to organize some festivities, which we held in a conference center in my native village.

KK: That’s fantastic.

ID: A lot of fun. We had family and friends; we opted to have a family-and-friends activity instead of a conference where we tried to get the biggest possible collection of marquee names. I enjoyed it hugely. We had a big party in Belgium where I invited via Facebook everybody who ever crossed my timeline. There were people I went to high school with, there was a professor who taught me linear algebra.

KK: Oh, wow.

ID: So it was really a lot of fun.

KK: That’s fantastic.

EL: Yeah, and you have also been president of the International Mathematical Union. I meant to say that at the beginning and forgot. So that is also very exciting. I think while you were president, you probably don’t remember this, but I think we met at a conference, and I was trying to talk to you about something and was very anxious because my grandfather had just gone to the hospital, and I really couldn’t think about anything else. I remember how kind you were to me during that, and just, I think you were talking about your parents as well. And I was just thinking, wow, I’m talking to the president of the International Mathematical Union, and all I can think about is my grandpa, and she is being so nice to me.

ID: Well, of course. This is so important. We are people. We are connected to other people around us, and that is a big part of our life, even if we are mathematicians.

EL: But we have you on the show today to talk about theorems, so what is your favorite theorem?

ID: Well, I of course can’t say that I have one particular favorite theorem. There are so many beautiful theorems. Right now my favorite is one I learned very recently, and I am ashamed to confess how recently, because it’s a theorem that many people learn in kindergarten, so to speak. It’s a theorem called Tutte’s embedding theorem, about graphs, meshes; in my case it’s a triangular mesh. It says that you can embed it, meaning you can define a map to a polygon in the plane without having any of the edges cross, so really an embedding of the whole graph. Every triangle of the complicated mesh that you have (it’s a disk-type mesh, meaning it has no holes, it has a boundary, lots of triangles, but you can think of it as a complicated thing), you can embed it under certain conditions in a convex polygon in the plane, and I really, really, really love that. I visualize it by thinking of the mesh as a complicated shape covered in Saran wrap: you apply a hair dryer, and the hair dryer will flatten it nicely, will try to flatten it. I think the fact that you can always do it is great. And we’re using it for something interesting. Actually, we are extending it: the theorem is originally formulated for a convex polygon in the plane, you can always map to a convex polygon in the plane, and we are extending it to the case where you have a non-convex polygon, because that’s what we need, and then we have certain conditions.

KK: Sure. Well, there have to be some conditions, right, because certainly not every graph, every mesh you would draw is planar.

ID: Yeah.

KK: What are those conditions?

ID: It has to be planar and 3-connected, and you define a set of weights on the edges that are all positive. What happens is that once you have it in the polygon, you can write each one of the vertices as a convex combination of its neighbors.

KK: Yeah.

ID: And those define your weights. You have to have a set of weights on the edges on your original graph that will make that possible.

KK: Okay.

ID: So you define weights on the original graph that help you in the embedding. What happens is that the positive weights are then used for that convexity. So you have these positive weights, and you use them to make this embedding, and so it’s a theorem that doesn’t only tell you that it is planar, but gives you a mechanism for building that map to the plane. That’s really the power of the theorem. So you start already with something that you know is planar and you build that map.
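Since the theorem really does come with a recipe, here is a small Python sketch of the mechanism, with uniform weights and a toy mesh of our own choosing: pin the boundary cycle to a convex polygon, require every interior vertex to be the average of its neighbors, and solve the resulting linear system.

```python
import numpy as np

def tutte_embedding(n, edges, boundary, boundary_pos):
    # Pin boundary vertices to a convex polygon; each interior vertex must be
    # the average of its neighbors (uniform positive weights). That condition
    # is a linear system, solved here for both coordinates at once.
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    pos = np.zeros((n, 2))
    for v, p in zip(boundary, boundary_pos):
        pos[v] = p
    bset = set(boundary)
    interior = [v for v in range(n) if v not in bset]
    idx = {v: k for k, v in enumerate(interior)}
    A = np.zeros((len(interior), len(interior)))
    b = np.zeros((len(interior), 2))
    for v in interior:
        A[idx[v], idx[v]] = len(nbrs[v])
        for w in nbrs[v]:
            if w in idx:
                A[idx[v], idx[w]] -= 1.0
            else:
                b[idx[v]] += pos[w]  # pinned neighbors move to the right side
    pos[interior] = np.linalg.solve(A, b)
    return pos

# Toy mesh: a square boundary plus one interior vertex joined to all corners.
corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
print(tutte_embedding(5, edges, [0, 1, 2, 3], corners)[4])  # -> [0.5 0.5]
```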

KK: Okay.

ID: It’s really powerful. It’s used a lot by people in computer graphics. They then can reason on that Tutte embedding in the plane to build other things and apply them back to the original mesh they had in 3-space for the complicated object they had. And that’s also what we’re trying to use it for. But we like the idea of going to non-convex polygons because that, for certain of the applications that we have, will give us much less deformation.

EL: So, is this related to, I know that you’ve done some work with art reconstruction, and actually in the back of the video here, I think I see some pictures of art that you have helped reconstruct. So is it related to that work?

ID: Actually, it isn’t, although if at some point we go to 3-D objects rather than the paintings we are doing now, it might become useful. But right now this collaboration is with biologists where we’re trying to, well, we have been working for several years and we’re getting good results, we are quantifying similarity of morphological surfaces. So the people we work with are working on bones and teeth. They’re paleontologists. Well, they’re interested in evolutionary anthropology, but they work a lot with teeth and bones. And there’s a lot of domain knowledge they have because they’ve seen so many, and they remember things. But of course in order to do science with it, they need to quantify how similar or dissimilar things are. And they have many methods to do that. And we are trying to work with them to try to automate some of these methods in ways that they find useful and ways that they seek. We’ve gotten very good results in this over the many years that we’ve worked with them. We’re very excited about recent progress we’ve made. In doing that, these surfaces already for their studies get scanned and triangulated. So they have these 3-d triangulations in space. When you work with organs and muscles and all these things in biology, usually you have 3-d shapes, and in many instances you have them voxelized, meaning you have the 3-d thing. But because they work with fossils, which often they cannot borrow from the place where the fossil is, they work with casts of those in very high-quality resin. And as a result of that, when they bring the cast back, they have the surface very accurately, but they don’t have the 3-d structure. So we work with the surfaces, and that’s why we work with these 3-d meshes of surfaces. And we then have to quantify how close and similar or dissimilar things are. And not just the whole thing, but pieces of it. We have to find ways in which to segment these in biologically meaningful ways. That’s where the embedding theorem comes in useful.

But it’s been very interesting to try to build mathematically a structure that will embody a lot of how biologists work. Traditionally what they do is, because they know so much about the collection of things they study, is they find landmarks, so they have this whole collection. They see all these things have this particular thing in common. It looks different and so on. But this landmark point that we mark digitally on these scanned surfaces is the same point in all of them. And the other point is the same. So they mark landmarks, maybe 20 landmarks. And then you can use that to define a mapping. But they asked us, “could we possibly do this landmark-free at some point?” And many biologists scoffed at the idea. How could you do this? At the beginning, of course, we couldn’t. We could find distances that were not so different from theirs, but the landmarks were not in the right places. But we then started realizing, look, why do they have this immense knowledge? Because they have seen so many more than just 20 that they’re now studying.

So we realized this was something where we should look at many collections, and there we have found, with a student of mine who made a breakthrough, the following: you have many surfaces, and you have a first way of mapping one to the other and then defining a similarity or not, depending on how faithful the mapping is. All these mappings are kind of wrong, not quite right. But because you have a large collection, there are so many little mistakes that are made that if you have a way of looking at it all, you can view those mistakes as the errors in a data set, and you can try to cancel them out. You can try to separate the wheat from the chaff to get the essence of what is in there. A little bit like students will learn when they have a mentor who tells them, no, that point is not really what you think, and so on. So that’s what we do now. We have large collections. We have initial mappings that are not perfect. And we use the fact that we have the large collection to define, then, from that large collection, using machine learning tools, a much better mapping. The biologists have been really impressed by how much better the mappings are once we do that. The wonderful thing is that we use this framework, of course we use machine learning tools, we use all these computer graphics tools for dealing with surfaces to be efficient. We frame it as a fiber bundle, and we learn. If you think of it, if you look at a large collection, every single one differs by little bits. We want to learn the structure of this set of teeth. Every tooth is a 2-d surface, and similar teeth can map to each other, so they’re all fibers, and we have a connection. And we learn that connection. We have a very noisy version of the connection. But because we know it’s a connection, and because it’s a connection that should be flat, because things can be brought back to their common ancestor, and so going from A to B and B to C, it should not matter in what order you go because all these mappings can go to the common ancestor, and so it should kind of commute, we can really get things out. We have been able to use that in order to build correspondences that biologists are now using for their statistical analysis.
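The actual pipeline is of course far more elaborate, but the core idea (many noisy pairwise maps whose independent errors cancel once you demand consistency around loops) can be caricatured with a standard spectral synchronization computation. The sketch below is entirely our own toy illustration, with rotation matrices standing in for maps between surfaces:

```python
import numpy as np

def synchronize(G, d=2):
    # G[i][j] is a (possibly noisy) measurement of R_i @ R_j.T, a "connection"
    # between pieces i and j. Stacking all blocks into one big symmetric matrix,
    # its top-d eigenvectors recover consistent frames R_i, up to one global
    # rotation, averaging away independent errors in the individual G[i][j].
    n = len(G)
    M = np.zeros((n * d, n * d))
    for i in range(n):
        for j in range(n):
            M[i*d:(i+1)*d, j*d:(j+1)*d] = G[i][j]
    _, vecs = np.linalg.eigh(M)
    V = vecs[:, -d:]                          # top-d eigenspace
    R = []
    for i in range(n):
        U, _, Vt = np.linalg.svd(V[i*d:(i+1)*d])
        R.append(U @ Vt)                      # nearest orthogonal matrix
    return R

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

rng = np.random.default_rng(0)
R_true = [rot(a) for a in rng.uniform(0, 2 * np.pi, size=5)]
G = [[R_true[i] @ R_true[j].T for j in range(5)] for i in range(5)]
R_est = synchronize(G)
# The relative maps come back exactly, even though each R_est[i] individually
# is only determined up to a single global rotation:
print(np.allclose(R_est[0] @ R_est[1].T, R_true[0] @ R_true[1].T))  # True
```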

KK: So differential geometry for biology.

ID: Yes. Discrete differential geometry, which if there is an oxymoron, that’s one.

KK: Wow.

ID: So we have a team that has a biologist, it has people who are differential geometers, we have a computational geometer, and he was telling me, “you know, for this particular piece of it, it would be really useful if we had a generalization of Tutte’s theorem to non-convex polygons,” and I said, “well, what’s Tutte’s theorem?” And so I learned it last week, and that’s why it’s today my favorite theorem.

EL: Oh wow, that’s really neat.

KK: So we’ll follow up with you next year and see what your favorite theorem is then.

EL: Yeah, it sounds like a really neat collaborative environment there where everybody has their own special knowledge that they’re bringing to the table.

ID: Yes, and actually I have found that to be very, very stimulating in my whole career. I like working with other people. I like when they give you challenges. I like feeling my brain at work, working together with their different expertise. And, well, once you’ve seen a couple of these collaborations at work, you get a feel for how you jump-start that, how you manage to get people talking about the problems they have and kind of brainstorm until a few problems get isolated that we really can get our teeth into and work on. And that itself is a dynamic you have to learn. I’m sure there are social scientists who know much more about this. In my limited setting, I now have some experience in starting these things up, and so my students and postdocs participate. And some of them have become good at propagating it. I’m very motivated by the fact that you can do applications of mathematics that are really nontrivial, and you can distill nontrivial problems out of what people think are mundane applications. But it takes some investing to get there. Because usually the people who have the applications—the biologists, in my case—they didn’t say, “we had this very particular fiber bundle problem.”

EL: Right.

ID: In fact, it’s my student who then realized we really had a fiber bundle, and that helped define a machine learning problem differently than it had been before. That then led to interesting results. So you need all the background, you need the sense of adventure of trying to build tools in that background that might be useful. And I’m convinced that for some of these tools that we build, when more pure mathematicians learn about them, they might distill things in their world from what we need. And this can lead to more pure mathematics ultimately.

KK: Sure, a big feedback loop.

ID: Yes, absolutely. That’s what I believe in very, very strongly. But part of my life is being open to when I hear about things, is there a meaningful mathematical way to frame this? Not just for the fun of it, but will it help?

EL: Yeah, well, as I mentioned, I was amazed by the way you’ve used math for this art reconstruction. I think I saw a talk or an article you wrote about it, and it was just fascinating. Things that I never would have thought would be applicable to that sphere.

ID: Yeah, and again it’s the case that there’s a whole lot of knowledge we have that could be applicable, and in that particular case, I have found that it’s a wonderful way to get undergraduates involved, because they learn these tools of image processing and small machine learning tools while working on these wonderful images. I mean, how much cooler is it to work on the Ghent altarpiece, or even less famous artwork, than to work on the standard test images of image analysis? So that has been a lot of fun. And actually, while I was in Belgium, the first event of the week of celebration we had was an IP4AI workshop, which is Image Processing for Art Investigation. Over the last 10-15 years it has really been taking off as a community. We’re trying to have this series of workshops where we have people who are interested in image processing and the mathematics and the engineering of that talk to people who have concrete problems in art conservation or understand art history. We try to have these workshops in museums, and we had it at a museum in Ghent, and it again was very, very stimulating and exhilarating.

KK: So another thing we like to do on this podcast is ask our guest to pair their favorite theorem with something. So I’m curious. What do you think pairs well with Tutte’s theorem?

ID: Well, I was already thinking of Saran wrap and the hair dryer, but…

KK: No, that’s perfect. Yeah.

ID: I think also—not for Tutte’s theorem, there I really think of Saran wrap and a hair dryer—but I also am using in some of the work in biology as well what people call diffusion, manifold learning through diffusion techniques. The idea is if you have a complicated world where you have many instances and some of them are very similar, and others are similar to them, and so on, but after you’ve moved 100 steps away, things look not similar at all anymore, and you’d like to learn the geometry of that whole collection.

KK: Right.

ID: Very often it’s given to you by zillions of parameters. I mean, like images: if you think of each pixel of the image as a variable, then you live in thousands, millions of dimensions. And you know that the whole collection of images is not something that fills that whole space. It’s a very thin, wispy set in there. You’d like to learn its geometry because if you learn its geometry, you can do much more with it. So one tool that was devised, I mean 10 years ago or so—it’s not deep learning, it’s not as recent as that—is manifold learning, in which you say, well, in every neighborhood, if you look at all the things that are similar to me, then I have a little flat disc; it’s close enough to flat that I can really approximate it as flat. And then I have another one, and so on, and I have two mental images for that. One mental image is this whole kind of crochet thing, where you make each piece of it with a crochet hook. You cover the whole thing with doilies, in a certain sense. You can knit it together, or crochet it together, and get the more complex geometry. Another image I often have is sequins. Every little sequin is a little disc.

EL: Yeah.

ID: But it can make it much more complex. So many of my mental images and pairings, if you want, are hands-on, crafty things.
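For readers who want to experiment with the diffusion idea, here is a bare-bones Python sketch of diffusion maps (the kernel width and the test data are arbitrary choices of ours; serious implementations add density normalization and sparse neighbor graphs):

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    # Gaussian affinities between data points, normalized into a Markov
    # transition matrix; the leading nontrivial eigenvectors give coordinates
    # that reflect how points communicate by diffusing through the data.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    keep = order[1:n_coords + 1]          # skip the trivial eigenvalue 1
    return vecs.real[:, keep] * vals.real[keep]

# Points near a circle: two diffusion coordinates recover the circle's shape.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)]
X += 0.02 * np.random.default_rng(1).normal(size=X.shape)
print(diffusion_map(X, eps=0.2).shape)  # (200, 2)
```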

KK: Do you knit and crochet yourself?

ID: Yes, I do. I like making things. I use metaphors like that a lot when I teach calculus because it’s kind of obvious. I find I use almost no sports metaphors. Sports metaphors are big in teaching mathematics, but I use much more handicraft metaphors.

KK: So what else should we talk about?

ID: One thing, actually, I was saying, I had such a lot of fun a couple of weeks ago when there was a celebration. The town in which I was born happens to have a fantastic new administrative building in which they have brought together all different services that used to be in different buildings in the town. The building was put together by fantastic architects, and it feels very mathematical. And it has beautiful shapes.

It’s in a mining town—I’m from a coal mining town—and so they have two hyperboloid shapes that they used to bring light down to the lower floors. That reminds people of the cooling towers of the coal mine. They have all these features in it that feel very mathematical. I told the mayor, I said, “Look, I’ll have this group of mathematicians, some of whom are very interested in outreach and education. Since there will be a party on Saturday and the conference only starts on Monday, we could on the Sunday have a brainstorming thing in which we try to design a clue-finding search through the building. We design little mathematical things in the building that fit with the whole design of the building. So you should have the interior designers as part of the workshop. I have no idea what will come out, but if something comes out, then we could find a little bit of money to realize it, and that could be something that adds another feature to the building.”

He loved the idea! I thought he was going to be…but he loved the idea. He talked to the person who runs the cafeteria about cooking a special meal for us. So we had a tagine because he was from Morocco. We wanted just sandwiches, but this man made this fantastic meal. We toured the building in the morning, and in the afternoon we had brainstorming with local high school teachers and mathematicians and so on. We put them in three small groups, and they came up with three completely different ideas, which all sound really interesting. And then one of them said, “Why don’t we make it an activity that either a family could do, one activity after the other, or a classroom could do? You’d typically have only an hour or an hour and a half, and the class would be too big, but you’d split the class into three groups, and each group does one of the activities. They all find a clue, and by putting the clues together, they find some kind of a treasure.”

KK: Oh, wow.

ID: So the ideas were great, and they link completely different things. One is more dynamical systems, one is actually embodying some group and graph theory (although we won’t call it that). And what I like is that one of the goals was to find ideas that would require mathematical thinking but that were not linked to curriculum, so you’d start thinking, how would I even frame this? And so on, and trying to give a stepwise progression in the problems so that they wouldn’t immediately face the full, complete, difficult thing but would have to find ways of building tools that would get them there. They did excellent work. Now each team has a group leader who is working out details over email. We have committed to working out all the details of the texts within a year and putting the materials together so it can actually be realized. That was the designers’ part: can we make something like that not too expensive? They said, oh yeah, with foam and fabric. And I know they will do it.

A year from now I will see whether it all worked.

EL: So will you come to Salt Lake next and do that in my town?

ID: Do you have a great building in which it would work?

EL: I’m trying to think.

ID: We’re linking it to a building.

EL: I’ll have to think about that.

KK: Well, we have a brand new science museum here in Gainesville. It’s called the Cade Museum. So Dr. Cade is the man who invented Gatorade, you know, the sports drink.

ID: Yes.

KK: And his family got together and built this wonderful new science museum. I haven’t been yet. It just opened a few months ago.

ID: Oh wow.

KK: I’m going to walk in there thinking about this idea.

ID: Yeah, and if you happen to be in Belgium, I can send you the location of this building, and you can have a look there.

KK: Okay. Sounds excellent. Well, this has been great, Ingrid. We really appreciate your taking your time to talk to us today.

ID: Well thank you.

KK: We’re really very honored.

ID: Well it’s great to have this podcast, the whole series.

KK: Yeah, we’re having a good time.

EL: We also want to thank our listeners for listening to us for a year. I’m just going to assume that everyone has listened religiously to every single episode. But yeah, it’s been a lot of fun to put this together for the past year, and we hope there will be many more.

ID: Yes, good luck with that.

KK: Thanks.

ID: Bye.

Episode 22 - Ken Ribet

Evelyn Lamb: Welcome to My Favorite Theorem, a podcast about math. I’m Evelyn Lamb, one of your cohosts, and I’m a freelance math and science writer in Salt Lake City, Utah.

Kevin Knudson: Hi, I’m Kevin Knudson, a professor of mathematics at the University of Florida. How are you doing, Evelyn? Happy New Year!

EL: Thanks. Our listeners listening sometime in the summer will really appreciate the sentiment. Things are good here. I promised myself I wouldn’t talk about the weather, so instead in the obligatory weird banter section, I will say that I just finished a sewing project, only slightly late, as a holiday gift for my spouse. So that was fun. I made some napkins. Most sewing projects are non-Euclidean geometry because bodies are not Euclidean.

KK: Sure.

EL: But this one was actually Euclidean geometry, which is a little easier.

KK: Well I’m freezing. No one ever believes this about Florida, but I’ve never been so cold in my life as I have been in Florida, with my 70-year-old, poorly insulated home, when highs are only in the 40s. It’s miserable.

EL: Yeah.

KK: But the beauty of Florida, of course, is that it ends. Next week it will be 75. I’m excited about this show. This is going to be a good one.

EL: Yes, so we should at this point introduce our guest. Today we are very happy to have Ken Ribet on the show. Ken, would you like to tell us a little bit about yourself?

Ken Ribet: Okay, I can tell you about myself professionally first. I’m a professor of mathematics at the University of California Berkeley, and I’ve been on the Berkeley campus since 1978, so we’re coming up on 40 years, although I’ve spent a lot of time in France and elsewhere in Europe and around the country. I am currently president of the American Mathematical Society, which is how a lot of people know me. I’m the husband of a mathematician. My wife is Lisa Goldberg. She does statistics and economics and mathematics, and she’s currently interested in particular in the statistics of sport. We have two daughters who are in their early twenties, and they were home for the holidays.

KK: Good. My son started college this year, and this was his first time home. My wife and I were super excited for him to come home. You don’t realize how much you’re going to miss them when they’re gone.

KR: Exactly.

EL: Hi, Mom! I didn’t go home this year for the holidays. I went home for Thanksgiving, but not for Christmas or New Year.

KK: Well, she missed you.

EL: Sorry, Mom.

KK: So, Ken, you gave us a list of something like five theorems that you were maybe going to call your favorite, which, it’s true, it’s like picking a favorite child. But what did you settle on? What’s your favorite theorem?

KR: Well, maybe I should say first that talking about one’s favorite theorem really is like talking about one’s favorite child, and some years ago I was interviewed for an undergraduate project by a Berkeley student, who asked me to choose my favorite prime number. I said, well, you really can’t do that because we love all our prime numbers, just like we love all our children, but then I ended up reciting a couple of them offhand, and they made their way into the publication that she prepared. One of them is the six-digit prime number 144169, which I encountered early in my research.

KK: That’s a good one.

KR: Another is 1234567891, which was discovered in the 1980s by a senior mathematician who was being shown a factorization program. And he just typed some 10-digit number into the program to see how it would factor it, and it turned out to be prime!

KK: Wow.

KR: This was kind of completely amazing. So it was a good anecdote, and that reminded me of prime numbers. I think that what I should cite as my favorite theorem today, for the purposes of this encounter, is a theorem about prime numbers. The prime numbers are the ones that can’t be factored, numbers bigger than 1. So for example 6 is not a prime number because it can be factored as 2x3, but 2 and 3 are prime numbers because they can’t be factored any further. And one of the oldest theorems in mathematics is the theorem that there are infinitely many prime numbers. The set of primes keeps going on to infinity, and I told one of my daughters yesterday that I would discuss this as a theorem. She was very surprised that it’s not, so to speak, obvious. And she said, why wouldn’t there be infinitely many prime numbers? And you can imagine an alternative reality in which the largest prime number had, say, 50,000 digits, and beyond that, there was nothing. So it is a statement that we want to prove. One of the interesting things about this theorem is that there are myriad proofs that you can cite. The best one is due to Euclid, from about 2,300 years ago.

Many people know that proof, and I could talk about it for a bit if you’d like, but there are several others, probably many others, and people say that it’s very good to have lots of proofs of this one theorem because the set of prime numbers is a set that we know a lot about, and yet in some ways not that much. Primes are in some sense mysterious, and by having some alternative proofs of the fact that there are infinitely many primes, we could perhaps say we are gaining more and more insight into the set of prime numbers.

EL: Yeah, and if I understand correctly, you’ve spent a lot of your working life trying to understand the set of prime numbers better.

KR: Well, so that’s interesting. I call myself a number theorist, and number theory began with very, very simple problems, really enunciated by the ancient Greeks. Diophantus is a name that comes up frequently. And you could say that number theorists are engaged in trying to solve problems from antiquity, many of which remain as open problems.

KK: Right.

KR: Like most people in professional life, number theorists have become specialists, and all sorts of quote-unquote technical tools have been developed to try to probe number theory. If you ask a number theorist on the ground, as CNN likes to say, what she’s working on, it’ll be some problem that sounds very technical, is probably hard to explain to a general listener, and has only a remote connection to the original problems that motivated the study. For me personally, one of the wonderful events that occurred in my professional life was the proof of Fermat’s last theorem in the mid-1990s, because the proof uses highly technical tools that were developed with the idea that they might someday shed light on classical problems, and lo and behold, some problem that was then around 350 years old was solved using the techniques that had been developed principally in the last part of the 20th century.

KK: And if I remember right — I’m not a number theorist — were you the person who proved that the Taniyama-Weil conjecture implied Fermat’s Last Theorem?

KR: That’s right. The proof consists of several components, and I proved that something implies Fermat’s Last Theorem.

KK: Right.

KR: And then Andrew Wiles partially, with the help of Richard Taylor, proved that something. That something is the statement that elliptic curves (whatever they are) have a certain property called modularity, whatever that is.

EL: It’s not fair for you to try to sneak an extra theorem into this podcast. I know Kevin baited you into it, so you’ll get off here, but we need to circle back around. You mentioned Euclid’s proof of the infinitude of primes, and that’s probably the proof most people are most familiar with. Do you want to outline that a little bit? Actually, not too long ago I was talking to the next door neighbors’ 11-year-old kid, who was interested in prime numbers, and the mom knows we’re mathematicians, so we were talking about it, and he was asking what the biggest prime number was, and we talked about how one might figure out whether there was a biggest prime number.

KR: Yeah, well, in fact when people talk about the proof, often they talk about it in a very circular way. They start with the statement “suppose there were only finitely many primes,” and then this and this and this and this, but in fact, Euclid’s proof is perfectly direct and constructive. What Euclid’s proof does is, you could start with no primes at all, but let’s say we start with the prime 2. We add 1 to it, and we see what we get, and we get the number 3, which happens to be prime. So we have another prime. And then what we do is take 2 and multiply it by 3. 2 and 3 are the primes that we’ve listed, and we add 1 to that product. The product is 6, and we get 7. We look at 7 and say, what is the smallest prime number dividing 7? Well, 7 is already prime, so we take it, and there’s a very simple argument that when you do this repeatedly, you get primes that you’ve never seen before. So you start with 2, then you get 3, then you get 7. If you multiply 2x3x7, you get 6x7, which is 42. You add 1, and you get 43, which again happens to be prime. If you multiply 2x3x7x43 and add 1, you get a big number that I don’t recall offhand. You look for the prime factorization of it, and you find the smallest prime, and you get 13. You add 13 to the list. You have 2, 3, 7, 43, 13, and you keep on going. The sequence you get has its own Wikipedia page. It’s the Euclid-Mullin sequence, and it’s kind of remarkable that after you repeat this process around 50 times, you get to a number that is so large that you can’t figure out how to factor it. You can do a primality test and discover that it is not prime, but it’s a number analogous to the numbers that occur in cryptography, where you know the number is not prime, but you are unable to factor it using current technology and hardware. So the sequence is an infinite sequence by construction. But it ends, as far as Wikipedia is concerned, around the 51st term, I think it is, and then the page says that subsequent terms are not known explicitly.
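
[If you would like to run the construction KR just described, here is a minimal Python sketch; the function names are ours, and plain trial division is enough for the first eight terms of the Euclid-Mullin sequence.]

```python
def smallest_prime_factor(n):
    """Smallest prime dividing n, found by trial division (fine at this scale)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime


def euclid_mullin(k):
    """First k terms: repeatedly take the smallest prime factor of
    (product of the primes found so far) + 1, exactly as in Euclid's proof."""
    primes, product = [], 1
    for _ in range(k):
        p = smallest_prime_factor(product + 1)
        primes.append(p)
        product *= p
    return primes


print(euclid_mullin(8))  # [2, 3, 7, 43, 13, 53, 5, 6221671]
```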

EL: Interesting! It’s kind of surprising that it explodes that quickly and it doesn’t somehow give you all of the small prime numbers quickly.

KR: It doesn’t explode in the sense that it just gets bigger and bigger. You have 43, and it drops back to 13, and if you look at the elements of the sequence on the page, which I haven’t done lately, you’ll see that the numbers go up and then down. There’s a conjecture, which was maybe made without too much evidence, that as you go along the sequence, you’ll get all prime numbers.

EL: Okay. I was about to ask that, if we knew if you would eventually get all of them, or end up with some subsequence of them.

KR: Well, the expectation, which as I say is not based on really hard evidence, is that you should be able to get everything.

KK: Sure. But is it clear that this sequence is actually infinite? How do we know we don’t get a bunch of repeats after a while?

KR: Well, because the principle of the proof is that if you have a prime that’s appeared on the list, it will not divide the product plus 1. It divides the product, but it doesn’t divide 1, so it can’t divide the new number. So when you take the product plus 1 and you factor it, whatever you get will be a quote-unquote new prime.
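
[In symbols, the step KR is describing is this one-line divisibility fact:]

```latex
% A prime p on the list divides the product but not 1,
% so it cannot divide the product plus 1:
p \mid p_1 p_2 \cdots p_k
\quad\text{and}\quad
p \nmid 1
\quad\Longrightarrow\quad
p \nmid \left( p_1 p_2 \cdots p_k + 1 \right).
```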

KK: So this is a more direct version of what I immediately thought of, the typical contradiction proof, where if you only had a finite number of primes, you take your product, add 1, and ask what divides it? Well, none of those primes divides it. Therefore, contradiction.

KR: Yes, it’s a direct proof. Completely algorithmic, recursive, and you generate an infinite set of primes.

KK: Okay. Now I buy it.

EL: I’m glad we did it the direct way. When I’ve taught things like this in the past, setting it up as a proof by contradiction is a good way to get the proof, but since it doesn’t really need the contradiction, you can kind of polish it up and make it a little prettier by taking out the contradiction step.

KR: Right.

KK: And for your 11-year-old friend, contradiction isn’t what you want to do, right? You want a direct proof.

KR: Exactly. You want that friend to start computing.

KK: Are there other direct proofs? There must be.

KR: Well, another direct proof is to consider the numbers known as Fermat numbers. I’ll tell you what the Fermat numbers are. You take the powers of 2, so the powers of 2 are 1, 2, 4, 8, 16, 32, and so on. And you consider those as exponents. So you take 2 to those powers of 2. 2^1, 2^2, 2^4, and so on. To these numbers, you add the number 1. So you start with the exponent 2^0, which is 1: 2^1 is 2, and you add 1 and get 3. Then the next power of 2 is 2: 2^2 is 4, and you add 1 and you get 5. The next power of 2 is 4. 2^4 is 16. You add 1, and you get 17. The next power of 2 is 8. 2^8 is 256, and you add 1 and get 257. So you have this sequence, which is 3, 5, 17, 257, 65537, and the first elements of the sequence are prime numbers. 257 is a prime number. And it’s rather a famous gaffe of Fermat that he apparently claimed that all the numbers in the sequence were prime numbers, that you could just generate primes that way. But in fact, if you take the next one after 65537, it will not be prime, and I think all subsequent numbers that have been computed have been verified to be non-prime. So you get these Fermat numbers, a whole sequence of them, an infinite sequence of them, and it turns out that a very simple argument shows you that any two different numbers in the sequence have no common factor at all. And so, for example, if you take 257 and, say, the 19th Fermat number, that pair of numbers will have no common factor. So since 257 happens to be prime, you could say 257 doesn’t divide the 19th Fermat number. But the 19th Fermat number is a big number. It’s divisible by some prime. And you can take the sequence of numbers and for each element of the sequence, take the smallest prime divisor, and then you get a sequence of primes, and that’s an infinite sequence of primes. The primes are all different because no two of the numbers have a common factor.
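
[A small sketch, ours rather than the episode’s, that checks the two facts this proof needs on the first six Fermat numbers: any two of them are coprime, so their smallest prime factors are six different primes.]

```python
from math import gcd


def fermat(n):
    """The n-th Fermat number, 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1


def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n


F = [fermat(n) for n in range(6)]  # 3, 5, 17, 257, 65537, 4294967297
# Any two distinct Fermat numbers are coprime...
assert all(gcd(F[i], F[j]) == 1 for i in range(6) for j in range(i + 1, 6))
# ...so their smallest prime factors form a list of six distinct primes.
print([smallest_prime_factor(f) for f in F])  # [3, 5, 17, 257, 65537, 641]
```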

KK: That’s nice. I like that proof.

EL: Nice! It’s kind of like killing a mosquito with a sledgehammer. It’s a big sequence of these somewhat complicated numbers, but there’s something very fun about that. Probably not fun to try to kill mosquitoes with a sledgehammer. Don’t try that at home.

KK: You might need it in Florida. We have pretty big ones.

KR: I can tell you yet a third proof of the theorem if you think we have time.

KK: Sure!

KR: This proof I learned about, and it’s an exercise in a textbook that’s one of my all-time favorite books to read. It’s called A Classical Introduction to Modern Number Theory by Kenneth Ireland and Michael Rosen. When I was an undergraduate at Brown, Ireland and Rosen were two of my professors, and Ken Ireland passed away, unfortunately, about 25 years ago, but Mike Rosen is still at Brown University and is still teaching. They have as an exercise in their book a proof due to a mathematician at Kansas State, I think it was, named Eckford Cohen, and he published a paper in the American Mathematical Monthly in 1969. And the proof is very simple. I’ll tell you the gist of it. It’s a proof by contradiction. What you do is, for each number n, you take the geometric mean of the first n numbers. What that means is you take the numbers 1, 2, 3, you multiply them together, and in the case of 3, you take the cube root of that number. We could even do that for 2: you take 1 and 2 and multiply them together and take the square root, about 1.41. And these numbers that you get are smaller than the averages of the numbers. For example, the square root of 2 is less than 1.5, and the cube root of 6, of 1x2x3, is less than 2, which is the average of 1, 2, and 3. But nevertheless these numbers get pretty big, and you can show using high school mathematics that these numbers approach infinity, they get bigger and bigger. You can show, using an argument by contradiction, that if there were only finitely many primes, these numbers would not get bigger and bigger; they would stop and all be less than some number, depending on the primes that you could list out.
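
[One standard way to make this precise, though not necessarily Cohen’s exact bookkeeping: the exponent of a prime p in n! is at most n/(p-1), so with only finitely many primes the geometric mean (n!)^(1/n) would be bounded by a constant, whereas it actually grows without bound. A quick numerical sketch, ours:]

```python
from math import exp, log


def geometric_mean(n):
    """(n!)^(1/n), computed with logarithms to avoid overflow."""
    return exp(sum(log(k) for k in range(1, n + 1)) / n)


for n in (2, 10, 100, 1000):
    print(n, round(geometric_mean(n), 2))  # 1.41, 4.53, 37.99, 369.49: unbounded

# If, say, 2, 3, and 5 were the only primes, the geometric mean could never
# exceed 2^(1/1) * 3^(1/2) * 5^(1/4), about 5.18 -- already beaten at n = 10.
print(2 ** (1 / 1) * 3 ** (1 / 2) * 5 ** (1 / 4))
```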

EL: Huh, that’s really cool.

KK: I like that.

KR: That’s kind of an amazing proof, and you see that it has absolutely nothing to do with the two proofs I told you about before.

KK: Sure.

EL: Yeah.

KK: Well that’s what’s so nice about number theory. It’s such a rich field. You can ask these seemingly simple questions and prove them 10 different ways, or not prove them at all.

KR: That’s right. When number theory began, I think it was a real collection of miscellany. People would study equations one by one, and they’d observe facts and record them for later use, and there didn’t seem to be a lot of order to the garden. And the mathematicians who tried to introduce the conceptual techniques in the last part of the 20th century, Carl Ludwig Siegel, André Weil, Jean-Pierre Serre, and so on, these people tried to make everything be viewed from a systematic perspective. But nonetheless if you look down at the fine grain, you’ll see there are lots of special cases and lots of interesting phenomena. And there are lots of facts that you couldn’t predict just by flying at 30,000 feet and trying to make everything be orderly.

EL: So, I think now it’s pairing time. So on the show, we like to ask our mathematicians to pair their theorem with something—food, beverage, music, art, whatever your fancy is. What have you chosen to pair with the infinitude of primes?

KR: Well, this is interesting. Just as I’ve told you three proofs of this theorem, I’d like to discuss a number of possible pairings. Would that be okay?

KK: Sure. Not infinitely many, though.

KR: Not infinitely many.

EL: Yeah, one for each prime.

KR: One thing is that prime numbers are often associated with music in some way, and in fact there is a book by Marcus du Sautoy, which is called The Music of the Primes. So perhaps I could say that the subject could be paired with his book. Another thing I thought of was the question of algorithmic recursive music. You see, we had a recursive description of a sequence coming from Euclid’s method, and yesterday I did a Google search on recursive music, and I got lots of hits. Another thing that occurred to me is the word prime, because I like wine a lot and because I’ve spent a lot of time in France, it reminds me of the phrase vin primeur. So you probably know that in November there is a day when the Beaujolais nouveau is released all around the world, and people drink the wine of the year, a very fresh young wine with lots of flavor, low alcohol, and no tannin, and in France, the general category of new wines is called vin primeur. It sounds like prime wines. In fact, if you walk around in Paris in November or December and you try to buy vin primeur, you’ll see that there are many others, many in addition to the Beaujolais nouveau. We could pair this theorem with maybe a Côtes du Rhône primeur or something like that.

But finally, I wanted to settle on one thing, and a few days ago, maybe a week ago, someone told me that in 2017, actually just about a year ago, a woman named Maggie Roche passed away. She was one of three sisters who performed music in the 70s and 80s, and I’m sure beyond. The music group was called the Roches, R-O-C-H-E, and they were a fantastic hit, and they are viewed as predecessors of, for example, the Indigo Girls and a number of groups who now perform. They would stand up, three women with guitars. They had wonderful harmonies, very simple songs, and they would weave their voices in and out. And I knew about their music when it first came out and found myself by accident in a record store in Berkeley the first year I was teaching, which was 1978-79, long ago, and the three Roches were there signing record albums. These were vinyl albums at the time, and they had big record jackets with room for signatures, and I went up to Maggie and started talking to her. I think I spoke to her for 10 or 15 minutes. It was just kind of an electrifying experience. I just felt somehow like I had bonded with someone whom I never expected to see again, and never did see again. I bought one or two of the albums and got their signatures. I no longer have the albums. I think I left them in France. But she made a big impression on me. So if I wanted to pair one piece of music with this discussion, it would be a piece by the Roches. There are lots of them on Youtube. One, called the Hammond Song, is especially beautiful, and I will officially declare that I am pairing the infinitude of primes with the Hammond Song by the Roches.

EL: Okay, I’ll have to listen to that. I’m not familiar with them, so it sounds like a good thing to listen to once we hang up here.

KK: We’ll link it in the show notes, too, so everyone can see it.

EL: That sounds like a lot of fun. It’s always a cool experience to feel like you’re connecting with someone like that. I went to a King’s Singers concert one time a few years ago and got a CD signed, and it’s amazing how warm and friendly people can be sometimes even though they’re very busy and very fancy and everything.

KR: I’ve been around a long time, and people don’t appreciate the fact that until the last decade or two, people who performed publicly were quite accessible. You could just go up to people before concerts or after concerts and chat with them, and they really enjoyed chatting with the public. Now there’s so much emphasis on security that it’s very hard to actually be face to face with someone whose work you admire.

KK: Well this has been fun. I learned some new proofs today.

KR: Fun for me too.

EL: Thanks a lot for being on the show.

KR: It’s my great pleasure, and I love talking to you, and I love talking about the mathematics. Happy New Year to everyone.

[outro]

Episode 21 - Jana Rodriguez Hertz

Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m one of your hosts, Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: I’m all right. I’m excited because we’re trying a different recording setup today. In a few of our recent episodes I’ve had some connection problems, so I’m hoping that everything goes well, and I’ve probably jinxed myself by saying that.

KK: No, no, it’s going to be fine. Positive thinking.

EL: Yeah, I’m hoping that the blips that our listeners may have heard in recent episodes won’t be happening. How about you? Are you doing well?

KK: I’m fine. Spring break is next week, and we’ve had the air conditioning on this week. This is the absurdity of my life. It’s February, and the air conditioning is on. But it’s okay. It’s nice. My son is coming home for spring break, so we’re excited.

EL: Great. We’re very happy today to have Jana Rodriguez-Hertz on the show. So, Jana, would you like to tell us a little bit about yourself?

Jana Rodriguez-Hertz: Hi, thank you so much. I’m originally from Argentina, I have lived in Uruguay for 20 years, and now I live in China, in Shenzhen.

EL: Yeah, that’s quite a big change. When we were first talking, first emailing, I mean, you were in Uruguay then, you’re back in China now. What took you out there?

JRH: Well, we got a nice job offer, and we thought we’d like to try. We said, why not, and we went here. It’s nice. It’s a totally different culture, but I’m liking it so far.

KK: What part of China are you in, which university?

JRH: At the Southern University of Science and Technology in Shenzhen. Shenzhen is in mainland China, right in front of Hong Kong.

KK: Okay. That’s very far south.

EL: I guess February weather isn’t too bad over there.

JRH: It’s still winter, but it’s not too bad.

EL: Of course, that will be very relevant to our listeners when they hear this in a few months. We’re glad to have you here. Can you tell us about your favorite theorem?

JRH: Well, you know, I live in China now, and every noon I see a dynamical process that looks like the one in the theorem I want to talk to you about, which describes the dynamical properties of Smale’s horseshoe. Here it goes. You know, at the canteen of my university, there is a cook that makes noodles.

EL: Oh, nice.

JRH: He takes the dough and stretches it and folds it without mixing, and stretches it and folds it again, until the strips are so thin that they’re ready to be noodles, and then he cuts the dough. Well, this procedure can be described as a chaotic dynamical system, which is Smale’s horseshoe.

KK: Okay.

JRH: So I want to talk to you about this. But we will do it in a mathematical model so it is more precise. So suppose that the cook has a piece of dough in a square mold, say of side 1. Then the cook stretches the dough so it becomes three times longer in the vertical sense but 1/3 of its original width in the horizontal sense. Then he folds it and puts the dough again in the square mold, making a horseshoe form. So the lower third of the square is converted into a rectangle of height 1 and width 1/3 and will be placed on the left side of the mold. The middle third of the square is going to be bent and will go outside the mold and will be cut. The upper third will be converted to another rectangle of height 1 and width 1/3 and will be put upside down in the right side of the mold. Do you get it?

KK: Yeah.

JRH: Now in the mold there will be two connected components of dough, one in the left third of the square and one in the right third of the square, and the middle third will be empty. In this way, we have obtained a map from a subset of the square into another subset of the square. And each time this map is applied, that is, each time we stretch and fold the dough, and cut the bent part, it’s called a forward iteration. So in the first forward iteration of the square, we obtain two rectangles of width 1/3 and height 1. Now in the second forward iteration of the square, we obtain four rectangles of width 1/9 and height 1. Two rectangles are contained in the left third, two rectangles in the right third. These are four noodles in total.

Counting from left to right, we will see one noodle of width 1/9, one gap of width 1/9, a second noodle of width 1/9, a gap of 1/3, and two more noodles of width 1/9 separated by a gap of width 1/9. Is that okay?

KK: Yes.

JRH: So if we iterate n times, we will obtain 2^n noodles of width (1/3)^n. And if we let the number of iterations go to infinity, that is, if we stretch and fold infinitely many times, cutting each time the bent part, we will obtain a Cantor set of vertical noodles.
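
[A toy bookkeeping sketch, ours, of the surviving vertical strips; it is exactly the middle-thirds Cantor construction JRH is describing.]

```python
def forward_strips(n):
    """x-intervals of the vertical strips of dough after n stretch-and-fold
    steps: each step keeps the left and right thirds of every interval."""
    strips = [(0.0, 1.0)]
    for _ in range(n):
        strips = [piece
                  for a, b in strips
                  for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return strips


for n in (1, 2, 3):
    s = forward_strips(n)
    print(n, len(s), round(s[0][1] - s[0][0], 4))
# 1: 2 strips of width 1/3;  2: 4 of width 1/9;  3: 8 of width 1/27.
```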

KK: Yes.

EL: Right. So as you were saying the ninths with these gaps, and this 1/3, I was thinking, huh, this sounds awfully familiar.

KK: Yeah, yeah.

EL: We’ll include a picture of the Cantor set in the show notes for people to look at.

JRH: When we iterate forward, in the limit we will obtain a Cantor set of noodles. We can also iterate backwards. And what is that? We want to know for each point in the square, that is, for each flour particle of the dough in the mold, where it was before the cook stretched vertically and folded the dough the first time, where it came from. Now we recall that the forward iteration was to stretch in the vertical sense and fold, so if we run the film backwards, we will see that in the backward sense the cook has squeezed in the vertical sense and stretched in the horizontal sense and folded, okay?

EL: Yes.

JRH: Each time we iterate backwards, we stretch in the horizontal sense and fold in that direction. In this way, the left vertical rectangle is converted into the lower third rectangle, and the right vertical rectangle is converted into the upper third rectangle, and the bent part is cut. So the first backward iteration gives two horizontal rectangles of height 1/3, the second gives four horizontal rectangles of height 1/9 with gaps between them, and if we let the iterations go to infinity, we will obtain a Cantor set of horizontal noodles.

When we iterate forward and consider only what’s left in the mold, we start with two horizontal rectangles and finish with two vertical rectangles. When we iterate backwards we start with two vertical rectangles and finish with two horizontal rectangles. Now we want to consider the particles that stay forever in the mold, that is, the points so that all of the forward iterates and all the backwards iterates stay in the square. This will be the product of two middle-thirds Cantor sets. It will look more like grated cheese than noodles.

KK: Right.

JRH: This set will be called the invariant set.

KK: Although they’re not pointwise fixed, they just stay inside the set.

JRH: That’s right. They stay inside the square. In fact, not only will they be not fixed, they will have a chaotic behavior. That is what I want to tell you about.

KK: Okay.

JRH: This is one of the simplest models of an invertible map that is chaotic. So what is chaotic dynamics anyway? There is no universally accepted definition of that. But one that is more or less accepted says a system is chaotic if it has three properties: periodic points are dense, it is topologically mixing, and it has sensitivity to initial conditions. And let me explain a little bit about this.

A periodic point is a particle of flour that has a trajectory that comes back exactly to the position where it started. This is a periodic point. What does it mean that they are dense? As close as you wish to any point, you can find one of these.

Topologically mixing, you can imagine, means that the dough gets completely mixed: if you take any two small squares and iterate one of them, it will get completely mixed with the other one forever. From some iteration on, you will get dough from the first square in the second square, always. That is topological mixing.

I would like to focus on the sensitivity to initial conditions because this is the essence of chaos.

EL: Yeah, that’s kind of what you think of for the idea of chaos. So yeah, can you talk a little about that?

JRH: Yeah. This means that any two particles of flour, no matter how close they are, will get uniformly separated by the dynamics. In fact, they will be 1/3 apart for some forward or backward iterate. Let me explain this because it is not difficult. Remember that we had the lower third rectangle? Call this lower third rectangle 0, and the upper third rectangle 1. Then we will see that for some forward or backward iterate, any two different particles will be in different horizontal rectangles. One will be in 1, and the other one will be in the 0 rectangle. How is that? If two particles are at different heights, then either they are already in different rectangles, so we are done, or else they are in the same rectangle. But if they are in the same rectangle, the cook stretches the vertical distance by 3. Every time they are in the same horizontal rectangle, their vertical distance is stretched by 3, so they cannot stay forever in the same rectangle unless they are at the same height.

KK: Sure.

JRH: If they are at different heights, they will eventually get separated. On the other hand, if they are in the same vertical rectangle but at different x-coordinates, if we iterate backwards, the cook will stretch the dough in the horizontal sense, so the horizontal distance will be tripled. Each time they are in the same vertical rectangle, their horizontal distance is tripled, so they cannot stay forever in the same vertical rectangle unless their horizontal distance is 0. But if they are in different positions, then either their horizontal distance is positive or the vertical distance is positive. So in some iterate, they will be 1/3 apart. Not only that, if they are in two different vertical rectangles, then in the next backwards iterate, they are in different horizontal rectangles. So we can state that any two different particles for some iterate will be in different horizontal rectangles, no matter how close they are. So that’s something I like very much because each particle is defined by its trajectory.
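
[A one-dimensional caricature of this separation argument, ours rather than JRH’s: track only the heights, which are stretched by a factor of 3 inside each horizontal rectangle, and watch two particles that start a billionth apart become 1/3 apart.]

```python
def step(y):
    """Toy stand-in for the vertical dynamics on the invariant set: heights in
    rectangle 0 (lower third) and rectangle 1 (upper third) are stretched by 3."""
    return 3 * y if y < 0.5 else 3 * y - 2


y1, y2 = 0.1, 0.1 + 1e-9  # two particles a billionth apart
for n in range(30):
    if abs(y1 - y2) >= 1 / 3:
        print("1/3 apart after", n, "steps")  # 18 steps: 1e-9 * 3^18 > 1/3
        break
    y1, y2 = step(y1), step(y2)
```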

EL: Right, so you can tell exactly what you are by where you’ve been.

JRH: Yeah, two particles are defined by what they have done and what they will do. That allows something that is very interesting in this type of chaotic dynamics, which is symbolic dynamics. Now you know that any two points in some iterate will be in distinct horizontal rectangles, so you can code any particle by its position in the horizontal rectangles. If a particle starts in the 0 rectangle, you will assign to it a bi-infinite sequence whose zero position is 0. If the first iterate is in the rectangle 1, then in the first position you will put a 1. In this way you can code any particle by a bi-infinite sequence of zeroes and ones. In dynamics this is called conjugation. You can conjugate the horseshoe map with the shift on the set of bi-infinite sequences. This means that you can code the dynamics. Anything that happens in the set of bi-infinite sequences happens in the horseshoe and vice versa. This is very interesting because you will find particles that describe any trajectory that you wish because you can write any sequence of zeroes and ones as you wish. You will have all Shakespeare coded in the horseshoe map, all of Donald Trump’s tweets will be there too.
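
[A matching sketch of the coding, again in the toy one-dimensional stand-in rather than the full horseshoe: record which rectangle, 0 or 1, the particle visits at each step.]

```python
def step(y):
    # the same toy vertical dynamics as in the previous sketch
    return 3 * y if y < 0.5 else 3 * y - 2


def itinerary(y, n):
    """First n symbols of the code: 0 when the particle is in the lower-third
    rectangle, 1 when it is in the upper third."""
    code = []
    for _ in range(n):
        code.append(0 if y < 0.5 else 1)
        y = step(y)
    return code


print(itinerary(0.10, 8))  # [0, 0, 1, 1, 0, 0, 1, 1] -- a period-four particle
print(itinerary(0.25, 8))  # [0, 1, 0, 1, 0, 1, 0, 1] -- different point, different code
```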

KK: Let’s hope not. Sad!

JRH: Everything will be there.

EL: History of the world, for better and worse.

KK: What about Borges’s Library of Babel? It’s in there too, right?

JRH: If you can code it with zeroes and ones, it’s there.

EL: Yeah, that’s really cool. So where did you first run into this theorem?

JRH: When I was a graduate student, I ran into chaos, and I first ran into a baby model of this, which is the tent map. The tent map is a map of the interval, and that was very cool. Unlike this model, it’s coded by one-sided sequences. And later on, I went to IMPA [Instituto de Matemática Pura e Aplicada] in Rio de Janeiro, and I learned that Smale, the author of this example, had produced this example while he was at IMPA in Rio.

KK: Right.

JRH: It was cool. I learned a little more about dynamics, about hyperbolic dynamics, and in fact, now I’m working in partially hyperbolic dynamics, which is very much related to this, so that is why I like it so much.

KK: Yeah, one of my colleagues spends a lot of time in Brazil, and he’s still studying the tent map. It’s remarkable, I mean, it’s such a simple model, and it’s remarkable what we still don’t know about it. And this is even more complicated, it’s a 2-d version.

EL: So part of this show is asking our guests to pair their theorem with something. I have an idea of what you might have chosen to pair with your theorem, but can you tell us what you’ve chosen?

JRH: Yeah, I like this sensitivity to initial conditions because you are defined by your trajectory. That’s pretty cool. For instance, if you consider humans as particles in a system, then nowadays in Shenzhen I am the only one who was born in Argentina, lived in Uruguay, and now lives in Shenzhen.

EL: Oh wow.

JRH: This is a city of 20 million people. But I am defined by my trajectory. And I’m sure any one of you is defined by your trajectory. If you look at a couple of things in your life, you will discover that you are the only person in the world who has done that. That is something I like. You’re defined either by what you’ve done or by what you will do.

EL: Your path in life. It’s interesting that you go there because when I was talking to Ami Radunskaya, who also chose a theorem in dynamics, she also talked about how her theorem related to this idea of your path in life, so that’s a fun idea.

JRH: I like it.

KK: Of course, I was thinking about taffy-pulling the whole time you were describing the horseshoe map. You’ve seen these machines that pull taffy, I think they’re patented, and everything’s getting mixed up.

EL: Yeah.

JRH: All of this mixing is what makes us unique.

EL: So you can enjoy this theorem while pondering your life’s path and maybe over a bowl of noodles with some taffy for dessert.

KK: This has been fun. I’d never really thought too much about the horseshoe map. I knew it as this classical example, and I always heard it was so complicated that Smale decided to give up on dynamics, and I’m sure that’s false. I know that’s false. He’s a brilliant man.

JRH: Actually, he’s coming to a conference we’re organizing this year.

EL: Oh, neat.

KK: He’s still doing amazingly interesting stuff. I work in topological data analysis, and he’s been working in that area lately. He’s just a brilliant guy. The Fields Medal was not wasted on him, for sure.

EL: Well thanks a lot for taking the time to talk to us. I really enjoyed talking with you.

JRH: Thank you for inviting me.

[outro]

Episode 20 - Francis Su

Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m your host Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: I’m all right. I am hanging in there in the winter as a displaced Texan.

KK: It’s not even winter yet.

EL: Yeah, well, somehow I manage to make it to the end of the season without dying every year outside of Texas, but yeah, the first few cold days really throw me for a loop.

KK: Well my son’s in college now, and they had snow last week.

EL: Well the south got a bunch of snow. Is he in South Carolina, is that right?

KK: North Carolina, and he’s never driven in snow before, and we told him not to, but of course he did. No incidents, so it was okay.

EL: So we’re very glad to have our guest today, who I believe is another displaced Texan, Francis Su. Francis, would you like to tell us a little bit about yourself?

Francis Su: Hi, Evelyn and Kevin. Sure. I’m a professor of mathematics at Harvey Mudd College, and that’s a small science and engineering school in southern California, and Evelyn is right. I am a displaced Texan from a small town in south Texas called Kingsville.

EL: Okay. I grew up in Dallas. Is Kingsville kind of between Houston and Beaumont?

FS: It’s between Houston and the valley. Closer to Corpus Christi.

EL: Ah, the other side. Many of us displaced Texans end up all over the country and elsewhere in the world.

FS: That’s right. I’m in California now, which means I don’t have to deal with the winter weather that you guys are wrestling with.

KK: I’m in Florida. I’m okay.

EL: Yeah. And you’re currently in the Bay Area at MSRI, so you’re not on fire right now.

FS: That’s right. I’m at the Math Sciences Research Institute. There’s a semester program going on in geometric and topological combinatorics.

KK: Cool.

EL: Yeah, that must be nice. It’s not too long after your presidency of the Mathematical Association of America, so it must be nice to not have those responsibilities and be able to just focus on research at MSRI this semester.

FS: That’s right. It was a way of hopping back into doing research after a couple of years doing some fun work for the MAA.

EL: So, what is your favorite theorem? We would love to hear it.

FS: You know, I went around and around with this because as mathematicians we have lots of favorite theorems. The one I kept coming back to was the Brouwer fixed point theorem.

KK: I love this theorem.

FS: Yes, so the Brouwer fixed point theorem is an amazing theorem. It’s about a hundred years old. It shows up in all sorts of unexpected places. But what it loosely says is if you have a continuous function from a ball to itself—and I’ll say what a ball means in a minute—it must have a fixed point, a point that doesn’t move. And a ball can be anything that basically has no holes.

EL: So anything you can make out of clay without punching a hole in it, or snaking it around and attaching two ends of it together. I’m gesturing with my hands. That’s very helpful for our podcast listeners.

KK: Right.

FS: Exactly.

KK: We don’t even need convexity, right? You can have some kind of dimpled blob and it still works.

FS: That’s right. It could be a blob with a funny shape. As long as it can be deformed to something that’s a ball, the ball has no holes, then the theorem applies. And a continuous function would be, one way of thinking about a continuous function from a ball to itself is let’s deform this blob, and as long as we deform the blob so that the blob stays within itself, then some point of the blob doesn’t move. A very popular way of describing this theorem is if you take a cup of coffee, let’s say I have a cup of coffee and I take a picture of it. Then slosh the coffee around in a continuous fashion and then take another picture. There is going to be a point in the coffee that is in the same spot in both pictures. It might have moved around in between, but there’s going to be a point that’s in the same spot in both pictures. And then if I move that point out of its original position, I can’t help but move some other point into its original position.

EL: Yeah, almost like a reverse diagonalization. In diagonalization you show that there’s a problem because anything you thought you could get on your list, you show that something else, even if you stick it on the list, something else is not on the list still. Here, you’re saying even if you think, if I just had one fixed point, I could move it and then I wouldn’t have any, you’re saying you can’t do that without adding some other fixed point.

FS: That’s right. The coffee cup sloshing example is a nice one because you can see that if I take the cup of coffee and I just empty it and pour the liquid somewhere else, clearly there’s not going to be a fixed point. So you sort of see the necessity of having the ball, the coffee, mapped to itself.

KK: And if you had a donut-shaped cup of coffee, this would not be true, right? You could swirl it around longitudinally and nothing would be fixed.

FS: That’s right. If you had a donut-shaped coffee mug, we could do exactly that. The continuity is kind of interesting. Another way I like to think about this theorem is if you take a map of Texas and you crumple it up somewhere in Texas, there’s a point in the map that’s exactly above the point it represents in Texas. So that’s sort of a two-dimensional version of this theorem. And you see the necessity of continuity because if I tore the map in two pieces and threw east Texas into west Texas and west Texas into east Texas, it wouldn’t be true that there would be a point exactly above the point it represents. So continuity is really important in this theorem as well.

KK: Right. You know, for fun, I put the one-dimensional version of this as a bonus question on a calculus test this semester.

FS: I like that version. Are you referring to graphing this one-dimensional function?

KK: Right, so if you have a map from a unit interval to itself, it has a fixed point. This case is nice because it’s just a consequence of the intermediate value theorem.
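
[For the curious, the one-dimensional case as a runnable sketch, ours: bisect g(x) = f(x) - x, whose guaranteed sign change is exactly what the intermediate value theorem provides.]

```python
import math


def fixed_point(f, tol=1e-12):
    """Fixed point of a continuous f: [0,1] -> [0,1], found by bisecting
    g(x) = f(x) - x, which is >= 0 at x = 0 and <= 0 at x = 1."""
    a, b = 0.0, 1.0
    while b - a > tol:
        m = (a + b) / 2
        if f(m) - m >= 0:
            a = m
        else:
            b = m
    return (a + b) / 2


print(fixed_point(math.cos))               # about 0.739085; cos maps [0,1] into itself
print(fixed_point(lambda x: (x + 1) / 3))  # 0.5, the fixed point of x -> (x+1)/3
```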

FS: Yes, that’s a great one. I love that.

KK: But in higher dimensions you need a little more fire power.

FS: Right. So yeah, this is a fun theorem because it has all sorts of maybe surprising versions. I told you one of the popular versions with coffee. It can be used, for instance, to prove the fundamental theorem of algebra, that every polynomial has a root in the complex numbers.

EL: Oh, interesting! I don’t think I knew that.

KK: I’m trying to think of that proof.

FS: Yeah, so the idea here is that if you think about a polynomial as a function and you’re thinking of this as a function on the complex plane, basically it takes a two-dimensional region like Texas and maps it in some fashion back onto the plane. And you can show that there’s a region in this map that gets sent to itself, roughly speaking. That’s one way to think about what’s going on. And then the existence of a zero corresponds to a fixed point of a continuous function, which I haven’t named but that’s sort of the idea.

EL: Interesting. That’s nice. It’s so cool how, at least if I’m remembering correctly, all the proofs I know of the fundamental theorem of algebra are topological. It’s nice, I think, for topology to get to throw an assist to algebra. Algebra has helped topology so much.

FS: I love that too. I guess I’m attracted to topology because it says a lot of things that are interesting about the existence of certain things that have to happen. One of the things that’s going on at this program at MSRI, as the name implies, geometric and topological combinatorics, people are trying to think about how to use topology to solve problems in combinatorics, which seems strange because combinatorics feels like it just has to do with counting discrete objects.

EL: Right. Combinatorics feels very discrete, and topology feels very continuous, and how do you get that to translate across that boundary? That’s really interesting.

FS: I’ll give you another example of a surprising application. In the 1970s, actually people studied this game called Hex for a while. I guess Hex was developed in the ‘40s or ‘50s. Hex is a game that’s played on a board with hexagonal tiles, a diamond-shaped board. Two players take turns, X and O, and they’re trying to construct a chain from one side of the board to the other, to the opposite side. You can ask the question: can that game ever end in a draw configuration where nobody wins? For large boards, it’s not so obvious that the game can’t end in a draw. But in a spectacular application of the Brouwer fixed-point theorem, you can show that the game can never end in a draw.

EL: Oh, that’s so cool.

KK: That is cool. And allegedly this game was invented by John Nash in the men’s room at Princeton, right?

FS: Yes, there’s some story like that, though I think it actually dates back to somebody before.

KK: Probably. But it’s a good story, right, because Nash is so famous.

EL: So was it love at first sight with the Brouwer fixed-point theorem for you, or how did you come across it and grow to love it?

FS: I guess I encountered it first as an undergraduate in college when a professor of mine, a topology professor of mine, showed me this theorem, and he showed me a combinatorial way to prove this theorem, using something known as Sperner’s lemma. There’s another connection between topology and combinatorics, and I really appreciated the way you could use combinatorics to prove something in topology.

EL: Cool.

KK: Very cool.

KK: You know, part of our show is we ask our guest to pair their theorem with something. So what have you chosen to pair the Brouwer fixed-point theorem with?

FS: I’d like to pair it with parlor games. Think of a game like chess, or think of a game like rock-paper-scissors. It turns out that the Brouwer fixed-point theorem is also related to how you play a game like chess or rock-paper-scissors optimally.

KK: So how do you get the optimal strategy for chess from the Brouwer fixed-point theorem?

FS: Very good question. So the Brouwer fixed-point theorem can’t tell you what the optimal strategy is.

KK: Just that it exists, right, yeah.

FS: It tells you that there is a pair of optimal strategies that players can play to play the game optimally. What I’m referring to is something known as the Nash equilibrium theorem. Nash makes another appearance in this segment. What Nash showed is that if you have a game, well there’s this concept called the Nash equilibrium. The question Nash asked is if you’re looking at some game, can you predict how players are going to play this game? That’s one question. Can you prescribe how players should play this game? That’s another question. And a third question is can you describe why players play a game a certain way? So there’s prediction, description, and prescription about games that mathematicians and economists have gotten interested in. And what Nash proposed is that in fact something called a Nash equilibrium is the best way to describe, prescribe, and predict how people are going to play a game. And the idea of a Nash equilibrium is very simple, it’s just players playing strategies that are mutually best responses to each other. And it turns out that if you allow what are called mixed strategies, every finite game has an equilibrium, which is kind of surprising. It means that you could maybe tell people what the best course of action is to play. There is some pair of strategies by both players, or by all players if it’s a multiplayer game, that actually are mutual best replies. People are not going to have an incentive to change their strategies by looking at the other strategies.
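
[A tiny sanity check, ours, on the simplest example: in rock-paper-scissors the uniform mix is such an equilibrium, because every reply earns the same expected payoff against it, so neither player can gain by deviating.]

```python
# Rock-paper-scissors payoffs for the row player: +1 win, -1 loss, 0 tie.
PAYOFF = [[0, -1, 1],   # rock     vs rock, paper, scissors
          [1, 0, -1],   # paper
          [-1, 1, 0]]   # scissors


def expected(row, col):
    """Expected payoff when the players use mixed strategies row and col."""
    return sum(row[i] * PAYOFF[i][j] * col[j]
               for i in range(3) for j in range(3))


uniform = [1 / 3, 1 / 3, 1 / 3]
pure = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Every pure reply earns the same payoff against the uniform mix, so no
# deviation improves on it: (uniform, uniform) is a pair of mutual best replies.
print([expected(p, uniform) for p in pure])  # [0.0, 0.0, 0.0]
```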

KK: The Brouwer fixed point theorem is so strange because it’s one of those existence things. It just says yeah, there is a fixed point. We tend to prove it by contradiction usually, or something. There’s not really any good constructive proofs. I guess you could just pick a point and start iterating. Then by compactness what it converges to is a fixed point.

FS: There is actually, and maybe this is a little surprising as well: this theorem I mentioned learning as an undergrad, Sperner’s lemma, actually has a constructive proof, in the sense that there’s an efficient way of finding the combinatorial object that corresponds to a fixed point. What’s surprising is that you can actually in many places use this constructive combinatorial proof to find, or get close to, a proposed fixed point.

KK: Very cool.

FS: That’s kind of led to a whole bunch of research in the last 40 years or so in various areas, to try to come up with constructive versions of things that prior to that people had thought of as non-constructive.

EL: Oh, that’s so cool. I must admit I did not have proper appreciation for the Brouwer fixed-point theorem before, so I’m very glad we had you on. I guess I kind of saw it as this novelty theorem. You see it often as you crumple up the map, or do these little tricks. But why did I really care that I could crumple up the map? I didn’t see all of these connections to these other points. I am sorry to the Brouwer fixed-point theorem for not properly appreciating it before now.

FS: Yes. I think it definitely belongs on a top ten list of top theorems in mathematics. I wonder how many mathematicians would agree.

KK: I read this book once, and the author is escaping me and I’m kind of embarrassed because it’s on the shelf in my other office, called Five Golden Rules. Have you ever seen this book? It was maybe 10 or 15 years ago.

EL: No.

KK: One of the theorems, and there are five big theorems in the book, was the Brouwer fixed-point theorem. And yeah, it’s actually of fundamental importance to know that you have fixed points for maps. They are really important things. But the application he pointed to was to football ranking schemes, right? Because that’s clearly important. College football ranking schemes in which in essence you’re looking for an eigenvector of something, and an eigenvector with eigenvalue 1 is a fixed point, and of course the details are escaping me now. This book is really well-done. Five Golden Rules.

EL: We’ll find that and put it in the show notes for sure.

FS: I haven’t heard of that. I should look that one up.

KK: It’s good stuff.

FS: I’ll just mention with this Nash theorem, the basic idea of using the Brouwer fixed-point theorem to prove it is pretty simple to describe. It’s that if you look at the set of all collections of strategies, if they’re mixed strategies allowing randomization, then in fact that space is a ball.

KK: That makes sense.

FS: And then the cool thing is if players have an incentive to deviate, to change their strategies, that suggests a direction in which each point could move. If they want to deviate, it suggests a motion of the ball to itself. And the fact that the ball has a fixed point means there’s a place where nobody is incentivized to change their strategy.

EL: Yeah.

KK: Well I’ve learned a lot. And I even knew about the Brouwer fixed-point theorem, but it’s nice to learn about all these extra applications. I should go learn more combinatorics, that’s my takeaway.

EL: Yeah, thanks so much for being on the show, Francis. If people want to find you, there are a few places online that they can find you, right? You’re on Twitter, and we’ll put a link to your Twitter in the show notes. You also have a blog, and I’m sorry I just forgot what it’s called.

FS: The Mathematical Yawp.

EL: That’s right. We’ll put that in the show notes. I know there are a lot of posts of yours that I’ve really appreciated, especially the ones about helping students thrive, doing math as a way for humans to grow as people and helping all students access that realm of learning and growth. I know those have been influential in the math community and fun to read and hear.

Episode 19 - Emily Riehl

Kevin Knudson: Welcome to My Favorite Theorem, a podcast about mathematics and everyone’s favorite theorem. I’m your host Kevin Knudson, professor of mathematics at the University of Florida. This is your other host.

Evelyn Lamb: Hi, I’m Evelyn Lamb, a freelance math and science writer in Salt Lake City. So how are things going, Kevin?

KK: Okay. We’re hiring a lot, and so I haven’t eaten a meal at home this week, and maybe not last week either. You think that might be fun until you’re in the middle of it. It’s been great meeting all these new people, and I’m really excited about getting some new colleagues in the department. It’s a fun time to be at the University of Florida. We’re hiring something like 500 new faculty in the next two years.

EL: Wow!

KK: It’s pretty ambitious. Not in the math department.

EL: Right.

KK: I wish. We could solve the mathematician glut just like that.

EL: Yeah, that would be great.

KK: How are things in Salt Lake?

EL: Pretty good. It’s a warm winter here, which will be very relevant to our listeners when they listen in the summer. But it’s hiring season at the University of Utah, where my spouse works. He’s been doing all of that handshaking.

KK: The handshaking, the taking to the dean and showing around, it’s fun. It’s good stuff. Anyway, enough about that. I’m excited about today’s guest. Today we are pleased to welcome Emily Riehl from Johns Hopkins. Hi, Emily.

Emily Riehl: Hi.

KK: Tell everyone about yourself.

ER: Let’s see. I’ve known I wanted to be a mathematician since I knew that that was a thing that somebody could be, so that’s what I’m up to. I’m at Johns Hopkins now. Before that I was a postdoc at Harvard, where I was also an undergraduate. My Ph.D. is from Chicago. I was a student of Peter May, an algebraic topologist, but I work mostly in category theory, and particularly in category theory as it relates to homotopy theory.

KK: So how many students does Peter have? Like 5000 or something?

ER: I was his 50th, and that was seven years ago.

EL: Emily and I have kind of a weird connection. We’ve never actually met, but we both lived in Chicago and I kind of replaced Emily in a chamber music group. I played with Walter and the gang I guess shortly after you graduated. I moved there in 2011. They’re like, oh, you must know Emily Riehl because you’re both mathematicians who play viola. I was like, no, that sounds like a person, though, because violists are all the best people.

KK: So, Emily, you’ve told us, and I’ve had time to think about it but still haven’t thought of my favorite application of this theorem. But what’s your favorite theorem?

ER: I should confess: my favorite theorem is not the theorem I want to talk about today. Maybe I’ll talk about what I don’t want to talk about briefly if you’ll indulge me.

KK: Sure.

ER: So I’m a category theorist, and every category theorist’s favorite theorem is the Yoneda lemma. It says that a mathematical object of some kind is uniquely determined by the relationships that it has to all other objects of the same type. In fact, it’s uniquely characterized in two different ways. You can either look at maps from the object you’re trying to understand or maps to the object you’re trying to understand, and either way suffices to determine it. This is an amazing theorem. There’s a joke in category theory that all proofs are the Yoneda lemma. I mean, all proofs [reduce] to the Yoneda lemma. The reason I don’t want to talk about it today is two-fold. Number one, the discussion might sound a little more philosophical than mathematical because one thing that the Yoneda lemma does is it orients the philosophy of category theory. Secondly, there’s this wonderful experience you have as a student when you see the Yoneda lemma for the first time because the statement you’ll probably see is not the one I just described but sort of a weirder one involving natural transformations from representable functors, and you see them, and you’re like, okay, I guess that’s plausible, but why on earth would anyone care about that? And then it sort of dawns on you over however many years, in my case, why it’s such a profound and useful observation. And I don’t want to ruin that experience for anybody.

KK: You’re not worried about getting excommunicated, right?

ER: That’s why I had to confess. I was joking with some category theorists, I was just in Sydney visiting the Center of Australian Category Theory, which is the name of the group, and it’s also the center of Australian category theory. And I want to be invited back, so yes, of course, my favorite theorem is the Yoneda lemma. But what I want to talk about today instead is a theorem I really like because it’s a relatively simple idea, and it comes up all over mathematics. Once it’s a pattern you know to look for, it’s quite likely that you’ll stumble upon it fairly frequently. The proof, it’s a general proof in category theory, specializes in each context to a really nice argument in that particular context. Anyway, the theorem is called right adjoints preserve limits.

EL: All right.

KK: So I’m a topologist, so to me, we put a modifier in front of our limit, so there’s direct and inverse. And limit in this context means inverse limit, right?

ER: Right. That’s the historical terminology for what category theorists call limits.

KK: So I always think of inverse limits as essentially products, more or less, and direct limits are unions, or direct sum kinds of things. Is that right?

ER: Right.

KK: I hope that’s right. I’m embarrassed if I’m wrong.

ER: You’re alluding to something great in category theory, which is that when you prove a theorem, you get another theorem for free, the dual theorem. A category is a collection of objects and a collection of transformations between them that you depict graphically as arrows. Kind of like in projective geometry, you can dualize the axioms, you can turn around the direction of the arrows, and you still have a category. What that means is that if you have a theorem in category theory that says for all categories blah blah blah, then you can apply that in particular to the opposite category where things are turned around. In this case, there are secretly two categories involved, so we have three dual versions of the original theorem, the most useful being that left adjoints preserve colimits, which are the direct limits that you’re talking about. So whether they’re inverse limits or direct limits, there’s a version of this theorem that’s relevant to that.

KK: Do we want to unpack what adjoint functors are?

ER: Yes.

EL: Yeah, let’s do that. For those of us who don’t really know category theory.

ER: Like anything, it’s a language that some people have learned to speak and some people are not acquainted with yet, and that’s totally fine. Firstly, a category is a type of mathematical object, basically it’s a theory of mathematical objects. We have a category of groups, and then the transformations between groups are the group homomorphisms. We have a category of sets and the functions between them. We have a category of spaces and the continuous functions. These are the categories. A morphism between categories is something called a functor. It’s a way of converting objects of one type to objects of another type, so a group has an underlying set, for instance. A set can be regarded as a discrete space, and these are the translations.

So sometimes if you have a functor from one category to another and another functor going back in the reverse direction, those functors can satisfy a special dual relationship, and this is a pair of adjoint functors. One of them gets called a left adjoint, and one of them the right adjoint. What the duality says is that if you look at maps out of the image of the left adjoint, then those correspond bijectively and naturally (which is a technical term I’m not going to get into) to maps in the other category into the image of the right adjoint. So maps in one category out of the image of the left adjoint correspond naturally to maps in the other category into the image of the right adjoint. So let me just mention one prototypical example.

KK: Yeah.

ER: So there’s a free and forgetful construction. So I mentioned that a group has an underlying set. The reverse process takes a set and freely makes a group out of that set, so the elements of that group will be words in the letters and formal inverses modulo some relation, blah blah blah, but the special property of these free groups is if I look at the group homomorphism that’s defined on a free group, so this is a map in the category of groups out of an object in the image of the left adjoint, to define that I just have to tell you where the generators go, and I’m allowed to make those choices freely, and I just need to find a function of sets from the generating set into the underlying set of the group I’m mapping into.

KK: Right.

ER: That’s this adjoint relationship. Group homomorphisms from a free group to whatever group correspond to functions from the generators of that free group to that underlying set of the group.

EL: I always feel like I’m about to drown when I try to think about category theory. It’s hard for me to read category theory, but when people talk to me about it, I always think, oh, okay, I see why people like this so much.

KK: Reading category theory is sort of like the whole picture being worth a thousand words thing. The diagrams are so lovely, and there’s so much information embedded in a diagram. Category theory used to get a bad rap, abstract nonsense or whatever, but it’s shown to be incredibly powerful, certainly as an organizing principle but also just in being able to help us push boundaries in various fields. Really if you think about it just right, if you think about things as functors, lots of things come out, almost for free. It feels like for free, but the category theorist would say, no, there’s a ton of work there. So what’s a good example of this particular theorem?

ER: Before I go there, exactly to this point, there’s a great quote by Eilenberg and Steenrod. So Eilenberg was one of the founders of category theory. He and Saunders Mac Lane wrote a paper, the “General Theory of Natural Equivalences,” in the ‘40s that defined these categories and functors and also the notion of naturality that I was alluding to. They thought that was going to be both the first and the last paper on the subject. Anyway, ten years later, Eilenberg and Steenrod wrote this book, Foundations of Algebraic Topology, that incorporated these diagrammatic techniques into a pre-existing mathematical area, algebraic topology. It had been around since at least the beginning of the twentieth century, I’d say. So they write, “the diagrams incorporate a large amount of information. Their use provides extensive savings in space and in mental effort. In the case of many theorems, the setting up of the correct diagram is a major part of the proof. We therefore urge that the reader stop at the end of each theorem and attempt to construct for himself (it’s a quote here) the relevant diagram before examining the one which is given in the text. Once this is done, the subsequent demonstration can be followed more readily. In fact, the reader can usually supply it himself.”

KK: Right. Like proving Mayer-Vietoris, for example. You just set up the right diagram, and in principle it drops out, right?

ER: Right, and in general in category theory, the definitions, the concepts are the hard thing. The proofs of the theorems are generally easier. And in fact, I’d like to prove my favorite theorem for you. I’m going to do it in a particular example, and actually I’m going to do it in the dual. So I’m going to prove that left adjoints preserve colimits.

EL: Okay.

ER: The statement I’m going to prove, the specific statement I’m going to prove by using the proof that left adjoints preserve colimits, is that for natural numbers a, b, and c, I’m going to prove that a(b+c)=ab+ac.

KK: Distributive law, yes!

ER: Distributive property of multiplication over addition. So how are we going to prove this? The first thing I’m going to do is categorify my natural numbers. And what is a natural number? It’s the cardinality of a finite set. In place of the natural numbers a, b, and c, I’m going to think about sets, which I’ll also call A, B, and C. The natural numbers stand for the cardinalities of these sets.

EL: Cardinality being the size, basically.

ER: Absolutely. A, B, and C are now sets. If we’re trying to prove this statement about natural numbers, they’re finite sets. The theorem is actually true for arbitrary sets, so it doesn’t matter. And I’ve replaced a, b, and c by sets. Now I have this operation “times” and this operation “plus,” so I need to categorify those as well. I’m going to replace them by operations on sets. So what’s something you can do to two sets so that the cardinalities add, so that the sizes add?

KK: Disjoint union.

EL: Yeah, you could union them.

ER: So disjoint union is going to be my interpretation of the symbol plus. And we also need an interpretation of times, so what can I do for sets to multiply the cardinalities?

EL: Take the product, or pairs of elements in each set.

ER: That’s right. Absolutely. So we have the cartesian product of sets and the disjoint union of sets. The statement is now: for any sets A, B, and C, if I take the disjoint union B+C and then form the cartesian product with A, then that set is isomorphic to (has in particular the same number of elements as) the set that you’d get by first forming the products A times B and A times C and then taking the disjoint union.

KK: Okay.

ER: The disjoint union here is one of these colimits, one of these direct limits. When you stick two things next to each other — coproduct would be the categorical term — this is one of these colimits. The act of multiplying a set by a fixed set A is in fact a left adjoint, and I’ll make that a little clear as I make the argument.

EL: Okay.

ER: Okay. So let’s just try and begin. So the way I’m going to prove that A times (B+C) is (AxB)+(AxC) is actually using a Yoneda lemma-style proof, because the Yoneda lemma comes up everywhere. We’ll know that these sets are isomorphic by arguing that functions from them to another set X correspond. So if the sets have exactly the same functions to every other set, then they must be isomorphic. That’s the Yoneda lemma. Let’s now consider a function from the set A times the disjoint union (B+C) to another set X. The first thing I can do with such a function is something called currying, or maybe uncurrying. (I never remember which way these go.) I have a function here of two variables. The domain is the set A times the disjoint union (B+C). So I can instead regard this as a function from the set (B+C), the disjoint union, into the set of functions from A to X.

KK: Yes.

ER: Rather than having a function from A times (B+C) to X, I have one from (B+C) to functions from A to X. There I’ve just transposed the product across the adjunction. That was the adjunction bit. So now I have a function from the disjoint union B+C to the set of functions from A to X. Now, mapping out of a disjoint union just means a case analysis: to define a function like this, I have to define firstly a function from B to functions from A to X, and also a function from C to functions from A to X. So now a single function is given by these two functions. And if I look at the first piece, the function from B to functions from A to X, by this uncurrying thing, that’s equally just a function from A times B to X. Similarly on the C piece, my function from C to functions from A to X is just a function from A times C to X. So now I have a function from A times B to X and also one from A times C to X, and those amalgamate to form a single function from the disjoint union (AxB)+(AxC) to X. So in summary, functions from A times the disjoint union (B+C) to X correspond in this way to functions from (AxB)+(AxC) to X, and therefore the sets A times (B+C) and (AxB)+(AxC) are isomorphic.
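[Editor’s note: for readers who want to run the argument, here is a minimal sketch of the two directions of the bijection, ours rather than Emily’s, with the disjoint union encoded as tagged pairs and all names illustrative.]

```python
# Splitting f : A x (B+C) -> X into two pieces, and amalgamating them back.

def split(f):
    """Given f : A x (B+C) -> X, return the pair (A x B -> X, A x C -> X)."""
    g = lambda a, b: f((a, ('B', b)))
    h = lambda a, c: f((a, ('C', c)))
    return g, h

def amalgamate(g, h):
    """Inverse direction: rebuild f : A x (B+C) -> X by case analysis."""
    def f(pair):
        a, (tag, y) = pair
        return g(a, y) if tag == 'B' else h(a, y)
    return f

# Round-trip check on a tiny example: A = {0,1}, B = {'x'}, C = {'y','z'}.
A, B, C = [0, 1], ['x'], ['y', 'z']
f = lambda pair: (pair[0], pair[1][1])   # an arbitrary function out of A x (B+C)
f2 = amalgamate(*split(f))
for a in A:
    for tagged in [('B', b) for b in B] + [('C', c) for c in C]:
        assert f((a, tagged)) == f2((a, tagged))
```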

EL: And now I feel like I know a category theory proof.

ER: So what’s great about that proof is that it’s completely independent of the context. It’s all about the formal relationships between the mathematical objects, so if you want to interpret A, B, and C as vector spaces and plus as the direct sum, which you might as an example of a colimit, and times as a tensor product, I’ve just proven that the tensor product distributes over the direct sum, say for modules over commutative rings. That’s a much more complicated setting, but the exact same argument goes through. And of course there are lots of other examples of limits and colimits. One thing that kind of mystified me as an undergraduate is that if you have a function between sets, the inverse image preserves both unions and intersections, whereas the direct image preserves only unions and not intersections. And there’s a reason for that. The inverse image is a functor between these poset categories of subsets, and it admits both left and right adjoints, so it preserves all limits and all colimits, both intersections and unions, whereas the direct image, which is just a left adjoint, only preserves the colimits.
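[Editor’s note: the undergraduate fact being referenced, in symbols; these are standard set identities, stated by us for reference.]

```latex
% Inverse image preserves both unions (colimits) and intersections (limits):
\[
  f^{-1}(U \cup V) = f^{-1}(U) \cup f^{-1}(V),
  \qquad
  f^{-1}(U \cap V) = f^{-1}(U) \cap f^{-1}(V).
\]
% Direct image, a left adjoint, preserves only the unions:
\[
  f(U \cup V) = f(U) \cup f(V),
  \qquad
  f(U \cap V) \subseteq f(U) \cap f(V) \quad \text{(equality can fail)}.
\]
```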

KK: Right. So here’s the philosophical question. You didn’t want to get philosophical, but here it is anyway. So category theory in a lot of ways reminds me of the new math. We had this idea that we were going to teach set theory to kindergarteners. Would it be the right way to teach mathematics? So you mention all of these things that sort of drop out of this rather straightforward fact. So should we start there? Or should we develop this whole library? The example of tensor products distributing over direct sums, I mean, everybody’s seen a proof of that in Atiyah and Macdonald or whatever, and okay, fine, it works. But wouldn’t it be nice to just get out your sledgehammer and say, look, limits and adjoints commute. Boom!

ER: So I give little hints of category theory when I teach undergraduate point-set topology. So in Munkres, chapter 2 is constructing the product topology, constructing the quotient topology, constructing subspace topologies, and rather than treat these all as completely separate topics, I group all the limits together and group all the colimits together, and I present the features of the constructions: this is the coarsest topology so that such and such maps are continuous, this is the finest topology so that the dual maps are continuous. I don’t define limit or colimit; too much of a digression. When I teach abstract algebra to undergraduates, I do say a little bit about categories. I think it’s useful to precisely understand function composition before getting into technical arguments about group homomorphisms, and the first isomorphism theorem is essentially the same for groups and for rings and for modules, and if we’re going to see the same theorem over and over again, we should acknowledge that that’s what happens.

KK: Right.

ER: I think category theory is not hard. You can teach it on day one to undergraduates. But appreciating what it’s for takes some mathematical sophistication. I think it’s worth waiting.

EL: Yeah. You need to travel on the path a little while before bringing that in, seeing it from that point of view.

ER: The other thing to acknowledge is it’s not equally relevant to all mathematical disciplines. In algebraic geometry, you can’t even define the basic objects of study anymore without using categorical language, but that’s not true for PDEs.

KK: So another fun thing we like to do on this podcast is ask our guest to pair their theorem with something. So what have you chosen to pair this theorem with?

ER: Right. In honor of the way Evelyn and I almost met, I’ve chosen a piece that I’ve loved since I was in middle school. It’s the third movement of Benjamin Britten’s Simple Symphony, the Sentimental Sarabande. The reason I love this piece: so, Benjamin Britten is a British composer. I found out when I was looking this up this morning that he composed this when he was 20.

EL: Wow.

ER: The themes that he used, it’s pretty easy to understand. It isn’t dark, stormy classical music. The themes are relatively simple, and they’re things I think he wrote as a young teenager, which is insane to me. What I love about this piece is that it starts, it’s for string orchestra, so it’s a simple mix of different textures. It starts in this stormy, dramatic, unified fashion where the violins are carrying the main theme, and the cellos are echoing it in a much deeper register. And when I played this in an orchestra, I was in the viola section, I think I was 13 or so, and the violas sort of never get good parts. I think the violists in the orchestra are sort of like category theory in mathematics. If you take away the viola section, it’s not like a main theme will disappear, but all of a sudden the orchestra sounds horrible, and you’re not sure why. What’s missing? And then very occasionally, the clouds part, and the violas do get to play a more prominent role. And that’s exactly what happens in this movement. A few minutes in, it gets quiet, and then all of a sudden there’s this beautiful viola soli, which means the entire viola section gets to play this theme while the rest of the orchestra bows out. It’s this really lovely moment. The violas will all play way too loud because we’re so excited. [music clip] Then of course, 16 bars later, the violins take the theme away. The violins get everything.

EL: Yeah, I mean it’s always short-lived when we have that moment of glory.

ER: I still remember, I haven’t played this in an orchestra for 20 years now, but I still remember it like it was yesterday.

EL: Yeah, well I listened to this after you shared it with us over email, and I turned it on and then did something else, and the moment that happened, I said, oh, this is the part she was talking about!

KK: We’ll be sure to highlight that part.

EL: I must say, the comparison of category theory to violists is the single best way to get me to want to know more about category theory. I don’t know how effective it is for other people, but you hooked me for sure.

KK: We also like to give our guests a chance to plug whatever they’re doing. When did your book come out? Pretty recently, a year or two ago?

EL: You’ve got two of them, right?

ER: I do. My new book is called Category Theory in Context, and the intended audience is mathematicians in other disciplines. So you know you like mathematics. Why might category theory be relevant? Actually, in the context of my favorite theorem, the proof that right adjoints preserve limits is actually the watermark on the book.

KK: Oh, nice.

ER: I had nothing to do with that. Whoever the graphic designer is, like you said, the diagrams are very pretty. They pulled them out, and that’s the watermark. It’s something I’ve taught at the advanced undergraduate or beginning graduate level. It was a lot of fun to write. Something interesting about the writing process is that I wanted a category theory book that was really rich with compelling examples of the ideas, so I emailed the category theory mailing list, I posted on a category theory blog, and I just got all these wonderful suggestions from colleagues. For instance, row reduction: the fact that the elementary row operations can be implemented by multiplication by an elementary matrix, which you get by taking the identity matrix and performing the row operations on it, that’s the Yoneda lemma.
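[Editor’s note: a tiny runnable illustration of the row-reduction example, ours and assuming NumPy: applying a row operation to the identity matrix yields the elementary matrix that implements the same operation by left multiplication.]

```python
import numpy as np

def row_op(M):
    """One elementary row operation: subtract twice row 0 from row 1."""
    M = M.astype(float).copy()
    M[1] -= 2 * M[0]
    return M

A = np.array([[1.0, 2.0], [3.0, 4.0]])
E = row_op(np.eye(2))                  # the operation applied to the identity
assert np.allclose(row_op(A), E @ A)   # same as left-multiplying by E
```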

KK: Wow, okay.

ER: A colleague friend told me about that example, so it’s really a kind of community effort in some sense.

KK: Very cool. And our regular listeners also found out on a previous episode that you’re also an elite athlete. Why don’t you tell us about that a little bit?

ER: So I think I already mentioned the Center of Australian Category Theory. So there’s this really famous category theory group based in Sydney, Australia, and when I was a Ph.D. student, I went for a few months to visit Dominic Verity, who’s now my main research collaborator. It was really an eventful trip. I had been a rugby player in college, so then when I was in Sydney, I thought it might be fun to try this thing called Australian rules football, which I’d heard about as another contact sport, and I just completely fell in love. It’s a beautiful game, in my opinion. So then I came back to the US and looked up Australian rules football because I wanted to keep playing, and it does exist here. It’s pretty obscure. I guess a consequence of that is I was able to play on the US women’s national team. I’ve been doing that for the past seven years, and what’s great about that is occasionally we play tournaments in Australia, so whenever that happens, I get to visit my research colleagues in Sydney, and then go down to Melbourne, which is really the center of footie, and combine these two passions.

EL: We were talking about this with John Urschel, who of course plays American rules football, or recently retired from it. This is one time when I wish we had a video feed, because when we, two mathematicians who had sort of seen this on a TV in a bar, were trying to explain what Australian rules football is, he had this look of bewilderment.

KK: Yeah, I was explaining that the pitch is a big oval and there are big posts on the ends, and he was like, wait a minute.

EL: His face was priceless there.

KK: It was good. I used to love watching it. I used to watch it in the early days of ESPN. I thought it was just a fun game to watch. Well, Emily, this has been fun. Thanks for joining us.

ER: Thanks for having me. I’ve loved listening to the past episodes, and I can’t wait to see what’s in the pipeline.

KK: Neither can we. I think we’re still figuring it out. But we’re having a good time, too. Thanks again, Emily.

EL: All right, bye.

ER: Bye.

[end stuff]

Episode 18 - John Urschel

Kevin Knudson: Welcome to My Favorite Theorem. I’m your host Kevin Knudson, professor of mathematics at the University of Florida. I’m joined by your cohost.

Evelyn Lamb: Hi, I’m Evelyn Lamb. I’m a math and science writer in Salt Lake City, Utah, where it is very cold now, and so I’m very jealous of Kevin living in Florida.

KK: It’s a dreary day here today. It’s raining and it’s “cold.” Our listeners can’t see me doing the air quotes. It’s only about 60 degrees and rainy. It’s actually kind of lousy. But it’s our department holiday party today, and I have my festive candy cane tie on, and I’m good to go. And I’m super excited.

John Urschel: So I haven’t been introduced yet, but can I jump in on this weather conversation? I’m in Cambridge right now, and I must say, I think it’s probably nicer in Cambridge, Massachusetts than it is in Utah right now. It’s a nice breezy day, high 40s, low 50s, put on a little sweater and you’re good to go.

EL: Yeah, I’m jealous of both of you.

KK: Evelyn, I don’t know about you, but I’m super excited about this one. I mean, I’m always excited to do these, but it’s the rare day you get to talk to a professional athlete about math. This is really very cool. So our guest on this episode is John Urschel. John, do you want to tell everyone about yourself?

JU: Yes, I’d be happy to. I think I might actually be the only person, the only professional athlete you can ask about high-level math.

KK: That might be true. Emily Riehl, Emily Riehl counts, right?

EL: Yeah.

KK: She’s a category theorist at Johns Hopkins. She’s on the US women’s Australian rules football team.

EL: Yeah,

JU: Australian rules football? You mean rugby?

KK: Australian rules football is like rugby, but it’s a little different. See, you guys aren’t old enough. I’m old enough to remember ESPN in the early days when they didn’t have the high-end contracts, they’d show things like Australian rules football. It’s fascinating. It’s kind of like rugby, but not really at the same time. It’s very weird.

JU: What are the main differences?

EL: You punch the ball sometimes.

KK: They don’t have a scrum, but they have this thing where they bounce the ball really hard. (We should get Emily on here.) They bounce the ball up in the air, and they jump up to get it. You can run with it, and you can sort of punch the ball underhanded, and you can kick it through these three posts on either end [Editor's note: there are 4 poles on either end.]. It’s sort of this big oval-shaped field, and there are three poles at either end, and you try to kick it. If you get it through the middle pair, that’s a goal. If you get it on either of the sides, that’s called a “behind.” The referees wear a coat and tie and a little hat. I used to love watching it.

JU: Wait, you say the field is an oval shape?

KK: It’s like an oval pitch, yeah.

JU: Interesting.

KK: Yeah. You should look this up. It’s very cool. It is a bit like rugby in that there are no pads, and they’re wearing shorts and all of that.

JU: And it’s a very continuous game like rugby?

KK: Yes, very fast. It’s great.

JU: Gotcha.

KK: Anyway, that’s enough of us. You didn’t tell us about yourself.

JU: Oh yeah. My name is John Urschel. I’m a retired NFL offensive lineman. I played for the Baltimore Ravens. I’m also a mathematician. I am getting my Ph.D. in applied math at MIT.

KK: Good for you.

EL: Yeah.

KK: Do you miss the NFL? I don’t want to belabor the football thing, but do you miss playing in the NFL?

JU: No, not really. I really loved playing in the NFL, and it was a really amazing experience to be an elite, elite at whatever sport you love, but at the same time I’m very happy to be focusing on math full-time, focusing on my Ph.D. I’m in my third year right now, and being able to sort of devote more time to this passion of mine, which is ideally going to be my lifelong career.

EL: Right. Yeah, so not to be creepy, but I have followed your career and the writing you’ve done and stuff like that, and it’s been really cool to see what you’ve written about combining being an athlete with being a mathematician and how you’ve changed your focus as you’ve left playing in the NFL and moved to doing this full-time. It’s very neat.

KK: So, John, what’s your favorite theorem?

JU: Yes, so I guess this is the name of the podcast?

KK: Yeah.

JU: So I should probably give you a theorem. So my favorite theorem is a theorem by Batson, Spielman, and Srivastava.

EL: No, I don’t. Please educate us.

JU: Good! So this is perfect because I’m about to introduce you to my mathematical idol.

KK: Okay, great.

JU: Pretty much who I think is the most amazing applied mathematician of this generation, Dan Spielman at Yale. Dan Spielman got his Ph.D. at MIT. He was advised by Mike Sipser, and he was a professor at MIT and eventually moved to Yale. He’s done amazing work in a number of fields, but this paper is a very elegant paper in applied math that doesn’t really have direct algorithmic applications. The formulation is as follows. Suppose you have some graph, vertices and edges. What I want to tell you is that there exists some other weighted graph, with at most a constant times the order of the graph many edges (so the number of edges is linear in the number of vertices), whose Laplacian approximates the Laplacian of this original very dense graph, no matter how dense it is.

So I’m not doing the very best job of explaining this, but let me put it like this. You have a graph. It’s very dense. You have this elliptic operator on this graph, and there’s somehow some way to find a graph that’s not dense at all, but extremely, extremely sparse, and yet with the exact, well not exact, but nearly the exact same properties. These operators are very, very close.
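[Editor’s note: one common way to state the Batson–Spielman–Srivastava result, paraphrased by us (see the original paper for the precise constants): for every ε > 0, every graph G on n vertices has a weighted subgraph H with O(n/ε²) edges whose Laplacian satisfies]

```latex
\[
  (1-\varepsilon)\, x^{\mathsf T} L_G\, x
  \;\le\; x^{\mathsf T} L_H\, x
  \;\le\; (1+\varepsilon)\, x^{\mathsf T} L_G\, x
  \qquad \text{for all } x \in \mathbb{R}^n.
\]
```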

KK: Can you remind our reader—readers, our listeners—what the Laplacian is?

JU: Yeah, so the graph Laplacian, the way I like to introduce it, especially for people not into graph theory type things, is that you can define a gradient on a graph. You take every edge, directed in some way, and you can think of the gradient as being a discrete derivative along the edge. And now, as in the continuous case, you take this gradient and compose it with its adjoint, the divergence, and you get your Laplacian, the same way you get a Laplacian in the continuous case. This is how you get your graph Laplacian.
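[Editor’s note: a small executable sketch of this construction, ours and assuming NumPy: with B the edge-vertex incidence matrix playing the role of the discrete gradient, the Laplacian is L = BᵀB, which equals degrees minus adjacency.]

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]   # a triangle on vertices 0, 1, 2
n = 3
B = np.zeros((len(edges), n))      # incidence matrix = discrete gradient
for i, (u, v) in enumerate(edges):
    B[i, u], B[i, v] = -1.0, 1.0   # orient each edge arbitrarily

L = B.T @ B                        # the graph Laplacian
assert np.allclose(L, np.diag([2.0, 2.0, 2.0]) - (np.ones((n, n)) - np.eye(n)))
```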

KK: This theorem, so the problem is that dense graphs are kind of hard to work with because, well, they’re dense?

EL: So can I jump in? Dense meaning a lot of edges, I assume?

JU: Lots of edges, as many edges as you want.

KK: So a high degree on every vertex.

JU: Lots of edges, edges going everywhere.

EL: And then with the weighting, that might also mean something like, not that many total edges, but they have a high weight? Does that also make it dense, or is that a different property?

JU: No, in that case, we wouldn’t really consider it very dense.

KK: But the new graph you construct is weighted?

JU: And the old graph can be weighted as well.

KK: All right. What do the weights tell you?

JU: What do you mean?

KK: On the new graph. You generate this new graph that’s more sparse, but it’s weighted. Why do you want the weights? What do the weights get you?

JU: The benefit of the weights is that it gives you additional leeway in how you’re scaling things, because the weights actually enter the Laplacian. For weighted graphs, when you take this Laplacian, it’s, in a way, the difference at each node between the node’s value and the weighted average over all its neighbors, and the weights tell you how much each edge counts for. In that way, it allows you greater leeway. If you weren’t able to weight this very sparse graph, this wouldn’t work very well at all.
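[Editor’s note: in symbols, our rendering: for a function x on the vertices, the weighted Laplacian reads]

```latex
\[
  (L_w\, x)(v) \;=\; \sum_{u \sim v} w(u,v)\,\bigl(x(v) - x(u)\bigr),
\]
```

so each edge contributes in proportion to its weight, which is exactly the leeway John describes.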

KK: Right, because like you said, you think of sort of having a gradient on your graph, so this new graph should somehow have the same kind of dynamics as your original.

JU: Exactly. And the really interesting thing is that you can capture these dynamics. Not only can you capture them, but you can capture them with a linear number of edges, linear in the order of the graph.

KK: Right.

JU: So Dan Spielman is famous for many things. One of the things he’s famous for is that he was one of the first people to give provable guarantees for algorithms that can solve, like, a Laplacian system of equations in near-linear time, so linear up to some log factors. From his work there have been many, many different sorts of improvements, and this one is extremely interesting to me because you only use a linear number of edges, which suggests that working with the graph it gives you should be extremely efficient. And that’s exactly what you want: because it’s a linear number of edges, you apply this via some iterative algorithm, and you can use this guy as a sort of preconditioner, and things get very nice. The issue is, I believe (and it has been a little bit since I’ve read the paper), the amount of time it takes to find this graph is, I think, cubic.

EL: Okay.

JU: So it’s not a sort of paper where it’s extremely useful algorithmically, I would say, but it is a paper that is very beautiful from a mathematical perspective.

KK: Has the algorithm been improved? Has somebody found a better than cubic way to generate this thing?

JU: Don’t quote me on that, I do not know, but I think that no one has found a good way yet. And by good I mean good enough to make it algorithmically useful. For instance, if the amount of time it takes to find this thing is quadratic, or even maybe n to the 1.5 or something like that, this is already not useful if your goal is near-linear time. It’s a very interesting thing, and it’s something that really spoke to me, and I really just fell in love with it. I think what I like about it most is that it sits in a very applied area, applied mathematics, theoretical computer science type things, but it is very theoretical and very elegant. Though I am an applied mathematician, I do like very clean things. I do like very nice looking things. And perhaps I can be a bad applied mathematician because I don’t always care about applications, which kind of makes you a bad applied mathematician. But in all my papers I’m not sure I’ve ever really, really cared about the applications, in the sense that if a very interesting problem that someone brings to me happens to have applications, like some of the things I’ve gotten to do in machine learning, great, that’s the cherry on top, but that isn’t the motivating thing. If it’s an amazing application but some ugly, ugly thing, I’m not touching it.

EL: Well, before we actually started recording, we talked a little bit about how there are different flavors of applied math. There are ones that are more on the theoretical side, and probably people who do a lot of things with theoretical computer science would tend towards that more, and then there are the people who are actually looking at a biological system and solving differential equations or something like this, where they’re really getting their hands dirty. It sounds like you’re more interested in the theoretical side of applied math.

JU: Yeah.

KK: Applied math needs good theory, though.

JU: That’s just true.

KK: You’ve got to develop good theory so that you know your algorithms work, and you want them to be efficient and all that, but if you can’t prove that they actually work, then you’re a physicist.

JU: There’s almost nothing I hate more than heuristics. Heuristics do have a place in this world, they’re an important thing, but there’s nothing I dislike more in this world than doing things with heuristics without being able to give any guarantees.

EL: So where did you first encounter this theorem? Was it in the research you’ve been doing, the study you’ve been doing for your Ph.D.?

JU: Yes, I did encounter this, I think it was when I was preparing for my qualifying exams. I was reading a number of different things on so-called spectral graph theory, which is this whole field of, you have a graph and some sort of elliptic operator on it, and this paper obviously falls under this category. I saw a lecture on it, and I was just fascinated. You know it’s a very nice result when you hear about it and you’re almost in disbelief.

KK: Right.

JU: I heard about it and I thought I didn’t quite hear the formulation correctly, but in fact I did.

KK: And I seem to remember reading in Sports Illustrated — that’s an odd sentence to say — that you were working on some version of the traveling salesman problem.

JU: That is true. But I would say,

KK: That’s hard.

JU: Just because I’m working on the asymmetric traveling salesman problem does not mean you should be holding your breath for me to produce something on the traveling salesman problem. This is an interesting thing because I am getting my Ph.D., and you do want, you want to try to find a research project where yes, it’s tough and it’s challenging you, but at the end of your four or five years you have something to show for it.

KK: Right. Is this version of the problem NP-hard?

JU: Yes, it is. But for this version there isn’t any sort of inapproximability result, as there is for some of the other versions of TSP. But my advisor Michel Goemans (for the record, I’m convinced I have the single best advisor in the world, like he is amazing, amazing) has a strong background in combinatorial optimization, which is the idea that you have some set of discrete objects, and you need to pick your best option when the number of choices you have is often not polynomial in the size of your input. But you need to pick the best option in some reasonable amount of time that perhaps is polynomial.

EL: Yeah, so are these results that will say something like, we know we can get within 3 percent of the optimal…

JU: Exactly. These sorts of things are called approximation algorithms. If it runs in polynomial time and you can guarantee it’s within, say, a constant factor of the optimal solution, then you have a constant-factor approximation algorithm. We’ve been reading up on some of the more recent breakthroughs on ATSP. There was a breakthrough this August, when someone proved the first constant-factor approximation algorithm for the asymmetric traveling salesman problem, and Michel Goemans, who is also the head of the math department at MIT, had the previous best paper on this. He had a log log approximation algorithm from maybe 2008 or 2009, but don’t quote me on this. Late 2000s. So this is something we’ve been reading about and thinking about.

EL: Trying to chip away a little bit at that.

JU: Exactly. It’s interesting because this constant approximation algorithm that came out, it used this approach that, I think Michele won’t mind me saying this, it used an approach that Michele didn’t think was the right way to go about it, and so it’s very interesting. There are different ways to construct an approximation algorithm. At its core, you have something you’re trying to solve, and this thing is hard, but now you have to ask yourself, what makes it hard? Then you need to sort of take one of the things that makes it hard and you need to loosen that. And his approach in his previous paper was quite different than their approach, so it’s interesting.

KK: So the other thing we like to do on this show is to ask our guest to pair their theorem with something. So what have you chosen to pair your theorem with?

JU: I still haven’t fully thought about this, but you’ve put me on the spot, and so I’m going to say this: I would pair this with, I think this is a thing, Miller 64. That’s a thing, right?

KK: This is a beer?

JU: Yeah, the beer.

KK: It’s a super low-calorie beer?

JU: It’s a beer, and they advertise it on TV.

KK: I see, it’s very sparse.

JU: People weightlifting, people running, and then drinking a 64-calorie beer. It’s the beer for athletes.

EL: Okay.

JU: I think it’s a very, very good beer because it at least claims to taste like a beer, be very much like a beer, and yet be very sparse.

EL: Okay, so it’s, yeah, I guess I don’t know a good name for this kind of graph, but it’s this graph of beers.

JU: Yes, it’s like, these things are called spectral sparsifiers.

EL: Okay, it’s the spectral sparsifier of beers.

KK: That’s it.

EL: So they’ve used the “Champagne of beers” slogan before, but I really think they should switch to the “spectral sparsifier of beers.” That’s a free idea, by the way, Miller, you can just take that.

JU: Hold on.

KK: John’s all about the endorsements, right?

JU: Let’s not start giving things away for free now.

KK: John has representation.

EL: That’s true.

JU: We will give this to you guys, but you need to sponsor the podcast. This needs to be done.

EL: Okay. I’m sure if they try to expand their market share of mathematicians, this will be the first podcast they come to.

KK: That’s right. So hey, do you want to talk some smack? Were you actually the smartest athlete in the NFL?

JU: I am not the person to ask about that.

KK: I knew you would defer.

JU: Trust me, I’ve gone through many, many hours of media training. You need something a little more high-level to catch me than that.

KK: I’m sure. You know, I wasn’t really trying to catch you. You know, Aaron Rodgers looked good on Jeopardy. I don’t know if you saw him on Celebrity Jeopardy a couple years ago.

JU: No.

KK: He won his game. My mother—sorry—was a huge Packers fan. She grew up near Green Bay, and she loved Aaron Rodgers, and I think she recorded that episode of Jeopardy and watched it all the time.

JU: I was invited to go on Family Feud once, the celebrity Family Feud.

KK: Yeah?

JU: I don’t know why, but I wasn’t really about that life. I wasn’t really into it.

KK: You didn’t want Steve Harvey making fun of you?

JU: Also, I’m not sure I’m great at guessing what people think.

EL: Yeah.

JU: That’s not one of my talents.

EL: Finger isn’t on the pulse of America?

JU: No, my finger is not on the pulse. What do people, what’s people’s favorite, I can’t even think of a question.

EL: Yeah.

KK: Well, John, this has been great. Thanks for joining us.

JU: Thanks for having me. I can say this with certainty, this is my second favorite podcast I have ever done.

KK: Okay. We’ll take that. We won’t even put you on the spot and ask you what the favorite was. We won’t even ask.

JU: When I started the sentence, I was going to say favorite, and then I remembered that one other one. I’ve done many podcasts, and this is one of my favorites. It’s a fascinating idea, and I think my favorite thing about the podcast is that the audience is really the people I really like.

KK: Thanks, John.

EL: Thanks for being here.

[end stuff]

Episode 17 - Nalini Joshi

Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m your cohost Evelyn Lamb. I’m a freelance math and science writer in Salt Lake City, Utah. And this is your other cohost.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. I’m looking forward to this because of the time zone issue here. This is taking place on two different days.

EL: Yes, yes, we are delighted to be joined by Nalini Joshi, who is joining us from tomorrow in Australia, which we’re getting a kick out of because we’re easily amused.

KK: That’s right.

EL: Hi, Nalini. Would you like to tell us a little bit about yourself?

Nalini Joshi: Sure. My name is Nalini Joshi. I’m a professor of applied mathematics at the University of Sydney. What else can I say except I’m broadcasting from the future? I was born in Burma and I moved to Australia as a child with my parents when they emigrated to Australia, and most of my education has been in Australia except for going to the U.S. to do a Ph.D., which I did at Princeton.

EL: Okay, so you’ve spent some time in both hemispheres. I guess in multiple times in your life.

NJ: Yeah.

EL: So when I was a little kid I had this idea that the world could never end because, you know, in the U.S., there’s always someone who’s a full day ahead, so I know that Thursday would have to happen because if it was Wednesday where I was, someone was already living in Thursday, so the world could never end.

NJ: That’s such a deep insight. That’s wonderful.

KK: That’s pretty good.

EL: Well…

KK: I was watching football when I was a kid.

NJ: I used to hang out at the back of the school library reading through all the old Scientific American magazines. If only they had columns like yours, Evelyn. Fantastic. I really, really wanted to work out what was happening in the universe, and so I thought about time travel and space travel a lot as a teenager.

EL: Oh. So did you start your career wanting to maybe go more into physics, or did you always know you wanted to be a mathematician?

NJ: No, I really wanted to become an astrophysicist, because I thought that was the way, surely, to understand space travel. I wanted to be an astronaut, actually. I went to an all-girls school for the first half of my school years, and I still remember going to see the careers counselor and telling her I wanted to be an astronaut. She looked at me and she said, you have to be more realistic, dear. There was no way that somebody like me could ever aspire to it. And nowadays it’s normal almost. People from all different countries around the world become astronauts. But at the time I had to think about something else, and I thought, okay, I’m going to become a scientist, explore things through my own mind, and that was one way I could explore the universe. So I wanted to do physics when I came to university. I studied at the University of Sydney as an undergraduate. When I got to first-year physics, I realized my other big problem, which is that I have no physical intuition. So I thought, I really needed to understand things from a really explicit, literal, logical, analytical point of view, and that’s how I came to know I must be more of a mathematician.

EL: Okay.

KK: I have the same problem. I was always going to be a math major, but I thought I might pick up a second major in physics, and then I walked into this junior-level relativity class, and I just couldn’t do it. I couldn’t wrap my head around it at all. I dropped it and took logic instead. I was much happier.

NJ: Yeah. Oh good.

EL: So we invited you on to find out what your favorite theorem is.

NJ: Yes. Well that was a very difficult thing to do. It was like choosing my favorite child, which I would never do. But I finally decided I would choose Mittag-Leffler’s theorem, because that was something I was really blown away by when I started reading more about complex analysis as a student. I mean, we all learnt the basics of complex analysis, which is beautiful in itself. But then when you went a little bit further, so I started reading, for example, the book by Lars Ahlfors, which I still have, called Complex Analysis.

KK: Still in use.

EL: That’s a great one.

NJ: Which was first I think published in 1953. I had the 1979 version. I saw that there were so many powerful things you could do with complex analysis. And the Mittag-Leffler theorem was one of the first ones that gave me that perspective. The main thing I loved about it is that you were taking what was a local, small piece of information, around, for example, poles of a function. So we’re talking about meromorphic functions here, that’s the subject of the theorem.

EL: Can we maybe set the stage a little bit? So what is a meromorphic function?

NJ: A meromorphic function is a function that’s analytic except at isolated points, which are poles. The worst singularities it has are poles.

EL: So these are places where the function explodes, but otherwise it’s very smooth and friendly.

KK: And it explodes in a controlled way, it’s like 1/z^n for some finite n kind of thing.

NJ: Exactly. Right. An integer, positive n. When I try to explain this kind of thing to people who are not mathematicians, I say it’s like walking around in a landscape with volcanoes. Well-timed, well-controlled, well-spaced volcanoes. You’re walking in the landscape of just the Earth, say, walking around these places. There are well-defined pathways for you to move along by analytic continuation. You know ahead of time how strong the volcano’s eruption is going to be. You can observe it from a little distance away if you like because there is no danger because you can skirt all of these volcanoes.

KK: That’s a really good metaphor. I’m going to start using that. I teach complex variables in the summer. I’m going to start using that. That’s good.

NJ: So a meromorphic function, as I say, is a function that gives you a pathway and the elevation, the smoothness of your path in this landscape. And its poles are where the volcanoes are.

EL: So Mittag-Leffler’s theorem, then, is about controlling exactly where those poles are?

NJ: Not quite. It’s the other way around. If you give me information about locations of poles and how strong they are, the most singular part of that pole, then I can reconstruct a function that has poles exactly at those points and with exactly those strengths. That’s what the theorem tells you. And what you need is just a sequence of points and that information about the strength of the poles, and you need potentially an infinite number of these poles. There’s one other condition, that the sequence of these poles has a limit at infinity.
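[Editor’s note: a standard rendering of the statement, ours: given poles a_k tending to infinity with prescribed principal parts p_k, the theorem produces a meromorphic function of the form]

```latex
\[
  f(z) \;=\; g(z) \;+\; \sum_{k} \Bigl( p_k\!\Bigl(\frac{1}{z-a_k}\Bigr) - q_k(z) \Bigr),
\]
% where each p_k is a polynomial without constant term (the prescribed
% singular part at a_k), the q_k are polynomials chosen to force
% convergence of the sum, and g is an arbitrary entire function.
```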

KK: Okay, so they don’t cluster, in other words.

NJ: Exactly. They don’t coalesce anywhere. They don’t have a limit point in the finite plane. Their limit point is at infinity.

EL: But there could be an infinite number of these poles if they’re isolated, on integer lattice points in the complex plane or something like that.

NJ: Right, for example.

KK: That’s pretty remarkable.

NJ: If you take your standard trigonometric functions, like the sine function or the cosine function, you know it has periodically spaced zeroes. You take the reciprocal of that function, then you’ve got periodically placed poles, and it’s a meromorphic function, and you can work out which trig function it is by knowing those poles. It’s powerful in the sense that you can reconstruct the function everywhere not just at the precise points which are poles. You can work out that function anywhere in between the poles by using this theorem.
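[Editor’s note: the classical identity behind this example, inserted by us: the function with double poles at exactly the integers is recovered as the sum of its singular parts,]

```latex
\[
  \frac{\pi^2}{\sin^2(\pi z)} \;=\; \sum_{n \in \mathbb{Z}} \frac{1}{(z-n)^2}.
\]
```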

KK: That’s really remarkable. That’s the surprising part, right?

NJ: Exactly.

KK: If you knew you had a finite number of poles, you could sort of imagine that you could kind of locally construct the function and glue it together, that wouldn’t be a problem. But the fact that you can do this for infinitely many is really pretty remarkable.

NJ: Right. It’s like going from local information that you might have in one little patch of time or one little patch of space and working out what happens everywhere in the universe by knowing those little local patches. It’s the local to global information I find so intriguing, so powerful. And then it struck me that this information is given in the form of a sum of those singular parts. So the function is reconstructed as a series, as an infinite sum of the singular parts of the information you’re given around each pole. That’s a very simple way of defining the function, just taking the sum of all these singular things.

KK: Right.

EL: Yeah, I love complex analysis. It’s just full of all of these things where you can take such a small amount of local information and suddenly know what has to be happening everywhere. It’s so wonderful.

NJ: Right, right. Those two elements, the local to global and the fact that you have information coming from a discrete set of points to give you continuous smooth information everywhere in between, those two elements, I realized much later, feature in a lot of the research that I do. So I was already primed to look for that kind of information in my later work.

EL: Yeah, so I was going to ask, I was wondering how this came up for you, maybe not the Mittag-Leffler theorem specifically, but using complex analysis in your work as an applied mathematician.

NJ: Right. So what I do is build toolboxes of methods. So I’m an applied mathematician in the sense that I want to make usable tools. So I study asymptotics of functions, I study how you define functions globally, functions that turn out to be useful in various mathematical physics contexts. I’m more of a theoretical applied mathematician, if you like, or I often say to people I’m actually a mathematician without an adjective.

KK: Right. Yeah.

NJ: You know that there is kind of a hierarchy of numbers in the number system. We start with the counting numbers, and we can add and subtract them. Subtraction leads you to negative integers. Multiplication and division leads you to rational numbers, and then solving polynomial equations leads you to algebraic numbers. Each time you’re building a higher being of a type of number. Beyond all of those are numbers like π and e, which are transcendental numbers, in the sense that they can’t be constructed in terms of a finite number of operations from these earlier known operations and earlier known objects.

So alongside that hierarchy of numbers there’s a hierarchy, a very, very closely related hierarchy of functions. So integers correspond to polynomials. Square roots and so on correspond to algebraic functions. And then there are transcendental functions, the exponential of x being one of them. A lot of the territory of transcendental functions is occupied by functions which are defined by differential equations.

I started off by studying differential equations and the corresponding functions that they define. Even when you’re looking at linear differential equations, you get very complicated transcendental functions, the exponential being one of them. So I study functions that are even more highly transcendental, in the sense that they solve nonlinear equations, and they are like π in the sense that these functions turn out to be universal models in many different contexts, particularly in random matrix theory, where you might be, for example, trying to work out the statistics of how fundamental particles interact when you fire them around the huge loop of the CERN collider. You do that by looking at distributions of entries in infinitely large matrices where the entries are random variables. Now, under certain symmetries, with symmetry groups acting on them, you might have particles that have properties that allow these random matrices to be orthogonal matrices, or Hermitian matrices, or some other kind of matrices. So when you study these ensembles of matrices with these symmetry properties and you study properties like what’s their largest eigenvalue, then you get a probability distribution function which happens to be, by some miracle, one of those functions I’ve studied. There’s kind of a miraculous bridge there; nobody really knows why these things happen. Then there’s another miraculous thing, which is that these models, using random matrices, happen to be valid not just for particle physics: if you’re studying bus arrival times in Cuernavaca, or aircraft boarding times, or patience sorting with cards, all kinds of things are universally described by these models and therefore these functions. So first of all, these functions have this property: they’re locally defined by initial value problems given for the differential equation.

KK: Right.

NJ: But then they have these amazing properties which allow them to be globally defined in the complex plane. So even though we didn’t have the technology to describe these functions explicitly, not like I could say, take 1 over the sine function, that gives you a meromorphic function, whose formulae I could write down, whose picture I could draw, these functions are so transcendental that you can’t do that very easily, but I study their global properties that make them more predictable wherever you go in the complex plane. So the Mittag-Leffler theorem sort of sets up the baseline. I could just write them as the sum of their poles. And that’s just so powerful to me. There are so many facets of this. I could go on and on. There is another direction I wanted to insert into our conversation, which is that the next natural level when you go beyond things like trigonometric functions and their reciprocals is to take functions that are doubly periodic, so trigonometric functions have one period. If you take double periodicity in the complex plane, then you get elliptic functions, right? So these also have sums of their poles as an expression for them. Now take any one of these functions. They turn out to be functions that parametrize very nice curves, cubic curves, for example, in two dimensions. And so the whole picture shifts from an analytic one to an algebraic geometric one. There are two sides to the same function. You have meromorphic functions on one side, and differential equations, and on the other side you have algebraic functions and curves, and algebraic properties and geometric properties of these curves, and they give you information about the functions on the other side of that perspective. So that’s what I’ve been doing for the last ten years or so, trying to understand the converse side so I can get more information about those functions.
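[Editor’s note: the prototype of this two-sided picture, stated by us: the Weierstrass ℘-function of a lattice Λ is built as a sum over its poles and parametrizes a cubic curve.]

```latex
\[
  \wp(z) \;=\; \frac{1}{z^2}
  \;+\; \sum_{\omega \in \Lambda \setminus \{0\}}
  \Bigl( \frac{1}{(z-\omega)^2} - \frac{1}{\omega^2} \Bigr),
  \qquad
  \bigl(\wp'(z)\bigr)^2 \;=\; 4\wp(z)^3 - g_2\,\wp(z) - g_3,
\]
% so z -> (p(z), p'(z)) parametrizes the cubic y^2 = 4x^3 - g_2 x - g_3.
```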

EL: Yeah, so using the algebraic world,

NJ: Exactly, the algebro-geometric world. This was a huge challenge at the beginning, because as I said, I was educated as an applied mathematician, and that means primarily the analytic point of view. But to try and marry that to the algebraic point of view is something that turned out to be a hurdle at the beginning, but once you get past that, it’s so freeing and so beautiful and so strikingly informative that I’m now saying to people, all applied mathematicians should be learning algebraic geometry.

KK: And I would say the converse is true. I think the algebraic geometers should probably learn some applied math, right?

NJ: True, that too. There’s so many different perspectives here. It all started for me with the Mittag-Leffler theorem.

EL: So something we like to do on this show is to ask our guest to pair their theorem with something: food, beverage, music, anything like that. So what have you chosen to pair your theorem with?

NJ: That was another difficult question, and I decided that I would concentrate on the discrete to continuous aspect of this, or volcanoes to landscapes if you like. As I said, I was born in Burma, and in Burma there are these amazing dishes called le thoke. I’ll send you a Wikipedia link so you can see the spelling and description. Not all of it is accurate, by the way, from what I remember, but anyway. Le thoke is a hand-mixed salad. “Le” is hand and “thoke” is mixture. In particular, the one that’s based on rice is one of my favorites. You take a series of different ingredients, so one is rice, another might be noodles, there have to be specific types, another is tamarind. Tamarind is a sour plant-based thing, which you make into a sauce. Another is fried onions, fried garlic. Then there’s roasted chickpea flour, or garbanzo flour.

KK: This sounds amazing.

NJ: Then another one is potatoes, boiled potatoes. Another one is coriander leaves. Each person might have their favorite suite of these many, many little dishes, which are all just independent ingredients. And you take each of them into a bigger bowl. You mix it with your hands. Add as much spice as you want: chili powder, salt, lemon juice. And what you’re doing is amalgamating and combining those discrete ingredients to create something that transcends the discrete. So you’re no longer tasting the distinct tamarind, or the distinct fried onion, or potatoes. You have something that’s a fusion, if you like, but the taste is totally different. You’ve created your meromorphic function, which is that taste in your mouth, by combining those discrete things, each of which you wouldn’t eat separately.

KK: Sure. It’s not fair. It’s almost dinner time here, and I’m hungry.

NJ: I’m sorry!

EL: Are there any Burmese restaurants in Gainesville?

NJ: I don’t know. I think there’s one in San Francisco.

EL: Yes! I actually was just at a Burmese restaurant in San Francisco last month. I had this tea leaf salad that sounds like this.

NJ: Yeah, that’s a variation. Pickled tea leaves as an ingredient.

EL: Yeah, it was great.

NJ: I was also thinking about music. So there are these compositions by Philip Glass and Steve Reich which are basically percussive, independent sounds. Then when they interweave into those patterns you create these harmonies and music that transcends each of those particular percussive instruments, the strikes on the marimba and the xylophones and so on.

EL: Like Six Marimbas by Steve Reich?

NJ: Yeah.

EL: Another of our guests (her episode hasn’t aired yet, though it will have by the time our listeners hear this) also chose Steve Reich to pair with her theorem.

KK: That’s right.

EL: He’s one of the most popular musicians among mathematicians pairing their theorems with music.

NJ: Somebody should write a book about this.

KK: I’m sure. So my son is a college student. He’s studying music composition. He’s a percussionist. I need to get on him about this Steve Reich business. He must know.

EL: Yeah, he’s got to.

KK: This has been great fun, Nalini. I learned a lot about not just math, but I really knew nothing about Burmese food.

NJ: Right. I recommend it highly.

KK: Next time I’m there.

NJ: You said something about mentioning books?

EL: Yeah, yeah, if you have a website or book or anything you’d like to mention on here.

NJ: This is my book. I think it would be a bit too far away from the topic of this conversation, but it has this idea of going from continuous to discrete.

EL: It’s called Discrete Systems and Integrability.

NJ: Yes.

EL: We’ll put a link to some information about that book, and we’ll also link to your website on the show notes so people can find you. You tweet some. I think we kind of met in the first place on Twitter.

NJ: That’s right, Exactly.

EL: We’ll put a link to that as well so people can follow you there.

NJ: Excellent. Thank you so much.

EL: Thank you so much for being here. I hope Friday is great. You can give us a preview while we’re still here.

KK: We’ll find out tomorrow, I guess.

NJ: Thank you for inviting me, and I’m sorry about the long delay. It’s been a very intense few years for me.

EL: Understandable. Well, we’re glad you could fit it in. Have a good day.

NJ: Thank you. Bye.

[outro]

Episode 16 - Jayadev Athreya

Evelyn Lamb: Hello and welcome to My Favorite Theorem. I’m Evelyn Lamb, one of your hosts. And this is your other host.

Kevin Knudson: Hi, I’m Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: I’m good. I actually forgot to say what I do. In case anyone doesn’t know, I’m a freelance math and science writer, and I live in Salt Lake City, Utah, where it has been very cold recently, and I’m from Texas originally, so I am not okay with this.

KK: Everyone knows who you are, Evelyn. In fact, Princeton University Press just sent me a complimentary copy of The Best Writing on Mathematics 2017, and you’re in it, so congratulations, it’s really very cool. [clapping]

EL: Well thanks. And that clapping you heard from the peanut gallery is our guest today, Jayadev Athreya. Do you want to tell us a little bit about yourself?

Jayadev Athreya: Yeah, so I’m based in Seattle, Washington, where it is, at least for the last 15 minutes it has not been raining. I’m an associate professor of mathematics at the University of Washington, and I’m the director of the Washington Experimental Mathematics Lab. My work is in geometry, dynamical systems, connections to number theory, and I’m passionate about getting as many people involved in mathematics as a creative enterprise as is possible.

KK: Very cool.

EL: And we actually met a while ago because my spouse also works in your field. I have the nice privilege of getting to know you and not having to learn too much about dynamical systems.

JA: Evelyn and I have actually known each other since, I think Evelyn was in grad school at Rice. I think we met at some conferences, and Evelyn’s partner and I have worked on several papers together, and I’ve been a guest in their wonderful home and eaten tons of great granola among other things. On one incredibly memorable occasion, a buttermilk pie, which I won’t forget for a long time.

KK: Nice. I’ve visited your department several times. I love Seattle. You have a great department there.

JA: It’s a wonderful group of people, and one of the great things about it is of course all departments recognize research, and many departments also recognize teaching, but this department has a great tradition of public engagement with people like Jim Morrow, who was part of the annual [ed. note: JA meant inaugural; see https://sites.google.com/site/awmmath/awm-fellows] class of AWM fellows and runs this REU and this amazing event called Math Day where he gets two thousand high school kids from the Seattle area on campus. It’s just a very cool thing for a research math department to seriously recognize and appreciate these efforts. I’m very lucky to be here.

KK: Also because I’m a topologist, I have to take a moment to give, well, I don’t know what the word is, but you guys lost a colleague recently.

JA: We did.

KK: Steve Mitchell. He was a great topologist, but even more, he was just a really great guy. Sort of unfailingly kind and always really friendly and helpful to me when I was just starting out in the game. My condolences to you and your colleagues because Steve really was great, and he’s going to be missed.

JA: Thank you, Kevin. There was a really moving memorial service for Steve. For any of the readers who are interested in learning more about Steve, for the last few years of his life he wrote a really wonderful blog reflecting on mathematics and life and how the two go together, and I really recommend it. It’s very thoughtful. It’s very funny, even as he was facing a series of challenges, and I think it really reflects Steve really well.

KK: His biography that he wrote was really interesting too.

JA: Amazing. He came with a background that was very different from that of a lot of mathematicians.

EL: I’ll have to check it out.

KK: Enough of that. Let’s talk about theorems.

EL: Would you like to share your favorite theorem?

JA: Sure. So now that I’m in the northwest, and in fact I’m even wearing a flannel shirt today, I’m going to state the theorem from the perspective of a lumberjack.

EL: Okay.

JA: So when trees are planted by a paper company, they’re planted in a fairly regular grid. So imagine you have the plane, two number lines meeting at a 90 degree angle, and you have a grid, and you plant a tree at each grid point. So from a mathematician’s perspective, we’re just talking about the integer lattice, points with integer coordinates. So let’s say where I’m standing there’s a center point where maybe there’s no tree, and we call that the origin. That’s maybe the only place where we don’t plant a tree. And I stand there and I look out. Now there are a lot of trees around me. Let’s say I look around, and I can see maybe distance R in any direction, and I say, hm, I wonder how many trees there are? And of course you can do kind of a rough estimate.

Now I’m going to switch analogies and I’ll be working in flooring. I’m going to be tiling a floor. So if you think about the space between the trees as a tile and say that has area 1, you look out a distance R and say, well, the area of the region that you can see is about πR^2, it’s the area of the circle, and each of these tiles has size 1, so maybe you might guess that there are roughly πR^2 trees. That’s what’s called the Gauss circle problem or the lattice point counting problem. And the fact that that is actually increasingly accurate as your range of vision gets bigger and bigger, as R gets bigger and bigger, is a beautiful theorem with an elementary proof, which we could talk about later, but what I want to talk about is when you’re looking out, turning around in this spot, you can’t see every tree.

EL: Right.

JA: For instance, there’s a tree just to the right of you. You can see that tree, but there’s a tree just to the right of that tree that you can’t see, because it’s blocked by the first tree. There’s a tree at 45 degrees that would have the coordinate (1,1), and that blocks all the other trees with coordinates (2,2) or (3,3). It blocks all the other trees in that line. We call the trees that we can see, the visible trees, primitive lattice points. It’s a really nice exercise to see that if you label a tree by how many steps to the right and how many steps forward it is, call that the integer coordinates (m,n), or maybe since we’re on the radio and can’t write, we’ll call it (m,k), so the sounds don’t get too confusing.

EL: Okay.

JA: A point (m,k) is visible if the greatest common divisor of the numbers m and k is 1. That’s an elementary exercise because, well maybe we’ll just talk a little bit about it, if you had m and k and they didn’t have greatest common divisor 1, you could divide them by their greatest common divisor and you’d get a tree that blocks (m,k) from where you’re sitting.

EL: Right.

JA: These lattice points are called visible points, or sometimes primitive points, and a much trickier question is how many primitive points there are in the ball of radius R, or in any kind of increasingly large sequence of sets. And this was actually computed, I believe for the first time, by Euler…

KK: Probably. Sure, why not?

JA: Yeah, Euler, I think Cauchy also noticed this. These are names, anything you get at the beginning of analysis or number theory, these names are going to show up.

KK: Right.

JA: And miraculously enough, we agreed that in the ball of radius R, the total number of trees was roughly the area of the ball, πR^2. Now if you look at the proportion of these that are primitive, it’s actually 6/π^2.

KK: Oh.

JA: So the total number of primitive lattice points is actually 6/π^2 times πR^2. And now, listeners of this podcast might remember some of their sequences and series from calc 1, or 2, or 3, and you might remember seeing, probably not proving, but seeing, that if you add up the following series: 1 plus 1/4 plus 1/9 plus 1/16 plus 1/25, and so on, and you can actually do this, you can write a little Python script to do this. You’ll get closer and closer to π^2/6. Now it’s amazing. There is of course this principle that there aren’t enough small numbers in mathematics, which is why you have all these coincidences, but this isn’t a coincidence. That π^2/6 and our 6/π^2 are in a very real mathematical sense the same object. So that’s my favorite mathematical theorem. So when you count all lattice points, you get π showing up in the numerator. When you count primitive ones, you get π showing up in the denominator.
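[ed. note: For listeners who want to try the little Python script JA mentions, here is a minimal sketch; the cutoff of 100,000 terms is an arbitrary choice.]

import math

# Partial sums of 1 + 1/4 + 1/9 + 1/16 + ... creep up on pi^2/6.
s = sum(1 / n**2 for n in range(1, 100_001))
print(s)               # 1.64492...
print(math.pi**2 / 6)  # 1.64493...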

KK: So the primitive ones, that must be related to the fact that if you pick two random integers, the probability that they’re relatively prime is this number, 6/π^2.

JA: These are essentially equivalent statements exactly. What we’re saying is, look in the ball of radius R. Take two integers sort of randomly, say m and n with m^2+n^2 less than R^2; the proportion of primitive ones is exactly the probability that they’re relatively prime. That’s a beautiful reformulation of this theorem.

KK: Exactly. And asymptotically, as you go off to infinity, that’s 6/π^2.

JA: Yeah, and what’s fun is, if a listener does like to do a little Python programming, in this case, infinity doesn’t even have to be so big. You can see 6/π^2 happening relatively quickly. Even at R=100, you’re not far off.
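[ed. note: And here is a hedged sketch of the count itself at R = 100: the total number of lattice points divided by R^2 comes out near pi, and the primitive fraction comes out near 6/pi^2 ≈ 0.6079, as JA describes.]

import math

R = 100
total = primitive = 0
for m in range(-R, R + 1):
    for k in range(-R, R + 1):
        if (m, k) != (0, 0) and m*m + k*k <= R*R:
            total += 1                   # a tree in the orchard
            if math.gcd(m, k) == 1:      # a tree you can actually see
                primitive += 1
print(total / R**2)       # ~ 3.14 (pi)
print(primitive / total)  # ~ 0.608 (6/pi^2)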

EL: Well the squares get smaller so fast. You’re just adding up something quite small in not too long.

JA: That’s right. That’s my favorite mathematical theorem for many reasons. For one, this number, 6/π^2, it shows up in so many places. What I do is at the intersection of many fields of mathematics. I’m interested in how objects change. I’m interested in counting things, and I’m interested in the geometry of things. And all of these things come into play when you’re thinking about this theorem and thinking about various incarnations of this theorem.

EL: Yeah, I was a little surprised when you told us this was going to be your theorem because I was thinking it was going to be some kind of ergodic theorem for flows or something because the stuff I know about your field is more what my spouse does, which is more related to dynamical systems. I actually think of myself as a dynamicist-in-law.

JA: That’s right. The family of dynamicists actually views you as a favorite in-law, Evelyn. You publicize us very nicely. You write about things like billiards with a slit, which is something that we’d been trying to tell the world about, but nobody really listened until you did.

EL: And that was a birthday gift for my spouse. He had been wanting me to write about that, and I just thought it was so technical that I didn’t feel like it. It’s a really cool space, but it’s just a lot to actually go in and write about. But yeah, I was surprised to see something I think of as more number theory related show up here. That number 6/π^2, or π^2/6, whichever way you see it, it’s one of those things where the first time you see it, you wonder why you would ever square π. It comes up as an area thing, so something else is usually being squared when you see it. Strange thing.

JA: So now what I’m going to say is maybe a little bit more about why I picked it. For me, that number π^2/6 is actually the volume of a moduli space of abelian differentials.

KK: Ah!

EL: Of course!

JA: Of course it is. It’s what’s called a Siegel-Veech constant, or a Siegel constant. Can I say just a couple words about why I love π^2/6 so much?

EL: Of course.

JA: Let’s say that instead of planting your trees in a square grid, you have a timber company where they wanted to shoot an ad where they shot over the forest and they wanted it to look cool, and instead of doing a square grid, they decided to do a grid with parallelograms. Still the trees are planted in a regular grid, but now you have a parallelogram. So in mathematical terms, instead of taking the lattice generated by (1,0) and (0,1), you just take two vectors in the plane. As long as they’re linearly independent, you can generate a lattice. You can still talk about primitive vectors, which are the ones you can see from (0,0). There are some that are going to be blocked and some that aren’t going to be blocked. In fact, it’s a nice formulation. If you think of your vectors as (a,c) and (b,d), then what you’re essentially doing is taking the matrix (ab,cd) [ed. note: this is a square array of numbers where the numbers a and b are in the top row and c and d are in the bottom row] and applying it to the integer grid. You’re transforming your squares into parallelograms.

KK: Right.

JA: And a vector in your new lattice is primitive if it’s the image of a primitive vector from the integer lattice.

EL: Yeah, so there’s this linear relationship. You can easily take what you know about the regular integer lattice and send it over to whatever cool commercial tree lattice you have.

JA: That’s right. Whatever parallelogram tiling of the plane you want. What’s interesting is even with this change, the proportion of primitive guys is still 6/π^2. The limiting proportion. That’s maybe not so surprising given what I just said. But here’s something that is a little bit more surprising. Since we care about proportions of primitive guys, we really don’t care if we were to inflate our parallelograms or deflate them. If they were area 17 or area 1, this proportion wouldn’t change. So let’s just look at area 1 guys, just to nail one class down. This is the notion of an equivalence class essentially. You can look at all possible area 1 lattices. This is something mathematicians love to do. You have an object, and you realize that it comes as part of a family of objects. So we started with this square grid. We realized it sits inside this family of parallelogram grids. And then we want to package all of these grids into its own object. And this procedure is usually called building a moduli space, or sometimes a parameter space of objects. Here the moduli space is really simple. You just have your matrices, and if you want it to be area 1, the determinant of the matrix has to be 1. In mathematical terms, this is called SL(2,R), the special linear group with real coefficients. There’s a joke somewhere that Serge Lang was dedicating a book to his friend R, and so he inscribed it “SL2R,” but that’s a truly terrible joke that I’m sorry, you should definitely delete from your podcast.
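[ed. note: A quick numerical check of that claim, assuming an arbitrarily chosen area-1 parallelogram lattice generated by v1 = (1, 0) and v2 = (0.7, 1): a point m·v1 + k·v2 counts as primitive when gcd(m, k) = 1, since it is the image of a primitive integer point.]

import math

R = 100
v1, v2 = (1.0, 0.0), (0.7, 1.0)  # determinant 1, so area-1 parallelograms
total = primitive = 0
B = 2 * R                        # coefficient range wide enough for this v1, v2
for m in range(-B, B + 1):
    for k in range(-B, B + 1):
        if (m, k) == (0, 0):
            continue
        x, y = m*v1[0] + k*v2[0], m*v1[1] + k*v2[1]
        if x*x + y*y <= R*R:
            total += 1
            if math.gcd(m, k) == 1:
                primitive += 1
print(primitive / total)         # ~ 0.608 = 6/pi^2 again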

KK: No, that’s staying in.

JA: Great.

EL: You’re on the record with this.

JA: Great. That’s sort of all possible deformations, but then you realize that if you hit the integer lattice with integer matrices, you just get it back. Basically, you can think of the space of all lattices as 2-by-2 matrices with real entries and determinant 1, up to 2-by-2 matrices with integer entries. What this allows you to do is give a notion of a random lattice. There’s a probability measure you can put on this space that tells you what it means to choose one of these lattices at random. Basically what this means is you pick your first vector at random, and then you pick your second vector at random as uniformly as possible from the ones that make determinant 1 with it. That’s actually accurate. That’s actually a technically accurate statement.

Now what that means is you can talk about the average behavior of a lattice. You can say, look, I have all of these lattices, I can average. And now what’s amazing is you can fix your R. R could be 1. R could be 100. R could be a million. And now you can look at the number of primitive points divided by the number of total points in the lattice. You average that, or let me put it a slightly different way: you average the number of primitive points and divide by the average number of total points.

KK: Okay.

JA: That’s 6/π^2.

EL: So is that…

JA: That’s not an asymptotic. If you average, if you integrate over the space of lattices the number of primitive points, and you divide by the average number of total points, it’s 6/π^2. That’s no matter the shape of the region you’re looking in. It doesn’t have to be a ball, it can be anything. That’s an honest-to-God, dead-on statement that’s not asymptotic.

EL: So is that basically saying that the integer lattice behaves like the average lattice?

JA: It’s saying at the very large scale, every lattice behaves like the average lattice. Basically there’s this function on the space of lattices that’s becoming closer and closer to constant. If you take the sequence of functions which is the proportion of primitive vectors, that’s becoming closer and closer to constant. At each scale when you average it, it averages out nicely. There might be some fluctuations at any given scale, and what it’s saying is if you look at larger and larger scales, these fluctuations are getting smaller and smaller. In fact, you can kind of make this precise: if you’re in probability, what we’ve been talking about is basically computing a mean or an expectation. You can try and compute a variance of the number of primitive points in a ball. And that’s actually something my student Sam Fairchild and I are working on right now. There are methods that people have thought about, and there’s in fact a mathematician named Rogers who in the 1950s wrote about 15 different papers called Mean Values on the Space of Lattices, all of which contain a phenomenal number of really interesting ideas. But he got the dimension 2 case slightly wrong. We’re in the process of fixing that right now and understanding how to compute the variance. It turns out that what we do goes back to work of Wolfgang Schmidt, and we’re kind of assembling that in a little bit more modern language and pushing it a little further.

I do want to mention one more name, which is, I mentioned it very briefly already. I said this is what is called a Siegel-Veech constant. Siegel was the one who computed many of these averages. He was a German mathematician who was famous for his work on a field called the geometry of numbers. It’s about the geometry of grids. Inspired by Siegel, a mathematician named William Veech, who was one of Evelyn’s teachers at Rice, started to think about how to generalize this problem to what are called higher-genus surfaces, how to average certain things over slightly more complicated spaces of geometric objects. I particularly wanted to mention Bill Veech because he passed away somewhat unexpectedly.

EL: A year ago or so?

JA: Yeah, a little bit less than a year ago. He was somebody who was a big inspiration to a lot of people in this field, who really had just an enormous number of brilliant ideas, and I think we’re still kind of exploring those ideas.

EL: Yeah, and a very humble person too, at least in the interactions I had with him, and very approachable considering what enormous work he did.

JA: That’s right. He was deeply modest and an incredibly approachable person. I remember the first time I went to Rice. I was a graduate student, and he had read things I had written. This was a huge deal for me; I didn’t think anybody was reading things I’d written. I guess we started off with remembering Steve, and now we’re remembering Bill.

There’s one more person who I think is very important to remember in this context, somebody who took Siegel’s ideas about averaging things over spaces and really pushed them to an extent that’s just incredible, and the number 6/π^2 shows up in the introduction to one of the papers that came out of her thesis. This was Maryam Mirzakhani, whom we also lost at a very, very young age. Like Veech, she made incredibly deep contributions that I think we’re going to continue to mine for ideas, and she’s going to continue having a really incredible legacy. She was also very encouraging to colleagues, contemporaries, and young people. If you’re interested in 6/π^2 and how it connects to not just lattices in the plane but other surfaces, her thesis resulted in three papers, one in Inventiones, one in the Annals, and one in the Journal of the American Math Society, which might be the three top journals in the field.

EL: Right.

JA: For the record, for instance, I think of myself as a pretty good research mathematician, and over 12 years I have a total of zero papers in any of those three journals.

KK: Right there with you.

JA: In the introduction to this paper, she studies simple closed curves on the punctured torus, which are very closely linked to integer lattice points. She shows how 6/π^2 also shows up as what’s called a Weil-Petersson volume, or rather π^2/6 shows up as the Weil-Petersson volume of the moduli space. Again, a way of packaging lots of spaces together.

EL: We’ll link to that, I’m sure we can find links for that for the show notes so people can read a little more about that if they want.

JA: Yeah. I think there are even very nice survey papers that have come out recently that describe some of the links there. These are sort of the big things I wanted to hit on with this theorem. What I love about it is it’s a thread that shows up in number theory, as you pointed out. It’s a thread that shows up in geometry. It’s a thread that shows up in dynamical systems. You can use dynamics to actually do this counting problem.

EL: Okay.

JA: Yeah, so there’s a way of doing dynamics on this object where we package everything together to get the 6/π^2. It’s not the most efficient, not the most direct proof, but it’s a proof that generalizes in really interesting ways. For me, a theorem in mathematics is really beautiful if you can see it from many different perspectives, and this one to me starts so many stories. It starts a story where if you think of a lattice, you can think about going to higher-dimensional lattices. Or you can think of it as a surface, where you take the parallelogram or the square and glue opposite sides and get a torus, or you can start doing more holes, that’s higher genus. It’s rare that all of these different generalizations lead to really fruitful and beautiful mathematics, but in this case I think they do.

KK: So hey, another part of this podcast is that we ask our guest to pair their theorem with something. So what have you chosen to pair your theorem with?

JA: So there’s a grape called, I’m just going to look it up so I make sure I get everything right about it. It’s called primitivo. So it’s an Italian grape. It’s closely related to zinfandel, which I kind of like also because I want primitive, and of course I want the integers in there, so I’ve got a Z. Primitivos are also an excellent value wine, so that makes me very happy. It’s an Italian wine. Both primitivo and zinfandel are apparently descended from a Croatian grape, and so what I like about it is it’s something connected, it connects in a lot of different ways to a lot of different things. Now I don’t know how trustworthy this site is, it’s a site called winegeeks.com. Apparently primitivo can trace its ancestry to the ancient Phoenicians in the province of Apulia, the heel of Italy’s boot. I’m a big fan of the Phoenicians because they were these cosmopolitan seafarers who founded one of my favorite cities in the world, Marseille. Actually, Marseille might be the first place I learned about this theorem, so there you go.

EL: Another connection.

JA: Yeah. And it’s apparently the wine that was served at the last supper.

KK: Okay.

EL: I’m sure that’s very reliable.

JA: I’m sure.

EL: Good information about vintages of those.

JA: I would pair it with a primitivo wine because of the connections, these visible points are also called primitive points by mathematicians, so therefore I’m going to pair it with a primitivo wine. Another possible option, if you can’t get your hands on that, is to pair it with a spontaneously fermented, or primitive beer.

EL: Oh yeah.

JA: I’m a big fan of spontaneously fermented beers. I like lambics, I like other things.

EL: Two choices. If you’re more of a wine person or more of a beer person, you’ve got your pairing picked out. I’m glad you’re so considerate to make sure we’ve got options there.

JA: Or I might drink too much, that’s the other possibility.

KK: No, not possible.

EL: Well it’s 9:30 where you are, so I’m hoping you’re not about to go out and have one of these to start your day. Maybe at the end of the day.

JA: I think I’ll go with my usual cappuccino to start my day.

KK: Well this has been great fun. I learned a lot today.

EL: Yeah. Thanks for being on. You had mentioned that you wanted to make sure our listeners know about the website for the Washington math lab, which is where you do some outreach and some student training.

JA: That’s right. The website is wxml.math.washington.edu. It’s the Washington Experimental Math Lab. WXML is also a Christian radio station in Ohio. We are not affiliated with the Christian radio station in Ohio. If anybody listens to that, please don’t sue us. So as I said at the top of the podcast, we’re very interested in trying to create as large as possible a community of people who are creating their own mathematics. To that end, we have student research projects where undergraduate students work together with faculty and graduate students in collaborative teams to do exploratory and experimental mathematics. Teams have done projects ranging from creating sounds associated to number theory sequences, to updating and maintaining OEIS and Wikipedia pages about mathematical concepts, to research modeling stock prices and rare events in protein folding. Right now one of my teams is working on counting pairs and triples and quadruples of primitive integer vectors and trying to understand how those behave. So that’s one side of it. The other side is we do a lot of, like Evelyn said, public engagement. We run teachers’ circles for middle schools and elementary schools throughout the Seattle area and the northwest, and we do a lot of fabrication of 3D-printed teaching tools. Right now I’m teaching calculus 3, so we’re printing 3D Riemann sums as we do integration in two variables. The reason I’m spending so much time plugging this is that if you’re at a university and this sounds intriguing to you, we have a lab starter kit on our webpage which gives you information on how you might want to start a lab. All labs look different, but at this point, we just had our Geometry Labs United conference this summer, there are labs at Maryland, at the University of Illinois Urbana-Champaign, at the University of Illinois in Chicago, at George Mason University, at the University of Texas Rio Grande Valley, and at Kansas State. There’s one starting at Oklahoma State and one at the University of Kentucky. So the lab movement is on the march, and if you’re interested in joining it, please go to our website, check out our lab starter kit, and feel free to contact us about good ways to get started on this track.

EL: All right. Thanks for being on the show.

JA: Thanks so much for the opportunity. I really appreciate it, and I’m a big fan of the podcast. I loved the episode with Eriko Hironaka. I thought that was just amazing.

KK: Thanks. We liked that one too.

JA: Take care, guys.

EL: Bye.

[outro]

Episode 15 - Federico Ardila

Evelyn Lamb: Welcome to My Favorite Theorem. I'm your host Evelyn Lamb, a freelance math and science writer in Salt Lake City, Utah, and this is your cohost.

Kevin Knudson: I'm Kevin Knudson, professor of mathematics at the University of Florida. How are you doing, Evelyn?

EL: I am still on an eclipse high. On Monday, a friend and I got up, well got up in time to get going by 5 in the morning, to get up to Idaho and got to experience a total eclipse, which really lived up to the hype.

KK: You got totality?

EL: Yes, we got in the band of totality for a little over two minutes.

KK: We had 90 percent totality. It was still pretty impressive. Our astronomy department here set up their telescopes. We have a great astronomy department here. They had the filters on. There were probably 500 kids in line to see the eclipse. It was really pretty spectacular.

EL: It was pretty cool. I'm already making plans to go visit my parents on April 8, 2024 because they're in Dallas, which is in the path for that one.

KK: Very nice.

EL: So I've been trying to get some work done this week, but then I just keep going and looking at my friends' pictures of the eclipse, and NASA's pictures and everything. I'm sure I will get over that at some point.

KK: It was the first day of classes here for the eclipse. It was a bit disruptive, but in a good way.

EL: My spouse also had his first day of class, so he couldn't come with us.

KK: Too bad.

EL: But anyway, we are not here to talk about my feels about the eclipse. We are here to welcome Federico Ardila to the podcast. So Federico, would you like to say a bit about yourself?

Federico Ardila: Yeah, first of all, thanks so much for having me. As Evelyn just said, my name is Federico Ardila. I never quite know how to introduce myself. I'm a mathematician, I'm a DJ, I'm an immigrant from Colombia to the US, and I guess most relevant to the podcast, I'm a math professor at San Francisco State University. I also have an adjunct position in Colombia at the Universidad de los Andes. I'm also spending the semester at MSRI [Mathematical Sciences Research Institute] in Berkeley as a research professor, so that's what I'm up to these days.

KK: I love MSRI. I love it over there. I spent a semester there, and every day at teatime, you walk into the lounge and get the full panoramic view of the bay. You can watch the fog roll in through the gate. It's really spectacular.

FA: Yeah, you know, one tricky thing is you kind of want to stay for the sunset because it's so beautiful, but then you end up staying really late at work because of it. It's a balance, I guess.

KK: So, the point of this thing is that someone has a favorite theorem, so I actually don't know what your favorite theorem is, so I'm going to be surprised. What's your favorite theorem, Federico?

FA: Yeah, so first of all I apologize for not following your directions, but it was deliberate. You both asked me to tell you my favorite theorem ahead of time, but I'm not very good at following directions. But I also thought that since I want to talk about something that I think not a lot of people think about, maybe I shouldn't give you a heads-up so we can talk about it, and you can interrupt me with any questions that you have.

EL: Get our real-time reactions here.

FA: Exactly. The other thing is that instead of talking about a favorite theorem, I want to talk about a favorite object. There's a theorem related to it, but more than the theorem, what I really like is the object.

EL: Okay.

FA: I want to talk a little about matroid theory. How much do you two think about matroids?

KK: I don't think about them much.

EL: Not at all.

KK: I used to know what a matroid is, so remind us.

FA: Excellent. Yeah, so matroid theory was basically an abstraction of the notion of independence. It was developed by Hassler Whitney, George Birkhoff, and Saunders MacLane in the '30s. Back then, you could write a thesis in graph theory at Harvard. This was part of Hassler Whitney's Ph.D. thesis, where he was trying to solve the four-color problem, which basically says that if you want to color the countries in a map, and you only have four colors, you will always be able to do that in such a way that no two neighboring countries are going to have the same color. So this was one of the big open problems at the time. At the time they were trying to figure out a more mathematical grounding or structure that they could put on graphs, and out of that the theory of matroids was born. This was in a paper of Whitney in 1935, and he had the realization that the properties that graphs have with regard to how their cycles work, what the cycles are, what the spanning trees are, and so on, are exactly the same properties that vectors have. So there was a very strong link between graph theory and linear algebra, and he basically tried to pursue an axiomatization of the key combinatorial essence of independence.

EL: Okay, and so by independence, is that like we would think of linear independence in a matrix? Matroid and matrix are kind of suggestively similarly named. So is that the right thing we should be thinking about for independence?

FA: Exactly, so you might think that you have a finite set of vectors in a vector space, and now you want to figure out the linear dependencies between them. And actually that information is what's called the matroid. Basically you're saying these two vectors are aligned, or these three vectors lie on the same plane. So that information is called the matroid, and Whitney basically laid out some axioms for the kind of combinatorial properties that linear independence has, and what he realized is that these are exactly the same axioms that graphs have when you think about independence. Now you need a new notion of independence. In a graph you're going to say you have a dependency in edges whenever they form a cycle. So somehow it is redundant to be able to walk from point A to point B in two different ways, so whenever there is that redundancy, we call it dependency in a graph.

Basically Whitney realized that these were the same kind of properties, and he defined a matroid to be an abstract mathematical object that was supposed to capture that notion of independence.

EL: Okay. So this is very new to me, so I'm just kind of doing free association here. So I'm familiar with the adjacency matrix of a graph. Does this contain information about the matroid, or is this a little side path that is not really the same thing?

FA: This is a really good point. To every graph you can associate an adjacency matrix [ed. note: the matrix described here is usually called the signed incidence matrix of the graph]. Basically what you do is if you have an edge from vertex i to vertex j in the graph, in the matrix you put a column that has a bunch of 0's with a 1 in position i and a -1 in position j. You might think of this as the vector e_i-e_j, where the e's are the standard basis of your vector space. And you're absolutely right, Evelyn, that when you look at the combinatorial dependencies in the graph, they're exactly the linear dependencies in that set of vectors, so in that sense, that matrix perfectly models the graph as far as matroid theory is concerned.
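[ed. note: A minimal numpy sketch of this correspondence, using a triangle graph as the example: each edge becomes a column e_i - e_j, and the cycle shows up as a linear dependence among the columns.]

import numpy as np

# Triangle on vertices {0, 1, 2} with edges (0,1), (1,2), (0,2).
edges = [(0, 1), (1, 2), (0, 2)]
M = np.zeros((3, len(edges)))
for col, (i, j) in enumerate(edges):
    M[i, col], M[j, col] = 1, -1        # column is the vector e_i - e_j

print(np.linalg.matrix_rank(M))         # 2: all three columns are dependent,
                                        # because the three edges form a cycle
print(np.linalg.matrix_rank(M[:, :2]))  # 2: any two of them are independent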

EL: Okay.

FA: So, yeah, that's a really good comparison. One reason that I love matroids is that it turns out that they actually apply in a lot of other different settings. There are many different notions of independence in mathematics, and it was realized over the years that they also satisfy these properties. Another notion of independence that you might be familiar with is the notion of algebraic independence. You learn this in a course in field extensions, and you learn about extension degrees and transcendence bases and things like this. That's the notion of algebraic independence, and it turns out that that notion of independence also satisfies these axioms that Whitney laid out, and so they also form a matroid. So whenever you have a field extension, you also have a matroid.

KK: So what's the data you present? Say X is a matroid. If you're trying to write this down, what gets handed to you?

FA: That's another really good question, and I think it's a bit of a frustrating question because it depends on who you ask. The reason for this is that so many people encounter matroids in their everyday objects that they think of them in very different ways. Some people, if they hand you a matroid, are going to give you a bunch of sets. Maybe this is the most common thing. If you give me a list of vectors, then I could give you the linearly independent subsets of that set of vectors. That would be a list, say 1 and 2 are independent, 1 and 4 are independent, 1, 6, and 7 are dependent, and so on. That would be a set system. If you asked somebody else, then they might think of that as a simplicial complex, and they might hand you a simplicial complex and say that's a matroid. One thing that Birkhoff realized, and this was very fashionable in the '30s at Harvard, is to think about lattices in the sense of posets. If you asked Birkhoff, he would actually hand you a lattice and say that's a matroid. I think this is something that's a bit frustrating for people who are trying to learn matroids. I think there are at least 10 different definitions of what a matroid is, and they're all equivalent to each other. Actually Rota made up the name cryptomorphism for this: you have the same theory, and you have two different axiom systems for the same theory, and you need to prove they're equivalent. This is something that when I first learned about matroids, I hated. I found it really frustrating. But I think as you work in this topic, you realize that it's very useful to have the insight someone in linear algebra would have, the insight somebody in graph theory would have, the insight that somebody in algebraic geometry would have. And to do that, you end up kind of going back and forth between these different ways of presenting a matroid.

EL: Like the clothing that the matroid is wearing at the time. Which outfit do you prefer?

FA: Absolutely.

KK: Being a good algebraic topologist, I want to say that this sort of reminds me of category theory. Can you describe these things as a functor from something to something else? It sort of sounds like you've got these sort of structures that are preserved, they're all the same, or they're cryptomorphic, right? So there must be something, you've got a category of something and another different category, and the matroid is sort of this functor that shows a realization between them, or am I just making stuff up?

FA: I should admit that I'm not a topologist, so I don't think a lot about categories, but I definitely do agree that over the last few years, one program has been to set down stronger algebraic foundations, and there's definitely a program of categorizing matroids. I'm not sure what you're saying is exactly correct.

KK: I'm sure it isn't.

FA: But that kind of philosophy is at play here.

KK: So you mentioned that there was a theorem lurking behind your love of matroids.

FA: So let me first mention one quick application, and then I'll tell you what the object is that I really like.

There's another application of this to matching problems. One example that I think academic mathematicians are very familiar with is the problem of matching job candidates and positions. It's a very difficult problem. Here you have a notion of dependencies; for example, if the same person is offered two different jobs, they can only take one of those jobs, so in that sense, those two jobs kind of depend on each other. It turns out that this setting also provides a matroid. One reason that that is important is that it's a much more applied situation because, you know, there are many situations in real life where you really need to do matchings, and you need to do them quickly and inexpensively and so on. Now when the combinatorial optimization community got hold of these ideas, and they wanted to find a cheap matching quickly, then one thing that people do in optimization a lot is if you want to optimize something, you make a polytope out of it. And so this is the object that I really like and want to tell you about. This is called the matroid polytope.

EL: Okay.

FA: Out of all these twelve different sets of clothing that matroids like to wear, my favorite outfit is the matroid polytope. Maybe I'll tell you first in the abstract why I like this so much.

EL: First, can we say exactly what a polytope is? So, are we thinking a collection of vertices, edges, faces, and higher-dimensional things because this polytope might live in a high-dimensional space? Is that what we mean?

FA: Exactly. If your polytope is in two dimensions, it's a polygon. If it's in three dimensions, it's the usual solids that we're used to, like cubes, pyramids, and prisms, and they should have flat edges, so they should have vertices, edges, and faces like you said. And then the polytope is just the higher-dimensional generalization for that. This is something that in combinatorial optimization is very natural. They really need these higher-dimensional polytopes because if you have to match ten different jobs, you have ten different axes you have to consider, so you get a polytope in ten dimensions.

KK: Sort of the simultaneous, feasible regions for multiple linear inequalities, right?

FA: Exactly. But yeah, I think Edmonds was the first person who said, okay, I want to study matroids, so I'm going to make a polytope out of them. Then one thing that they realized is there is a notion in algorithms of greedy algorithms. A greedy algorithm is when, trying to accomplish a task quickly, at each point in time you just do the thing that seems best at the time. If we go back to the situation of matching jobs, then you might ask one school, okay, what do you want? They would choose a person, and then you'd ask the next school, what do you want, and they would choose the next best person, and so on. We know that this strategy doesn't usually work. This is the no long-term planning solution: you just do immediately what seems best to do. And what the community realized was that matroids are exactly where greedy strategies work. That's another way of thinking of matroids: they're exactly where the greedy algorithm works. And the way they proved this was with this polytope.
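[ed. note: The textbook instance of this is Kruskal's algorithm on the graphic matroid: greedily take the cheapest edge that keeps your edge set independent, i.e., creates no cycle, and you provably get a minimum spanning tree. A minimal sketch with a made-up example graph:]

def kruskal(n, weighted_edges):
    """Greedy minimum spanning tree on n vertices; edges are (weight, u, v)."""
    parent = list(range(n))                 # union-find to detect cycles
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(weighted_edges):  # cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                        # keep it only if it stays acyclic
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3), (5, 0, 3)]))
# [(1, 0, 1), (2, 1, 2), (4, 2, 3)] -- a minimum spanning tree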

So for optimization people, there's this polytope. It turns out that this polytope also arises in several other settings. There's a beautiful paper of Gelfand, Goresky, MacPherson, and Serganova, and they're doing algebraic geometry. They're studying toric varieties. You don't need to know too much about what this is, but the main point is that if you have a toric variety, there is a polytope associated to it. There's something called the moment map that picks up a toric variety and takes it to a polytope. In this very different setting of toric varieties, they encounter the same polytope, coming from algebraic geometry. Also there's a third way of seeing this polytope, coming from commutative algebra. If you have an ideal in a polynomial ring, and again it's not too important that you know exactly what this means, but there's a recipe, given an ideal, to get a polytope out of it. Again, there's a very natural way that, given a very natural ideal, you get the same polytope, coming from commutative algebra.

This is one reason that I like this polytope a lot. It really is kind of a very interdisciplinary object. It's natural: it drops out of optimization, it drops out of algebraic geometry, it drops out of commutative algebra. It really captures the essence of these matroids that have applications in many different fields. So that's the favorite object that I wanted to tell you about.

KK: I like this instead of a theorem in some sense. I learned something today. I mean, I learn something every day. But this idea that, mathematicians know this and a lot of people outside of mathematics don't, that the same structures show up all over the place. Like you say, combinatorics is interesting this way. You count things two different ways and you get a theorem. This is a meta-version of that. You've got these different instances of this fundamental object. Whitney essentially found this fundamental idea. And we can point at it and say, oh, it's there, it's there, it's there, it's there. That's very rich, and it gives you lots to do. You never run out of problems, in some sense. And it also forces you to learn all this new stuff. Maybe you came at this from combinatorics to begin with, but you've had to learn some algebraic geometry, you've had to learn all these other things. It's really wonderful.

FA: I think you're really getting at one thing I really like about studying this field, which is that, I'm always arguing with my students, who'll say, oh, I do analysis, I don't do algebra. Or I do algebra, I don't do topology. And this is one field where you really can't get away with that. You need to appreciate that mathematics is very interconnected and that if you really want to get the full power of the objects and you really want to understand them, you kind of have to learn many different ways of thinking about the same thing, which I think is really very beautiful and very powerful.

EL: So then was the theorem that you were talking about, is this the theorem that the greedy algorithm works on polytopes, or is this something else?

FA: No, so the theorem is a little different. I'll tell you what the theorem is. Out of all the polytopes, there is one which is very fundamental, which is the cube. Now as you know mathematicians are weird, and for us a square is a cube. A segment is a cube. Cubes exist in every dimension. In zero dimensions it's a point, in one dimension it's a segment, in two dimensions it's a square, in three dimensions it's the 3-cube, and in any dimension there is a cube. And so the theorem that Gelfand, Goresky, MacPherson, and Serganova proved, which probably Edmonds knew at least to some extent, since he was coming from optimization, is that matroids are exactly the sub-polytopes of the cube. In other words, you choose some vertices of the cube and you don't choose others, and then you look at what polytope that determines. That polytope is going to be a matroid if and only if the edges of that polytope are all of the form e_i-e_j. This goes back to what you were saying at the beginning, Evelyn, that these are exactly those vectors that have a bunch of zeroes, and then they have one 1 and one -1. So matroid polytopes have the property that every edge is one of those vectors, and what I find really striking is that the converse is true: if you just take any sub-polytope of the cube and the edges have those directions, then you have a matroid on your hands. First of all, I think that's a really beautiful characterization.
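[ed. note: Here is a brute-force check of this characterization for one small example, the uniform matroid U(2,4), whose bases are the 2-element subsets of a 4-element set; this sketch assumes scipy is available. The polytope lives in the hyperplane where the coordinates sum to 2, so we drop the last coordinate before computing the hull.]

import itertools
import numpy as np
from scipy.spatial import ConvexHull

# Vertices: indicator vectors of the 2-subsets of {0, 1, 2, 3}.
verts = [tuple(1 if i in s else 0 for i in range(4))
         for s in itertools.combinations(range(4), 2)]
hull = ConvexHull(np.array(verts)[:, :3])  # project out the fixed coordinate sum

edges = set()
for facet in hull.simplices:               # the facets here are triangles
    edges.update(itertools.combinations(sorted(facet), 2))

for a, b in edges:
    d = np.subtract(verts[a], verts[b])
    assert sorted(d) == [-1, 0, 0, 1]      # every edge direction is e_i - e_j
print(len(edges), "edges, all of the form e_i - e_j")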

KK: It's so clean. It's just very neat.

FA: But then the other thing is that this collection of vectors e_i-e_j is a very fundamental collection of vectors, so you know, this is the root system of the Lie algebra of type A. This might sound like nonsense, but the point is that this is one of about seven families of root systems that control a lot of very important things in mathematics. Lie groups, Lie algebras, regular polytopes, things like this. And so also this theorem points to how the theory of matroids is just a theory of type A, so to say, that has analogues in many other Coxeter groups. It basically connects to the tradition of Lie groups and Lie theory, and it begins to show how this is a much deeper theory mathematically than I think anybody anticipated.

EL: Oh cool.

KK: Wow.

EL: So I understand that you have a musical pairing for us today. We all have it queued up. We're recording this with headphones, and we're all going to listen to this simultaneously. Then you'll tell us a little bit about what it is.

KK: Are we ready? I'll count us down. 3-2-1-play.

EL: There we go.

FA: We'll let this play for a little while, and I'm going to ask you what you hear when you hear this. One reason I chose this was I saw that you like percussion.

KK: I do. My son is a percussionist.

FA: One thing I want to ask you is when you hear this, what do you hear?

KK: I hear a lot.

EL: It has a really neat complex rhythm going.

FA: Do you speak Spanish?

KK: A little. Otra vez.

EL: I do not, sadly.

KK: It's called Quítalo del rincón, which, I'm sorry, I don't know what quítalo means.

FA: The song is called Quítalo del Rincón by Carlos Embales. And he was a Cuban musician. One thing is that Cubans are famously hard to understand.

KK: Sure.

FA: So I think even for Spanish speakers, this can be a bit tricky to understand. So do you have any idea what's going on, what he's singing?

EL: No idea.

FA: So this is actually a math lesson.

KK: I was going to say, he's counting. I heard some numbers in there.

FA: Yeah, yeah, yeah. It's actually a math lesson. I just think, man, why can't we get our math lessons to feel like this? This has been something that has kind of shifted a lot my understanding about pedagogy of mathematics. Just kind of imagine a math class that looks like this.

KK: Is he just trying to teach us how to count, or is there more going on back there?

FA: It's kind of an arithmetic lesson, but one thing that I really like is it's all about treating mathematics as a community lesson, and it's saying, okay, you know, if there's somebody that doesn't want to learn, we're going to put them in the middle, and they're going to learn with us.

KK: Oh. So they're not going to let anyone off the hook.

FA: Exactly. We all need to succeed together. It's not about the top students only.

KK: Very cool. We'll put a link to this on the blog post. I'm going to fade it out a little bit.

FA: Same here. Maybe I can tell you a little bit more about why I chose this song.

EL: Yeah.

FA: I should say that this was a very difficult task for me because if choosing one theorem is hard for me, choosing one song is even harder.

KK: Sure.

FA: As I mentioned, I also DJ, and whenever I go to a math conference, I always set aside one day to go to the local record stores and see what I will find. Oddly enough, I found this record in a record store in, I want to say Ann Arbor, Michigan, a very unexpected place for this kind of music. It was a very nice find that managed to explain to me how my being as a mathematician, my being as a DJ might actually influence each other. As a DJ, my job is always to provide an atmosphere where people are enjoying themselves, and it took me hearing this record to connect for me that it's also my job as a mathematician, as a math teacher, also to create atmospheres where people can learn math joyfully and everybody can have a good experience and learn something. In that sense it's a very powerful song for me. The other thing that I really like about it and why I wanted to pair it with the matroids is I think this is music that you cannot possibly understand if you don't appreciate the complexity of the history of what goes behind this music. There's definitely a very strong African influence. They're singing in Spanish, there are indigenous instruments. And I've always been fascinated by how people always try to put borders up. They always tell people not to cross borders, and they divide. But music is something that has never respected those borders. I'm fascinated by how this song has roots in Africa and then went to Cuba. Then this type of music actually went back to Congo and became a form of music called the Congolese rumba, and then that music evolved and went back to Colombia, and that music evolved and became a Colombian form of music called champeta. In my mind, it's similar to something I said earlier, that in mathematics you have to appreciate that you cannot put things into separate silos. You can't just be a combinatorialist or just be an algebraist or just a geometer. If you really want to understand the full power of mathematics, you have to travel with the mathematics. This resonates with my taste in music. I think if you really want to understand music, you have to appreciate how it travels around the world and celebrate that.

KK: This isn't just a math podcast today. It's also ethnomusicology.

FA: Something like that.

KK: Something about that, you know, rhythms are universal, right? We all feel these things. You can't help yourself. You start hearing this rhythm and you go, yeah, I get this. This is fantastic.

FA: What our listeners cannot see but I can is how everybody was dancing.

KK: Yeah, it's undeniable. Of course, Cuban music is so interesting because it's such a diverse place. So many diverse influences. People think of Cuba as being this closed off place, well that's just because from the United States you can't go there, right?

FA: Right.

KK: Everybody else goes there, and they think it's great. Of course, living in Florida there's a weird relationship with Cuba here, which is a real shame. What an interesting culture. Oh well. Maybe someday, maybe someday. It's just right there, you know? Why can't we go?

EL: Well, thanks a lot. Would you like to share any websites or social media or anything that our listeners can find you on, or any projects you're excited about?

FA: Sure, so I do have a Twitter account. I occasionally tweet about math or music or soccer. I try not to tweet too much about politics, but sometimes I can't help myself. People can find that at @FedericoArdila. That's my Twitter feed. I also have an Instagram feed with the same name. Then if people are interested in the music nerd side of what I do, my DJ collective is called La Pelanga, and we have a website lapelanga.com. We have Twitter, Instagram, all these things. One thing we actually do is collect a lot of old records that have traveled from Africa to the Caribbean to Colombia to various different parts. Many of these records are not available digitally, so sometimes we'll just digitize a song and put it up there for people to hear. If people like this kind of music, it might be interesting for them to visit. And then I have my website. People can Google my name and find information there.

EL: Well thank you so much for joining us.

KK: This has been great fun, Federico.

FA: Thank you so much. This has been really fun.

KK: Take care.

[outro]

Episode 14 - Laura Taalman

Kevin Knudson: Welcome to My Favorite Theorem. I’m your host, professor of mathematics at the University of Florida Kevin Knudson. This is my cohost.

Evelyn Lamb: Hi! I’m Evelyn Lamb, a math and science writer in Salt Lake City, Utah. Yeah, things are going well here. I went to the mall the other day, and I was leaving—I had to go to get my computer repaired, and I was in a bad mood and stuff, and I was leaving, and there was just, I walked into the parking lot, there was this beautiful view of this mountain. It’s a mall I don’t normally go to, and these mountains: Wow, it’s amazing that I live here.

KK: Is this the picture you put on Twitter?

EL: Yeah, or Facebook.

KK: Yeah, that is pretty spectacular. Well, I had a haircut today, that’s all I can say. Anyway, let’s get to it. We are very pleased in this episode to welcome Laura Taalman. Laura, do you want to introduce yourself and tell people about yourself?

Laura Taalman: Sure. Hi, thank you for having me on this podcast. I am extremely excited to be on it. Thank you.

EL: We’re glad you’re here.

LT: I’m a math professor at James Madison University, which is in Virginia. I’ve been here since 2000. We don’t have graduate students in our department, we only have undergraduate students. So when I got here, straight out of grad school, I had been studying singular algebraic geometry, and I just could not talk about that with students when we were doing undergraduate research. And I switched to knot theory. I’ve since switched to many things. I seem to switch to a new hat every year or so. My new hat is 3D printing. I’ve been doing a lot with mathematical 3D printing, but I think I’m still wearing that math jacket while I’m wearing the 3D printing hat.

EL: That’s a very exciting costume.

LT: Yes, it’s a very exciting costume, that’s true.

KK: And for a while you were the mathematician in residence at the National Museum of Mathematics, right?

LT: MoMath, that’s true. I did a semester at that, and that was the start of me living in New York City for a couple years to solve a two-body problem. I spent a couple years working in industry in 3D printing there. I just recently, last year, came back to the university. I now have the jacket and hat problem.

KK: Well, that’s better than the two-body problem.

LT: It’s better than not having a jacket or a hat.

KK: That too, right. So actually I was just visiting James Madison a couple of months ago. Laura’s department was very nice. Actually, my wife was visiting, and I was just tagging along, so I crashed their colloquium and just gave one. And everybody was really nice. I really, you know, I went to college at Virginia Tech two hours down the road. I’d never really spent any time in Harrisonburg, but it’s a lovely little town.

LT: It is.

KK: It’s very diverse. I had lunch at an Indonesian place.

EL: Oh wow.

KK: It was fantastic. I can’t get that here, you know.

LT: It’s an amazing place.

KK: It is. I thought it was really great. Anyway, so, you’re going to tell us about your favorite theorem. You told us once beforehand, but I’ve kind of forgotten. I remember, but this is pretty great. So Laura, what’s your favorite theorem?

LT: My favorite theorem comes from my knot theory phase. It’s a theorem in knot theory. I don’t know how much knot theory I should assume before saying what this theorem is, but maybe I should just set it up a little bit.

KK: Yeah, set it up a little bit.

EL: That would be great.

LT: In knot theory, you’re studying, say, a shoelace: you tie a knot in it and you connect the ends, and you do that again with a different piece of string, and you’re wondering if these could possibly be the same knot in disguise, like you could deform one to another. Of course, we don’t study knots in three dimensions like that because no one can draw that. This is, in fact, how I got into 3D printing: trying to print three-dimensional versions of knots so that I could look at their conformations.

KK: Very cool.

LT: But really mathematicians study knots as planar diagrams. You’ve got a diagram of a knot with crossings: over crossings and under crossings, a collection of arcs in the plane with crossings. A very old result in knot theory is that if two of those diagrams represent the same knot secretly (they might look very different), there is a sequence of what are known as Reidemeister moves that gets from one to the other. Reidemeister moves are super simple moves, like putting a twist in a strand or moving one strand over another strand, or moving a strand over or under a crossing, right? Super simple. It’s been proved that that’s sufficient, that’s all you need to change one diagram into any other equivalent diagram.

KK: OK.

LT: So my favorite theorem is by Joel Hass and Jeffrey Lagarias, I think is his name. Hass is from UC Davis, and Lagarias is at Michigan. And in 2001, they proved an upper bound for the number of Reidemeister moves that it takes to take a knot diagram that’s secretly unknotted and turn it into basically a circle, the unknot. So they wanted to answer this question.

We know we can, if it’s unknotted, turn it into a circle. The question is how many of these Reidemeister moves are you going to need, and even worse than that, if you start with a diagram that has, like, 10 crossings, you might actually have to increase the number of crossings along the way while simplifying the knot. It’s not necessarily true that the number of crossings will be monotonically decreasing throughout the Reidemeister move process. You might increase the number, you might have to increase the number of crossings by a lot. So this is a nontrivial question of how many Reidemeister moves. So they said, OK, look. We want to find one constant that will give you an upper bound, for any knot that’s trivial, on the number of Reidemeister moves to unknot it, and they said that the bound would be of the form 2 times [ed note: Taalman misspoke here and meant to the power instead of times, as is clear from the rest of the conversation] a constant times n, where n is the number of crossings. So if it’s a 10-crossing knot, it would be like 2^10 times this constant, right?

KK: OK.

LT: I was playing around with some numbers, so for example, if you had a 6-crossing knot, right, and if the constant happened to be 10, this would be 2^60, which is over a quintillion.

KK: That’s a lot.

LT: If that constant were 10, and your knot started out with just 6 crossings, that’s a big number. But that is not the bound that they found.
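
[ed note: a minimal sketch of this bound shape in Python, with a made-up constant just to see the growth; c = 10 is illustrative, not the constant from the paper.]

    def reidemeister_bound(n, c):
        """A bound of the form 2^(c*n) on Reidemeister moves for an n-crossing diagram."""
        return 2 ** (c * n)

    # With the illustrative constant c = 10 and a 6-crossing diagram:
    print(reidemeister_bound(6, 10))  # 2**60 = 1152921504606846976, over a quintillion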

KK: It’s not 10.

LT: Their theorem, my favorite theorem, is that they came up with a bound on the maximum number of Reidemeister moves that would be needed to unknot a trivial knot: the constant is 10^11, so the bound is 2^(10^11 times n). So I put this into Wolfram Alpha with n=6. So say you have a 6-crossing knot. It’s not so bad. I put in 2^(10 million [ed note: Taalman misspoke here and meant hundred billion; 10^7, or 10 million, comes up as a bound in a different part of the paper] times 6), with the times 6 in the exponent. I just did this this afternoon, and do you know what Wolfram Alpha said?

KK: It couldn’t do it?

LT: I’ve never seen this. It said nothing.

EL: You broke it?

LT: It didn’t spin and think about it, and it didn’t attempt to say something. It literally just pretended that I did not press the button. This is really a big number.
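
[ed note: the number really is unprintable, but you can count its decimal digits with a logarithm. A minimal sketch in Python, using only the numbers quoted above.]

    import math

    def digits_of_power_of_two(exponent):
        """Number of decimal digits of 2**exponent: floor(exponent * log10(2)) + 1."""
        return math.floor(exponent * math.log10(2)) + 1

    print(digits_of_power_of_two(60))           # 19 digits: about a quintillion
    print(digits_of_power_of_two(10**11 * 6))   # about 1.8 * 10**11 digits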

KK: I’m surprised. You know what it should have done? It should have given you the shrug emoji.

LT: Yeah, that would be great if it had that. That would be perfect. So the reason it’s my favorite theorem, I guess there are a lot of reasons, but the primary reason is: this is ridiculous, right? If I actually give you a 6-crossing knot in reality, you’re not going to need even a quintillion Reidemeister moves, let alone this silence-inducing number that Wolfram Alpha can’t even calculate. So to me, it’s just really funny. And I could talk a little more about that. But it’s an important result because it’s the first upper bound, which is great, but also, it’s just, it’s ridiculous.

KK: It’s clearly not sharp. They didn’t cook up an example.

LT: It’s clearly not sharp.

KK: They didn’t cook up an example where they had to use that many moves.

LT: Right, no, they did not. It’s kind of like what happened with the twin prime conjecture, when people online were looking at the gap size you could guarantee, I don’t know if I’m going to say this right, the gap that infinitely many pairs of primes fall within.

KK: Right, it was 70 million.

LT: And eventually primes would have to keep appearing within that gap. That bound started out being huge, I don’t remember what it was, but it was really big, and it ended up getting better and better and better and better.

KK: Right.

LT: So this is like the first shot in that game for Reidemeister moves: 2 raised to the power of 10^11 times the number of crossings.

KK: Has anybody made that better yet?

LT: They have. So that was in 2001, this exponential upper bound with very large exponent, and in 2011, two different mathematicians, Coward and Lackenby, I think, proved a different bound that involved an exponential tower. That gives you an idea of just how big that first bound was, if this bound is an exponential tower.

EL: And it’s better?

LT: Actually, let me say that slightly differently because this is not necessarily better. Their result was actually a little bit different. Their result wasn’t taking a knot to the unknot. It was taking any knot to any other knot it was equivalent to.

KK: OK.

EL: OK.

LT: This could well be worse, actually. And to tell you the truth, I was not entirely certain how to type this number into Mathematica, into Wolfram Alpha. It could be a lot worse. This is their bound for the maximum number of Reidemeister moves that you need to take one knot diagram to another diagram of a knot it’s ambient isotopic to in 3-space. I’ve got to get my piece of paper to look at this. Their number is what they call exp^(c^n)(n), where n is the sum of the crossing numbers of the two diagrams. The c^n: c is some constant to be determined. It could be laughably large, right? And what exp means is the function that sends x to 2^x. So exp^k(n) means you apply that function k times starting from n: a tower of k twos with an n on top.

KK: Right. 2 to the 2 to the 2 to the…

LT: …2 to the n. So this number is a tower, 2 to the 2 to the 2 to the…, and the height of this tower is c^n, where n is the number of crossings, and then there’s an n at the top. And the number c is 10 to the one millionth power, that is, 10^1,000,000.

KK: Wow.

EL: Wow. So this is bad news.

LT: This is very bad. So the tower is at least 10 to the one million high. I’m sure this is worse than the other one.

KK: It’s got to be worse.

LT: They didn’t try at all to make that low. I did a small example: what if the tower were only height 2 with a 6 on the top, so 2^2^6. And you’re doing your brackets from the top down, so 2 to the quantity 2^6.

EL: Right.

LT: That is over a quintillion.
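
[ed note: a minimal sketch of this iterated exponential in Python; the function name is ours, and only the tiniest towers are computable, which is rather the point.]

    def exp_tower(height, n):
        """A tower of `height` twos with n on top: 2^(2^(...^(2^n))),
        evaluated from the top down."""
        result = n
        for _ in range(height):
            result = 2 ** result
        return result

    print(exp_tower(2, 6))  # 2**(2**6) = 2**64 = 18446744073709551616, over a quintillion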

KK: Sure.

EL: Yeah, like this is Graham’s number stuff.

LT: Yeah, Graham’s number, all that stuff with the arrows.

EL: Yeah, basically you can’t even tell someone how big Graham’s number is because you don’t have the words to describe the bigness of this number.

LT: Yeah, and even with a tower of height 2, I’m getting a quintillion. Their height is 10 to the one million. I already don’t understand what 10 to the one million is.

KK: No. You know this thing where you pack the known universe with protons, do you know how many there’d be?

LT: No. Not many?

KK: 10^126.

LT: Oh my God.

KK: So 10 to the one million. You’ve surely seen Powers of 10, this old Eames movie, right?

LT: Yeah, yeah.

KK: The known universe just isn’t that big, you know? It’s what, 10 to the 30th across or whatever. It’s nothing.

EL: You definitely can’t come up with an example that needs this because the heat death of the universe would occur well before we proved this example needed this many steps.

KK: Yeah.

LT: I think that these mathematicians know how funny their result is. It’s definitely, it’s not just funny. The proofs are very complicated and have to do with piecewise linear 3-manifolds and all this. I don’t understand the proofs. This is very sophisticated, so I’m not besmirching them by saying it’s funny. But I think they understand how crazy this sounds. They’ll say things like, this Coward-Lackenby paper has a line in there like: notice that this solves the problem of figuring out if two knot diagrams are equivalent, because all you have to do is look at every sequence of Reidemeister moves up to that length, look at them all, and see if any of them takes one diagram to the other. Boom, you’ve solved your problem.
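
[ed note: the shape of that “all you have to do” procedure is just a bounded search. Here is a minimal, hedged sketch in Python, with the diagram type and the Reidemeister move generator left abstract, demonstrated on a toy state space; none of the names come from the papers.]

    from collections import deque

    def equivalent_within(start, target, neighbors, max_moves):
        """Bounded breadth-first search: can `target` be reached from `start`
        in at most `max_moves` steps, where `neighbors(state)` yields the
        states one move away? For knots, states would be diagrams and
        `neighbors` would apply every possible Reidemeister move."""
        frontier = deque([(start, 0)])
        seen = {start}
        while frontier:
            state, dist = frontier.popleft()
            if state == target:
                return True
            if dist < max_moves:
                for nxt in neighbors(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, dist + 1))
        return False

    # Toy demonstration on integers, where a "move" is adding or subtracting 1:
    print(equivalent_within(0, 3, lambda k: (k - 1, k + 1), 5))  # True

Of course, run with the actual bound as max_moves, this finishes sometime after the heat death of the universe, which is the joke.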

KK: All you have to do.

LT: All you have to do! Problem solved.

EL: Yes.

LT: Or that, so earlier you asked if the result has been improved upon, and it has, but that wasn’t the reference I wanted to cite for that. It was improved just three years ago by Lackenby, one of the authors of that other result, and this result is polynomial. They found a polynomial bound, not an exponential bound. It’s much better. They found that if n is the number of crossings, then to go from a trivial knot to the trivial circle, this is back to that problem, it’s 236 times n, all raised to the 11th power [ed note: that is, (236n)^11, the reading consistent with the computation below].

KK: OK.

LT: It’s not so bad.

KK: Right.

LT: Not so bad. It is actually pretty bad. But it’s something that Wolfram could calculate. So I did it for example with n equals 3. So say you have a 3-crossing trivial knot. What’s the largest number of Reidemeister moves that you would need according to this bound to unknot it? That would be 236 times 3, all to the 11th power. That is about 2 times 10^31, which is some 20 nonillion.

KK: Right, OK.

LT: 20 nonillion.
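
[ed note: this one is small enough to check exactly in Python, since Python integers have arbitrary precision.]

    n = 3
    bound = (236 * n) ** 11   # Lackenby's polynomial bound, (236n)^11 = 708^11
    print(f"{bound:.2e}")     # about 2.24e+31
    print(len(str(bound)))    # 32 digits, so a couple of tens of nonillions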

EL: So this isn’t great.

LT: But it had a name! Expressible in scientific notation. Positive change.

EL: It didn’t cause Wolfram Alpha to run away in fright.

LT: No. I think this is the best one so far, this 2014 result by Lackenby. I think it’s the best one.

EL: Well that’s interesting, because you know, just for the example of 3, if you try, like, 10 Reidemeister moves, that’s gotta be it. It feels like that has to be so much lower. It’ll be interesting to see if it’s possible to shrink this down more to meet some more realistic bound.

LT: Honestly, 3 is a ridiculous example. I used it because it was the smallest, but you’re right. If you think about it, there really aren’t that many three-crossing diagrams that one can draw.

KK: Right.

LT: Of the ones that are trivial, I’m sure you could find a short path of Reidemeister moves by hand. This result isn’t made for low-crossing knots, really, I think. Or at least not for three. But you’re right, it’s got to be way better than this.

KK: This is where mathematicians and computer scientists are never going to see eye to eye on something. A computer scientist will look at this and say, that’s ridiculous. You have not solved the problem.

LT: I agree. It’s not good enough. They did have one result in this 2014 paper. Remember I said that you may have to increase the number of crossings? Well back in the original 2001 paper, Hass and Lagarias were like, hey, here’s a fun corollary: you only have to increase the number of crossings by at most 2^(10^11 times n), because each Reidemeister move changes the number of crossings by at most two, so the crossing count can’t outrun the number of moves. So that’s their corollary. In 2014, that bound is super significantly improved. They just say it’s (7n) squared. That’s not bad at all. They’re saying it doesn’t have to get worse than that on your way to making it the unknot.
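
[ed note: a quick sense of scale for that improvement, in Python, using just the two formulas quoted above.]

    import math

    n = 6  # say, a 6-crossing diagram

    # 2001 corollary: crossings along the way bounded by 2^(10^11 * n),
    # a number with roughly 1.8 * 10^11 decimal digits.
    print(math.floor(10**11 * n * math.log10(2)) + 1)

    # 2014 improvement: crossings along the way bounded by (7n)^2.
    print((7 * n) ** 2)  # 1764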

KK: You might have to go up and down and up and down and up and down, right?

LT: Right. I guess then they’re saying the most it would ever have to go up is to that.

KK: Yeah.

LT: So things are getting better.

KK: All the time getting better. So part of the fun of this podcast, aside from just learning about absurd numbers, is that we ask our guests to pair their theorem with something. So what have you chosen to pair your theorem with?

LT: That one is actually harder to answer than “What is your favorite theorem?”

KK: Sure.

LT: I could answer that right away. But I’ve thought about it, and I’ve decided that the best thing to pair it with is champagne.

KK: OK.

LT: Here’s why. First of all, you should really celebrate that a first upper bound has been found.

EL: Yeah.

LT: Especially when you have undergraduates who are doing research, there’s this kind of meta question of what it means to have a first upper bound, a completely non-practical upper bound. The fact that that’s worthy of celebration is something I want them to know. It doesn’t have to be practical. The theory of having an upper bound is very important.

KK: Right.

LT: So champagne is to celebrate, but it’s also to get you over these numbers. I don’t know, maybe it represents how you feel when you’re thinking about the numbers, or what you need when you have been thinking about the numbers: a stiff drink. It can be for both.

EL: And champagne is kind of funny, too. It’s got the funny little bubbles, and you’re always happy when you have it. I think it goes very well with the spirit of this theorem. It’s not practical either.

KK: No.

LT: Yeah.

EL: As drinks go, it’s one of the less practical ones.

KK: And if you get cheap champagne, it will give you a headache, just like these big numbers.

LT: If you had an exponential tower of champagne, that would be a serious problem for you.

KK: Yeah.

EL: Yeah.

KK: Oh wow. We always like to give our guests a chance to plug anything they’re working on. You tweet a lot. I enjoy it.

LT: I do tweet a lot. If you want to find me online, I’m usually known as mathgrrl, like riot grrl but for math. If you’re interested in 3D printable mathematical designs, I have a ton of free math designs on Thingiverse under that name, and I also have a shop on Shapeways where you can get great 3D printed mathematical jewelry and stuff.

EL: It’s all really pretty. You also have a blog, is Hacktastic still going?

LT: Hacktastic is still there. A lot of it has been taken over by these tutorials I’ve been writing about 3D printing with a million different types of software. If you go to mathgrrl.com, Hacktastic is one of the tabs on that.

EL: I like that one.

KK: All over the internet.

EL: Yeah. She will definitely bring some joy to your life on Twitter and in the 3D printing world. Yeah, thank you so much for being on here. I’m definitely going to look up these papers and try to conceptualize these numbers a little bit.

LT: These are very big numbers. Thank you so much. It’s been really fun talking about this, and thank you for asking what my favorite theorem is.

KK: Thanks, Laura.

[outro]