# Episode 19 - Emily Riehl


Kevin Knudson: Welcome to My Favorite Theorem, a podcast about mathematics and everyone’s favorite theorem. I’m your host Kevin Knudson, professor of mathematics at the University of Florida. This is your other host.

Evelyn Lamb: Hi, I’m Evelyn Lamb, a freelance math and science writer in Salt Lake City. So how are things going, Kevin?

KK: Okay. We’re hiring a lot, and so I haven’t eaten a meal at home this week, and maybe not last week either. You think that might be fun until you’re in the middle of it. It’s been great meeting all these new people, and I’m really excited about getting some new colleagues in the department. It’s a fun time to be at the University of Florida. We’re hiring something like 500 new faculty in the next two years.

EL: Wow!

KK: It’s pretty ambitious. Not in the math department.

EL: Right.

KK: I wish. We could solve the mathematician glut just like that.

EL: Yeah, that would be great.

KK: How are things in Salt Lake?

EL: Pretty good. It’s a warm winter here, which will be very relevant to our listeners when they listen in the summer. But it’s hiring season at the University of Utah, where my spouse works. He’s been doing all of that handshaking.

KK: The handshaking, the taking them to the dean and showing them around, it’s fun. It’s good stuff. Anyway, enough about that. I’m excited about today’s guest. Today we are pleased to welcome Emily Riehl from Johns Hopkins. Hi, Emily.

Emily Riehl: Hi.

KK: Tell everyone about yourself.

ER: Let’s see. I’ve known I wanted to be a mathematician since I knew that that was a thing that somebody could be, so that’s what I’m up to. I’m at Johns Hopkins now. Before that I was a postdoc at Harvard, where I was also an undergraduate. My Ph.D. is from Chicago. I was a student of Peter May, an algebraic topologist, but I work mostly in category theory, and particularly in category theory as it relates to homotopy theory.

KK: So how many students does Peter have? Like 5000 or something?

ER: I was his 50th, and that was seven years ago.

EL: Emily and I have kind of a weird connection. We’ve never actually met, but we both lived in Chicago and I kind of replaced Emily in a chamber music group. I played with Walter and the gang I guess shortly after you graduated. I moved there in 2011. They’re like, oh, you must know Emily Riehl because you’re both mathematicians who play viola. I was like, no, that sounds like a person, though, because violists are all the best people.

KK: So, Emily, you’ve told us, and I’ve had time to think about it but still haven’t thought of my favorite application of this theorem. But what’s your favorite theorem?

ER: I should confess: my favorite theorem is not the theorem I want to talk about today. Maybe I’ll talk about what I don’t want to talk about briefly if you’ll indulge me.

KK: Sure.

ER: So I’m a category theorist, and every category theorist’s favorite theorem is the Yoneda lemma. It says that a mathematical object of some kind is uniquely determined by the relationships that it has to all other objects of the same type. In fact, it’s uniquely characterized in two different ways. You can either look at maps from the object you’re trying to understand or maps to the object you’re trying to understand, and either way suffices to determine it. This is an amazing theorem. There’s a joke in category theory that all proofs are the Yoneda lemma. I mean, all proofs [reduce] to the Yoneda lemma. The reason I don’t want to talk about it today is two-fold. Number one, the discussion might sound a little more philosophical than mathematical because one thing that the Yoneda lemma does is it orients the philosophy of category theory. Secondly, there’s this wonderful experience you have as a student when you see the Yoneda lemma for the first time because the statement you’ll probably see is not the one I just described but sort of a weirder one involving natural transformations from representable functors, and you see them, and you’re like, okay, I guess that’s plausible, but why on earth would anyone care about that? And then it sort of dawns on you over however many years, in my case, why it’s such a profound and useful observation. And I don’t want to ruin that experience for anybody.
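For the curious, the “weirder” statement involving representable functors that Riehl alludes to can be written down as follows (standard notation, not fixed in the conversation):

```latex
% The Yoneda lemma in its usual form: for a locally small category C,
% an object A of C, and a functor F from C^op to Set, there is a bijection
\[
  \mathrm{Nat}\big(\mathcal{C}(-,A),\, F\big) \;\cong\; FA,
\]
% natural in both A and F. Specializing F to another representable
% functor C(-,B) shows that maps into an object determine it up to
% isomorphism, which is the characterization described above.
```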

KK: You’re not worried about getting excommunicated, right?

ER: That’s why I had to confess. I was joking with some category theorists, I was just in Sydney visiting the Centre of Australian Category Theory, which is the name of the group, and it’s also the center of Australian category theory. And I want to be invited back, so yes, of course, my favorite theorem is the Yoneda lemma. But what I want to talk about today instead is a theorem I really like because it’s a relatively simple idea, and it comes up all over mathematics. Once it’s a pattern you know to look for, it’s quite likely that you’ll stumble upon it fairly frequently. The proof, it’s a general proof in category theory, specializes in each context to a really nice argument in that particular context. Anyway, the theorem is called right adjoints preserve limits.

EL: All right.

KK: So I’m a topologist, so to me, we put a modifier in front of our limit, so there’s direct and inverse. And limit in this context means inverse limit, right?

ER: Right. That’s the historical terminology for what category theorists call limits.

KK: So I always think of inverse limits as essentially products, more or less, and direct limits are unions, or direct sum kinds of things. Is that right?

ER: Right.

KK: I hope that’s right. I’m embarrassed if I’m wrong.

ER: You’re alluding to something great in category theory, which is that when you prove a theorem, you get another theorem for free, the dual theorem. A category is a collection of objects and a collection of transformations between them that you depict graphically as arrows. Kind of like in projective geometry, you can dualize the axioms, you can turn around the direction of the arrows, and you still have a category. What that means is that if you have a theorem in category theory that says for all categories blah blah blah, then you can apply that in particular to the opposite category where things are turned around. In this case, there are secretly two categories involved, so we have three dual versions of the original theorem, the most useful being that left adjoints preserve colimits, which are the direct limits that you’re talking about. So whether they’re inverse limits or direct limits, there’s a version of this theorem that’s relevant to that.

KK: Do we want to unpack what adjoint functors are?

ER: Yes.

EL: Yeah, let’s do that. For those of us who don’t really know category theory.

ER: Like anything, it’s a language that some people have learned to speak and some people are not acquainted with yet, and that’s totally fine. Firstly, a category is a type of mathematical object, basically it’s a theory of mathematical objects. We have a category of groups, and then the transformations between groups are the group homomorphisms. We have a category of sets and the functions between them. We have a category of spaces and the continuous functions. These are the categories. A morphism between categories is something called a functor. It’s a way of converting objects of one type to objects of another type, so a group has an underlying set, for instance. A set can be regarded as a discrete space, and these are the translations.

So sometimes if you have a functor from one category to another and another functor going back in the reverse direction, those functors can satisfy a special dual relationship, and this is a pair of adjoint functors. One of them gets called a left adjoint, and one of them the right adjoint. What the duality says is that if you look at maps out of the image of the left adjoint, then those correspond bijectively and naturally (which is a technical term I’m not going to get into) to maps in the other category into the image of the right adjoint. So maps in one category out of the image of the left adjoint correspond naturally to maps in the other category into the image of the right adjoint. So let me just mention one prototypical example.
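In symbols (using C and D for the two categories and F and G for the adjoint pair, notation not fixed in the conversation), the correspondence Riehl describes is a natural bijection:

```latex
% F : C -> D is the left adjoint, G : D -> C the right adjoint.
% Maps out of the image of F correspond to maps into the image of G:
\[
  \mathcal{D}(FA,\, B) \;\cong\; \mathcal{C}(A,\, GB),
\]
% naturally in A (an object of C) and B (an object of D).
```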

KK: Yeah.

ER: So there’s a free and forgetful construction. So I mentioned that a group has an underlying set. The reverse process takes a set and freely makes a group out of that set, so the elements of that group will be words in the letters and formal inverses modulo some relation, blah blah blah, but the special property of these free groups is if I look at the group homomorphism that’s defined on a free group, so this is a map in the category of groups out of an object in the image of the left adjoint, to define that I just have to tell you where the generators go, and I’m allowed to make those choices freely, and I just need to find a function of sets from the generating set into the underlying set of the group I’m mapping into.

KK: Right.

ER: That’s this adjoint relationship. Group homomorphisms from a free group to whatever group correspond to functions from the generators of that free group to that underlying set of the group.
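The free/forgetful example instantiates that bijection, writing F for the free group functor and U for the underlying-set functor (notation assumed here, not from the conversation):

```latex
% For a set S and a group G, group homomorphisms out of the free
% group F(S) correspond to functions from S into the underlying set U(G):
\[
  \mathrm{Grp}\big(F(S),\, G\big) \;\cong\; \mathrm{Set}\big(S,\, U(G)\big).
\]
% A homomorphism out of a free group is determined by, and freely
% specified by, where it sends the generators.
```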

EL: I always feel like I’m about to drown when I try to think about category theory. It’s hard for me to read category theory, but when people talk to me about it, I always think, oh, okay, I see why people like this so much.

KK: Reading category theory is sort of like the whole picture being worth a thousand words thing. The diagrams are so lovely, and there’s so much information embedded in a diagram. Category theory used to get a bad rap, abstract nonsense or whatever, but it’s shown to be incredibly powerful, certainly as an organizing principle but also just in being able to help us push boundaries in various fields. Really if you think about it just right, if you think about things as functors, lots of things come out, almost for free. It feels like for free, but the category theorist would say, no, there’s a ton of work there. So what’s a good example of this particular theorem?

ER: Before I go there, exactly to this point, there’s a great quote by Eilenberg and Steenrod. So Eilenberg was one of the founders of category theory. He and Saunders Mac Lane wrote a paper, the “General Theory of Natural Equivalences,” in the ’40s that defined these categories and functors and also the notion of naturality that I was alluding to. They thought that was going to be both the first and last paper on the subject. Anyway, ten years later, Eilenberg and Steenrod wrote this book, Foundations of Algebraic Topology, that incorporated these diagrammatic techniques into a pre-existing mathematical area, algebraic topology. It had been around since at least the beginning of the twentieth century, I’d say. So they write, “the diagrams incorporate a large amount of information. Their use provides extensive savings in space and in mental effort. In the case of many theorems, the setting up of the correct diagram is a major part of the proof. We therefore urge that the reader stop at the end of each theorem and attempt to construct for himself (it’s a quote here) the relevant diagram before examining the one which is given in the text. Once this is done, the subsequent demonstration can be followed more readily. In fact, the reader can usually supply it himself.”

KK: Right. Like proving Mayer-Vietoris, for example. You just set up the right diagram, and in principle it drops out, right?

ER: Right, and in general in category theory, the definitions, the concepts are the hard thing. The proofs of the theorems are generally easier. And in fact, I’d like to prove my favorite theorem for you. I’m going to do it in a particular example, and actually I’m going to do it in the dual. So I’m going to prove that left adjoints preserve colimits.

EL: Okay.

ER: The statement I’m going to prove, the specific statement I’m going to prove by using the proof that left adjoints preserve colimits, is that for natural numbers a, b, and c, I’m going to prove that a(b+c)=ab+ac.

KK: Distributive law, yes!

ER: Distributive property of multiplication over addition. So how are we going to prove this? The first thing I’m going to do is categorify my natural numbers. And what is a natural number? It’s the cardinality of a finite set. In place of the natural numbers a, b, and c, I’m going to think about sets, which I’ll also call A, B, and C. The natural numbers stand for the cardinalities of these sets.

EL: Cardinality being the size, basically.

ER: Absolutely. A, B, and C are now sets. If we’re trying to prove this statement about natural numbers, they’re finite sets. The theorem is actually true for arbitrary sets, so it doesn’t matter. And I’ve replaced a, b, and c by sets. Now I have this operation “times” and this operation “plus,” so I need to categorify those as well. I’m going to replace them by operations on sets. So what’s something you can do to two sets so that the cardinalities add, so that the sizes add?

KK: Disjoint union.

EL: Yeah, you could union them.

ER: So disjoint union is going to be my interpretation of the symbol plus. And we also need an interpretation of times, so what can I do for sets to multiply the cardinalities?

EL: Take the product, or pairs of elements in each set.

ER: That’s right. Absolutely. So we have the cartesian product of sets and the disjoint union of sets. The statement is now: for any sets A, B, and C, if I take the disjoint union B+C and then form the cartesian product with A, then that set is isomorphic to, and in particular has the same number of elements as, the set that you’d get by first forming the products A times B and A times C and then taking the disjoint union.
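The categorified statement can be checked concretely for finite sets. Here is a minimal Python sketch (the particular sets A, B, and C are arbitrary choices; tagging elements models the disjoint union):

```python
# A concrete, finite check of A x (B + C)  ≅  (A x B) + (A x C),
# where + is disjoint union (modeled by tagging elements with the
# side they came from) and x is the cartesian product.
from itertools import product

def disjoint_union(s, t):
    """Disjoint union: tag each element with which side it came from."""
    return {("left", x) for x in s} | {("right", y) for y in t}

def cartesian(s, t):
    return set(product(s, t))

A = {1, 2, 3}
B = {"x", "y"}
C = {"y", "z"}  # overlaps B, which is why the union must be *disjoint*

lhs = cartesian(A, disjoint_union(B, C))                # A x (B + C)
rhs = disjoint_union(cartesian(A, B), cartesian(A, C))  # (A x B) + (A x C)

# The evident bijection sends (a, (tag, b)) to (tag, (a, b)).
bijection = {(a, (tag, b)): (tag, (a, b)) for (a, (tag, b)) in lhs}

assert len(lhs) == len(rhs) == len(A) * (len(B) + len(C))
assert set(bijection.values()) == rhs
print(len(lhs))  # 12 = 3 * (2 + 2)
```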

KK: Okay.

ER: The disjoint union here is one of these colimits, one of these direct limits. When you stick two things next to each other — coproduct would be the categorical term — this is one of these colimits. The act of multiplying a set by a fixed set A is in fact a left adjoint, and I’ll make that a little clear as I make the argument.

EL: Okay.

ER: Okay. So let’s just try and begin. So the way I’m going to prove that A times (B+C) is (AxB) + (AxC) is actually using a Yoneda lemma-style proof, because the Yoneda lemma comes up everywhere. We know that these sets are isomorphic by arguing that functions from them to another set X correspond. So if the sets have exactly the same functions to every other set, then they must be isomorphic. That’s the Yoneda lemma. Let’s now consider a function from the set A times the disjoint union (B+C) to another set X. The first thing I can do with such a function is something called currying, or maybe uncurrying. (I never remember which way these go.) I have a function here of two variables. The domain is the set A times the disjoint union (B+C). So I can instead regard this as a function from the set (B+C), the disjoint union, into the set of functions from A to X.

KK: Yes.

ER: Rather than have A times (B+C) to X, I have from (B+C) to functions from A to X. There I’ve just transposed across the adjunction. That was the adjunction bit. So now I have a function from the disjoint union B+C to the set of functions from A to X. Now when I’m mapping out of a disjoint union, that just means a case analysis. To define a function like this, I have to define firstly a function from B to functions from A to X, and also from C to functions from A to X. So now a single function is given by these two functions. And if I look at the first piece, now, which is a function from B to functions from A to X, by this uncurrying thing, that’s equally just a function from A times B to X. Similarly on the C piece, my function from C to functions from A to X is just a function from A times C to X. So now I have a function from A times B to X and also from A times C to X, and those amalgamate to form a single function from the disjoint union (A times B) + (A times C) to X. So in summary, functions from A times the disjoint union (B+C) to X correspond in this way to functions from (A times B) + (A times C) to X, and therefore the sets A times (B+C) and (A times B) + (A times C) are isomorphic.
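The two moves in this proof, currying and case analysis on a disjoint union, can be carried out concretely for finite sets. This sketch (with arbitrary small example sets) enumerates every function from A times (B+C) to X and round-trips it through the correspondence:

```python
# The proof's two moves, made concrete for finite sets:
#   (1) currying:  a function A x Y -> X is a function Y -> (A -> X);
#   (2) case analysis:  a function B + C -> Z is a pair (B -> Z, C -> Z).
# Composing them turns one function A x (B + C) -> X into a pair of
# functions (A x B -> X, A x C -> X), and the process is reversible.
from itertools import product

A, B, C, X = {0, 1}, {"b"}, {"c1", "c2"}, {"*", "!"}

def tag(s, t):  # disjoint union via tagging
    return [("left", x) for x in s] + [("right", y) for y in t]

domain = list(product(A, tag(B, C)))  # the set A x (B + C)

def split(f):
    """Send f : A x (B+C) -> X to the pair (A x B -> X, A x C -> X)."""
    fB = {(a, b): f[(a, ("left", b))] for a in A for b in B}
    fC = {(a, c): f[(a, ("right", c))] for a in A for c in C}
    return (fB, fC)

def merge(fB, fC):
    """The inverse: amalgamate a pair back into one function."""
    f = {(a, ("left", b)): fB[(a, b)] for a in A for b in B}
    f.update({(a, ("right", c)): fC[(a, c)] for a in A for c in C})
    return f

# Enumerate every function A x (B+C) -> X and round-trip it.
all_fs = [dict(zip(domain, values))
          for values in product(X, repeat=len(domain))]
assert all(merge(*split(f)) == f for f in all_fs)
print(len(all_fs))  # |X| ** (|A| * (|B| + |C|)) functions, each recovered
```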

EL: And now I feel like I know a category theory proof.

ER: So what’s great about that proof is that it’s completely independent of the context. It’s all about the formal relationships between the mathematical objects, so if you want to interpret A, B, and C as vector spaces and plus as the direct sum, which you might as an example of a colimit, and times as a tensor product, I’ve just proven that the tensor product distributes over the direct sum, or likewise for modules over commutative rings. That’s a much more complicated setting, but the exact same argument goes through. And of course there are lots of other examples of limits and colimits. One thing that kind of mystified me as an undergraduate is that if you have a function between sets, the inverse image preserves both unions and intersections, whereas the direct image preserves only unions and not intersections. And there’s a reason for that. The inverse image is a functor between these poset categories of subsets, and it admits both left and right adjoints, so it preserves all limits and all colimits, both intersections and unions, whereas the direct image, which is only a left adjoint, preserves only the colimits.
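The asymmetry between inverse and direct images is easy to witness with a tiny example (the particular function below is an arbitrary choice; collapsing two points is exactly what breaks the direct image):

```python
# Inverse image preserves intersections; direct image need not.
# f sends 1 and 2 to the same point "a".
f = {1: "a", 2: "a", 3: "b"}

def direct_image(s):
    return {f[x] for x in s}

def inverse_image(t):
    return {x for x in f if f[x] in t}

U, V = {1, 3}, {2, 3}             # subsets of the domain
S, T = {"a"}, {"a", "b"}          # subsets of the codomain

# Inverse image: preserves both unions and intersections.
assert inverse_image(S | T) == inverse_image(S) | inverse_image(T)
assert inverse_image(S & T) == inverse_image(S) & inverse_image(T)

# Direct image: preserves unions...
assert direct_image(U | V) == direct_image(U) | direct_image(V)

# ...but not intersections: U & V = {3}, so f(U & V) = {"b"},
# yet f(U) & f(V) = {"a", "b"}.
print(direct_image(U & V), direct_image(U) & direct_image(V))
```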

KK: Right. So here’s the philosophical question. You didn’t want to get philosophical, but here it is anyway. So category theory in a lot of ways reminds me of the new math. We had this idea that we were going to teach set theory to kindergarteners. Would it be the right way to teach mathematics? So you mention all of these things that sort of drop out of this rather straightforward fact. So should we start there? Or should we develop this whole library? The example of tensor products distributing over direct sums, I mean, everybody’s seen a proof of that in Atiyah and Macdonald or whatever, and okay, fine, it works. But wouldn’t it be nice to just get out your sledgehammer and say, look, left adjoints preserve colimits. Boom!

ER: So I give little hints of category theory when I teach undergraduate point-set topology. So in Munkres, chapter 2 is constructing the product topology, constructing the quotient topology, constructing subspace topologies, and rather than treat these all as completely separate topics, I group all the limits together and group all the colimits together, and I present the features of the constructions. This is the coarsest topology so that such and such maps are continuous, this is the finest topology so that the dual maps are continuous. I don’t define limit or colimit. Too much of a digression. In teaching abstract algebra to undergraduates in an undergraduate course, I do say a little bit about categories. I guess I think it’s useful to precisely understand function composition before getting into technical arguments about group homomorphisms, and the first isomorphism theorem is essentially the same for groups and for rings and for modules, and if we’re going to see the same theorem over and over again, we should acknowledge that that’s what happens.

KK: Right.

ER: I think category theory is not hard. You can teach it on day one to undergraduates. But appreciating what it’s for takes some mathematical sophistication. I think it’s worth waiting.

EL: Yeah. You need to travel on the path a little while before bringing that in, seeing it from that point of view.

ER: The other thing to acknowledge is it’s not equally relevant to all mathematical disciplines. In algebraic geometry, you can’t even define the basic objects of study anymore without using categorical language, but that’s not true for PDEs.

KK: So another fun thing we like to do on this podcast is ask our guest to pair their theorem with something. So what have you chosen to pair this theorem with?

ER: Right. In honor of the way Evelyn and I almost met, I’ve chosen a piece that I’ve loved since I was in middle school. It’s Benjamin Britten’s Simple Symphony, the third movement, which is the Sentimental Sarabande. The reason I love this piece, so Benjamin Britten is a British composer. I found out when I was looking this up this morning that he composed this when he was 20.

EL: Wow.

ER: The themes that he used, it’s pretty easy to understand. It isn’t dark, stormy classical music. The themes are relatively simple, and they’re things I think he wrote as a young teenager, which is insane to me. What I love about this piece is that it starts, it’s for string orchestra, so it’s a simple mix of different textures. It starts in this stormy, dramatic, unified fashion where the violins are carrying the main theme, and the cellos are echoing it in a much deeper register. And when I played this in an orchestra, I was in the viola section, I think I was 13 or so, and the violas sort of never get good parts. I think the violists in the orchestra are sort of like category theory in mathematics. If you take away the viola section, it’s not like a main theme will disappear, but all of a sudden the orchestra sounds horrible, and you’re not sure why. What’s missing? And then very occasionally, the clouds part, and the violas do get to play a more prominent role. And that’s exactly what happens in this movement. A few minutes in, it gets quiet, and then all of a sudden there’s this beautiful viola soli, which means the entire viola section gets to play this theme while the rest of the orchestra bows out. It’s this really lovely moment. The violas will all play way too loud because we’re so excited. [music clip] Then of course, 16 bars later, the violins take the theme away. The violins get everything.

EL: Yeah, I mean it’s always short-lived when we have that moment of glory.

ER: I still remember, I haven’t played this in an orchestra for 20 years now, but I still remember it like it was yesterday.

EL: Yeah, well I listened to this after you shared it with us over email, and I turned it on and then did something else, and the moment that happened, I said, oh, this is the part she was talking about!

KK: We’ll be sure to highlight that part.

EL: I must say, the comparison of category theory to violists is the single best way to get me to want to know more about category theory. I don’t know how effective it is for other people, but you hooked me for sure.

KK: We also like to give our guests a chance to plug whatever they’re doing. When did your book come out? Pretty recently, a year or two ago?

EL: You’ve got two of them, right?

ER: I do. My new book is called Category Theory in Context, and the intended audience is mathematicians in other disciplines. So you know you like mathematics. Why might category theory be relevant? Actually, in the context of my favorite theorem, the proof that right adjoints preserve limits is actually the watermark on the book.

KK: Oh, nice.

ER: I had nothing to do with that. Whoever the graphic designer is, like you said, the diagrams are very pretty. They pulled them out, and that’s the watermark. It’s something I’ve taught at the advanced undergraduate or beginning graduate level. It was a lot of fun to write. Something interesting about the writing process is I wanted a category theory book that was really rich with compelling examples of the ideas, so I emailed the category theory mailing list, I posted on a category theory blog, and I just got all these wonderful suggestions from colleagues. For instance, row reduction, the fact that the elementary row operations can be implemented by multiplication by an elementary matrix, and then you take the identity matrix and perform the row operations on that matrix, that’s the Yoneda lemma.
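That row-reduction observation can be checked directly. Here is a small illustration with an arbitrary 3x3 matrix and row operation (an example I constructed, not one taken from the book):

```python
# Row operations via elementary matrices: perform the operation on the
# identity to get E, then multiplying E * M performs that same
# operation on any matrix M.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def add_multiple_of_row(m, src, dst, k):
    """Return a copy of m with k * (row src) added to row dst."""
    out = [row[:] for row in m]
    out[dst] = [x + k * y for x, y in zip(out[dst], out[src])]
    return out

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
M = [[2, 1, 0], [4, 3, 1], [0, 5, 2]]

# Apply "row 1 += -2 * row 0" to the identity to get the elementary matrix.
E = add_multiple_of_row(identity, src=0, dst=1, k=-2)

# Left-multiplying by E performs the same row operation on any matrix.
assert matmul(E, M) == add_multiple_of_row(M, src=0, dst=1, k=-2)
print(matmul(E, M))  # [[2, 1, 0], [0, 1, 1], [0, 5, 2]]
```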

KK: Wow, okay.

ER: A colleague friend told me about that example, so it’s really a kind of community effort in some sense.

KK: Very cool. And our regular listeners also found out on a previous episode that you’re also an elite athlete. Why don’t you tell us about that a little bit?

ER: So I think I already mentioned the Centre of Australian Category Theory. So there’s this really famous category theory group based in Sydney, Australia, and when I was a Ph.D. student, I went for a few months to visit Dominic Verity, who’s now my main research collaborator. It was really an eventful trip. I had been a rugby player in college, so then when I was in Sydney, I thought it might be fun to try this thing called Australian rules football, which I’d heard about as another contact sport, and I just completely fell in love. It’s a beautiful game, in my opinion. So then I came back to the US and looked up Australian rules football because I wanted to keep playing, and it does exist here. It’s pretty obscure. I guess a consequence of that is I was able to play on the US women’s national team. I’ve been doing that for the past seven years, and what’s great about that is occasionally we play tournaments in Australia, so whenever that happens, I get to visit my research colleagues in Sydney, and then go down to Melbourne, which is really the center of footie, and combine these two passions.

EL: We were talking about this with John Urschel, who of course plays American rules football, or recently retired. This is one time when I wish we had a video feed for this because his face when we were trying to explain, which of course, two mathematicians who have sort of seen this on a TV in a bar trying to explain what Australian rules football is, he had this look of bewilderment.

KK: Yeah, I was explaining that the pitch is a big oval and there’s the big posts on the end, and he was like, wait a minute.

EL: His face was priceless there.

KK: It was good. I used to love watching it. I used to watch it in the early days of ESPN. I thought it was just a fun game to watch. Well, Emily, this has been fun. Thanks for joining us.

ER: Thanks for having me. I’ve loved listening to the past episodes, and I can’t wait to see what’s in the pipeline.

KK: Neither can we. I think we’re still figuring it out. But we’re having a good time, too. Thanks again, Emily.

EL: All right, bye.

ER: Bye.

[end stuff]