AI_LLM_Stanford_CS229
Stanford_CS229_Machine_Learning_I_Introduction_I_2022_I_Lecture_1.txt
So I am Tengyu Ma. This quarter we are going to have two instructors, me and Chris. I work on machine learning and machine learning theory, including the theory for different topics in machine learning: reinforcement learning, representation learning, supervised learning, and so on. I'd like Chris to say something about himself, whatever he wants to say. Yeah. I'm Chris. I'm also in the machine learning group. I'm really interested in how the systems we build are changing with machine learning. It's been a really interesting time for the last 10 years. I started out working a lot on optimization and how we scale up these big models. That was when machine learning had very few applications in the world around you. Over the last couple of years, we've built things that, hopefully, some of you in this room have used. My students contributed to things like Search and Gmail and assistants and other places. And more recently, I'm really interested in how to make these models robust. And we'll have a great new lecture that Tengyu is going to give about what are called foundation models, these large, self-supervised models that are all the rage. Percy and Tatsu and I co-taught a course about them last term. And this course is really exciting because it gives you that absolutely foundational layer of machine learning that all that stuff is built on. So this is a great time to study it, because it's no longer abstract. You get to use machine learning products every day. And hopefully, you'll get some insight into how they actually work and why there's still so much research to do. So, really excited and looking forward to lecturing you folks. Great. You'll see me and Chris alternate every few weeks. Next lecture you'll see Chris, and then after two or three weeks you're going to see me. So, for this lecture-- OK, the next thing is, let me introduce the teaching team. We're going to have 12 fantastic TAs, one head TA, and a course coordinator. The head TA and the course coordinator will probably be doing most of the work behind the scenes; you don't necessarily have to interact with them very often. They are organizing the whole TA team. We currently have 12 TAs, and we'll probably have more if we have more enrollment. I didn't ask the TAs to show up in the first lecture, just because they would also have to wear masks, and maybe the pictures serve the same need. But you'll see them pretty often in office hours and other settings. Cool. So in this lecture I'm going to spend the first part on logistics and the basic structure of the course, and then I'm going to introduce, at a high level, the topics that are covered by this course. We tried very hard to make everything available online, on a single website. We have this course website, which has links to a few Google Docs. One of them is about all the logistical stuff, and the other is about the syllabus. There are also links to the lecture notes and to some guidelines on the final project. So, in theory, all the information I present today will be a subset of what you can find on the website-- and actually a very small subset.
So I do encourage you to read through the documents to some extent, especially when you have questions-- first go see whether the documents answer them, and then feel free to ask us. The first thing I'm going to talk about is the prerequisites. At least some students in the past say this course is challenging; of course, some students say it's on the easier side. They have different backgrounds. That's why this is my first slide: I think it's important for you to have the right background to be able to achieve your goals in this course. The most important prerequisite is probably some knowledge of probability, on the level of CS109 or a similar Stats course. For example, you should at least have heard of terms like distributions, random variables, expectation, conditional probability, variance, and density. You don't necessarily have to know all of them off the top of your head, but they should be things you have seen in a previous course. Another thing is linear algebra: matrix multiplication, eigenvectors. Linear algebra is offered in Math 104, 113, 205-- actually, there's a longer list of relevant courses that teach linear algebra in the logistics doc. The most important things we need are matrix multiplication and eigenvectors. We also require some basic knowledge of programming, especially in Python and NumPy. If you only know Python but not NumPy, that's pretty much fine, because NumPy is really just some basic numerical operations. And if you don't know Python or NumPy but you know, for example, C++, that's still probably fine, because migrating from C++ to Python is relatively easy in my opinion-- you mostly just have to change the syntax. But if you know nothing about programming, that's probably going to be difficult, because a lot of the homeworks have a math part and a programming part. The most challenging thing I've seen in past homeworks is that, when you write a piece of code and something goes wrong-- which happens all the time; even when I write code, something always seems to go wrong-- you don't know whether it's about the syntax or about the math. These two things sometimes get entangled: you think you derived the wrong equations, but actually you just didn't use NumPy in the right way. So we're going to cover Python and NumPy in some of the TA lectures, just to give you a refresher-- or, if you didn't know them, you can learn something from the TA lectures. But you need some basic programming knowledge. We also have materials for the TA lectures; we're going to have three such lectures, one on each of these topics-- programming, linear algebra, and probability-- to review some of the background for you. This is a mathematically intense course-- at least, depending on your background, a good portion of students found this course mathematically intense. So just a heads-up.
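For concreteness, here is roughly the level of NumPy the lecture seems to have in mind: matrix multiplication and eigenvectors on a small made-up matrix. This is an illustrative sketch added for reference, not part of the course materials, and the numbers are arbitrary.

import numpy as np

# A small 2x2 matrix and a 2-vector (made-up numbers, just for illustration).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, -1.0])

# Matrix-vector and matrix-matrix multiplication use the @ operator.
print(A @ x)        # matrix-vector product
print(A @ A)        # matrix-matrix product

# Eigenvalues and eigenvectors of a square matrix.
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)      # array of eigenvalues
print(eigvecs)      # columns are the corresponding eigenvectors

If operations like these look completely unfamiliar, the TA review lectures on programming and linear algebra are probably worth attending.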
So it's probably good for you to know at least two of those three things relatively well, so that you don't run into entangled issues when you do the homeworks. But that's also why this is exciting and rewarding. With that said, the goal of this course is to give you the foundations of machine learning-- the foundational layer. This is simultaneously an introductory course to machine learning; we don't require you to have taken a machine learning course before this. But on the other hand, we hope that after you take this course, you feel comfortable that you know enough of the basics of machine learning to apply it to some applications. Of course, if you really want to be an expert in particular applications like NLP or vision, you probably have to take those courses. But this course will set up the foundations for the machine learning component of AI in general and of other applications of AI. That's why this course covers a diverse set of topics and does involve some mathematics. We don't have many mathematical proofs-- probably a little bit, but very few. But we do have a lot of mathematical derivations. You'll probably have to do some math derivations in the homeworks, and we're going to do derivations in the lectures as well. By the way, if you have any questions, feel free to stop me; I'm happy to answer them. Yes, the lectures are recorded, and you can find the recordings on Canvas, I guess. The second important thing I want to mention is the honor code. It's probably a little bit awkward to bring this up so early. The reason is that, in the past-- let's be frank-- there have been some issues with honor code violations. I don't want to see them. It's very sad for me to have to report students for honor code violations, but that has happened in the past. That's why I want to put this up front. If you don't intentionally violate the honor code, I don't think there's anything you should worry about. But let me briefly go over this; it's actually a subset of what we have on the course website, but these are the important points. On one hand, we do encourage you to form study groups, so you can collaborate with other people on homework questions. But while you can discuss homework problems in groups, you have to write down your solutions independently. And you also have to write down the names of the people with whom you discussed the homework. I'm copying this from the logistics doc, which is a little bit longer-- you should probably read that piece of text in the doc as well. It is an honor code violation to copy, refer to, or look at written or code solutions from a previous year, including but not limited to official solutions from a previous year, solutions posted online, solutions you or someone else may have written up in previous years, and solutions for related problems. If you apply common sense, you should be fine. As long as you don't intentionally do anything bad, don't be stressed out about it. But on the other hand, there have been reported honor code violations in the past.
So we do check the code using some software, and we also have TAs who deal with these kinds of honor code violations. Anyway, I don't want to give you too much stress about this, but I do want to put it up front. OK, another component I'd like to mention-- besides homework, where it's obvious why we have it-- is the course project. We encourage you to form groups of one to three people, so you might do a project with three people, for example. It's the same criterion whether you have one, two, or three people, and there is more information on the course website. Typically, you'll apply machine learning to some application or some topic you are interested in. This is actually one of the things I really like about this course: every quarter we get probably 100 project submissions, and we see all kinds of topics-- all kinds of applications of machine learning. These are just some topics we have seen in the past, and you are welcome to work on other topics as well. Of course, you can also work on pure machine learning algorithms-- that's also fun-- but many people work on applications of machine learning to other areas, like music and finance, which are interesting. OK, great. And we have homeworks-- four homeworks, as you'll see-- and we are also going to have a midterm. There is no final exam. So the midterm, course project, and homeworks are the main things for the course. Another component of the course is the TA lectures. These are optional; you don't have to attend them if you don't find them useful. There are actually two sets of TA lectures. One type is the so-called Friday TA lecture, or Friday section. We're going to have probably six to seven weeks of these. The first three weeks will be about reviewing the basics, especially the basic concepts related to machine learning. The other weeks are about more advanced topics, which are not required for the course but may be interesting to some subset of you. We also have the discussion sections. The goal of these is to have some interactive sessions. Our course is pretty big-- you can feel free to ask questions, but it's a little less interactive per person compared to other courses. So we're going to have these small sessions led by TAs, whose goal is to imitate a more traditional classroom setting and also to bridge the gap between the lectures and the homeworks. Basically, the TAs will largely work through problems that are very similar to the homeworks, or sometimes simpler than the homeworks, so that if you need it, they will help make it easier to solve the homework questions-- and the midterm. These sessions will be more interactive: the TAs will probably let you do some questions live, and maybe present your solutions and discuss with other students, and so on. The exact time and format can be found in the Google Doc about logistics. Oops. OK. So there is a lot of other information on the course website and in the Google Doc; the doc is pretty comprehensive. For example, the recordings can be found on Canvas.
There's a course calendar on Canvas. There's a syllabus page which links to the lecture notes. And we're going to have Ed, the platform for question answering. We do encourage you to use that to communicate with us; in almost all situations, you should probably use Ed. You can have private posts or anonymous posts-- different types of posts-- depending on what you need. If you don't have access to it, then you probably have to email some of us; you can email the head TA to give you access. There's Gradescope, which is used to submit homeworks. And there are late day policies, which you can find in the doc as well. One thing I need to mention as a heads-up is that we don't allow late days for the final project. The reason is that, especially for spring quarter, the grading deadline is very tight-- it's pretty much just a few days after the final exam week-- and especially because some students have to graduate, the timeline is very strict. We don't want to make the final project deadline very early either, because then it would conflict with the homework deadlines and so on. So the final project deadline-- I think it's on the Monday of finals week; double-check that. We tried to put it as late as possible, but because of the final grading deadline, we don't allow late days for the final project. There are some other FAQs in the Google Doc as well. Any other questions before I move on to the more scientific topics? Just a question: for the discussion sections, will we be assigned to a specific session, or do we get to choose which discussion sessions we go to? Right. So currently we have two TAs offering two discussion sessions. We will try to make sure that the materials in the two sessions are pretty much the same. I think we haven't set the times yet. You can feel free to choose any session you want to go to. It's probably best for you to consistently go to one session, so the TA knows you better, but you don't necessarily have to. And this is also optional; you don't have to go to all of them-- it depends on your needs. Other questions? OK. Sounds great. Then I will move on to the more scientific part of the course. As I said, the main goal of this course is to set you up with the foundations of machine learning, and we're going to cover a pretty diverse set of topics in a somewhat mathematical way. Let me start with some definitions of machine learning. What is machine learning? As you can imagine, for such a hot topic that people are constantly researching, there's probably no unique definition that fits everything. But I tried to find some historical definitions of machine learning which I think describe the field pretty well. In 1959-- I think this is probably the first time the phrase "machine learning" was introduced-- Arthur Samuel said that machine learning is the field of study that gives the computer the ability to learn without being explicitly programmed. I guess "without being explicitly programmed" is the important part. This is from the paper titled "Some Studies in Machine Learning Using the Game of Checkers-- Recent Progress."
I don't exactly know what the game of checkers is, so don't ask me about the rules of the game. But the point is that if you explicitly write a piece of code that plays checkers, that doesn't really mean you are using machine learning, right? If you just say, I have this fixed strategy, which I know is actually very good for checkers-- the first step would be this, and the second step would be that move-- and I explicitly code that into my computer with some branching algorithm, that probably doesn't count as machine learning. If you use machine learning, you have to rely on the computer to learn without being explicitly programmed. So you shouldn't have explicit programming. But how do you learn? How do you give the computer the ability to learn? The second definition of machine learning, by Tom Mitchell, gives more context about how you actually let the program learn without being explicitly programmed. It says that a computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E. The rhythm is kind of nice. There are several important concepts in this passage. One is the experience E. Let's still use the example of the game of checkers. The experience, in this case, could mean the data. The data could be games played by the program with itself, games played by humans in the past, or other kinds of data collected from other sources of information-- though here, mostly, it means data collected from the program playing itself or from games played by humans. So the experience mostly means data. Then there's the concept of a performance measure, which is important in machine learning. Of course, there's no unique performance measure; for different tasks, you have different measures of performance. But this metric, the performance measure, is pretty important. Here, the performance measure could be the winning rate. It could be the winning rate plus, for example, the number of steps you play-- you probably want to win as fast as possible. In some other cases, the performance measure could be how accurately you can predict something. So you can define many performance measures. And actually, if you look at machine learning research, some papers are about understanding what the right performance measure is-- what the right way is to formulate the problem-- and some papers are about, given the performance measure, how to make the performance as good as possible. And there's the last part, where it says the performance at the tasks in T, as measured by P, improves with experience. What does "improves with experience E" mean? It means that if you have more experience-- if you have more data-- then your algorithm should have better performance. That, in some sense, is evidence that you learned something from the experience. If you have more and more experience and your performance does not improve, maybe that doesn't really mean you learned something, right?
That could probably just mean that you explicitly programmed some strategy, and that strategy wouldn't improve as you gain more experience. So in some sense, those last few words indicate that you are learning from the experience. All right. The final thing is the tasks, right? Here, the task is really winning-- it's this context of playing the game. And we are going to see many different types of tasks, actually, just in this lecture. There are tasks about predicting labels-- predicting something given the input. Or the task could be finding certain structures in the data. Or the task could be something like this, where you want to make decisions about how to play the game. In any case, feel free to stop me-- just raise your hand, and I'm happy to answer any questions. This lecture is supposed to be very high-level, so feel free to ask anything. So, speaking of tasks, here is a pretty simplistic, task-based taxonomy of machine learning. I don't think everyone agrees with this 100%, but it's a reasonable high-level baseline: supervised learning, unsupervised learning, and reinforcement learning. I'm going to introduce these separately, but it's not as if these tasks are completely separate-- the real picture would be more like this, with some overlaps. In reinforcement learning, you probably have to use supervised learning as a component. And as I said, for many machine learning applications, people are trying to figure out the right way to formulate the question. Maybe for some applications, you have to use two of these together. So they are not only tasks; sometimes they can also be viewed as tools or methods to solve your problem. Maybe some problems require a formulation that involves all three of these ingredients in some way. But to a first approximation, you can think of them as three roughly separate types of tasks. I'm going to introduce supervised learning first. In this lecture, we are going to use house price prediction as a running example. I'm going to introduce this in a relatively abstract way, but you can think of house price prediction as the application. What you are given is a data set that contains n examples. These n examples are n pairs-- n pairs of numbers or vectors, where x could be a vector or a number. Let's say x is a number and y is also a number. So you have n pairs of (x, y) numbers, and you can draw these numbers here as a scatterplot, where every cross is just one (x, y) pair. As shown in the caption, x is the square footage and y is the price. So every example is a pair of square footage and price, and you are trying to use the square footage x to predict the price y. That's the task. In Tom Mitchell's language, this data set is the experience. In more standard machine learning language, we just call this data set our data. Basically, our goal is to learn, from the data set, how to predict the price given the square footage of the house. So if x is 800, then what is y? And this x might not appear in the data set-- if your x already shows up in the data set, then it's easy; you can just read it off.
But x could be something you haven't seen in the data set. One way to handle this-- you have probably seen it in other courses-- is to do a linear regression. You fit a line, and then, when you predict, you just read off the corresponding number on that line: what is the corresponding y when x is 800? And of course, you can do other things. For example, you can try to fit a quadratic curve, which, in this artificial example I created, will actually fit the data better. In lectures two and three, our goal will be to discuss how to fit a linear model and how to fit a quadratic model to the data to predict the house price. Of course, house price prediction is only one application. You can imagine many other applications where you are given a data set of (x, y) pairs and your goal is to predict y given x. For example, we can simply make the house price prediction problem a little more complicated. In the previous slide, we used the size to predict the price. But you probably know more about the house-- for example, the lot size, and maybe other things. So suppose you also know the lot size; then your goal could be to predict the price using both the size and the lot size. We call these different dimensions of the input x features: size is one feature, and lot size is another feature. So now you have two features of a particular house, and you want to predict the price based on those two features. And now, if you draw your data, there will be three dimensions-- x1, x2, and y-- and you can plot them in a three-dimensional graph. As I said, the things you know at prediction time-- the size and lot size-- are called features or inputs, and in this case the features are two-dimensional. And typically people call the price the label or output. You are trying to find a function which maps the input to the output. Another heads-up: in machine learning, almost every concept has more than one name. Some people call these features, some people call them inputs, and in other cases there are yet other names. We'll try to be comprehensive-- I'll tell you the different names, but we're going to use one of them. In the lectures we will mostly use input and output, because those are a little less ambiguous; "features" can sometimes mean other things as well. Again, everything is the same in terms of the mathematical notation; the only difference is that now your x is two-dimensional. Let me explain the notation here a little bit, which will be used consistently in the lectures. The superscript denotes which example you are talking about-- it's the index of the example. The subscript denotes the coordinate of the data. So x superscript i is a two-dimensional vector, and x superscript i with subscript 1 or 2 is the corresponding coordinate of that two-dimensional vector. And also, as I said, the price-- the y-- is called the label or output. Sometimes labels are also called supervision; generally, "supervision" means the set of labels. That's why this is called supervised learning: you do observe labels in the data set.
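As an aside, here is a minimal NumPy sketch of the linear and quadratic fits just described, on made-up (square footage, price) data. The numbers and the use of np.polyfit are illustrative assumptions on my part, not the course's actual dataset or prescribed method.

import numpy as np

# Made-up training data: square footage (x) and sale price in $1000s (y).
x = np.array([500.0, 750.0, 1000.0, 1250.0, 1500.0, 2000.0])
y = np.array([150.0, 210.0, 300.0, 330.0, 410.0, 540.0])

# Fit a linear model y ~ w1*x + w0 (degree-1 least squares).
w_lin = np.polyfit(x, y, deg=1)

# Fit a quadratic model y ~ w2*x^2 + w1*x + w0 (degree-2 least squares).
w_quad = np.polyfit(x, y, deg=2)

# Predict the price of an 800-square-foot house with both models.
x_new = 800.0
print(np.polyval(w_lin, x_new))   # prediction from the linear fit
print(np.polyval(w_quad, x_new))  # prediction from the quadratic fit

Lectures two and three derive how such fits are actually computed; this sketch only shows what "fit a model, then read off the prediction at x = 800" looks like in code.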
And the data set is sometimes called the training data set or training examples-- there are multiple names for it. Any questions? You can also have high-dimensional features. Before, we only had two dimensions, but in many cases, if a house is listed online for sale, you know a lot more about it. Then you can have a high-dimensional vector-- say, a d-dimensional vector-- where each dimension means something: maybe the number of floors, the condition, the zip code, and so on. And you use this high-dimensional vector to predict y, the label you are trying to predict. In lectures 6 and 7, we are going to talk about infinite-dimensional features, actually. In some cases, you can combine these features into a lot of other features-- you can say, I don't use x1 as my feature, I actually use x1 times x2 as my feature, the living size times the lot size. I don't think that makes a lot of sense for this application, but in other cases you might take the product of two raw features-- two dimensions of the input you have-- and use that as a new feature. We're going to talk about how to deal with infinite-dimensional features as well. And in some other lectures, we're going to talk about how to select features based on data. Maybe not all of these features are useful; if you use all of them, then maybe you overfit, which is a concept we're going to talk about. The model may get confused if there is too much information available, so you may want to select what is most important. Maybe all of these seem important, but maybe there are other features that are not important for price prediction. That's another concept I'm introducing in this first lecture; we'll talk about it more later as well. Typically, there are two types of supervised learning problems, and the distinction is based on what kind of labels you have. One type is called a regression problem: these are problems where your label y is a real number. You are predicting, for example, something like a price-- a continuous variable. The other type is called classification: these are cases where the label is a discrete variable. What does that mean? It means your label set is, for example, a discrete set with two choices, yes and no. In this case, you can change the question: given the size and lot size, you can ask, what is the type of this residence? Is it a house or a townhouse? It's not a continuous prediction problem; it's really just predicting one of two choices. And you can make this problem more complicated-- for example, you can have multiple choices, not just two. In this case, one way to plot the data set is the following. You have a two-dimensional graph where one axis is the size and the other is the lot size. Then, for every dot, if it's a triangle, it means it's a house, and if it's a circle, it means it's a townhouse. That's at least one way to visualize a classification data set where the labels are discrete: you just use a triangle or a circle to indicate the label of each example. Sorry, my animation. And then there's the kind of question you want to solve.
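The next part of the lecture describes fitting a linear classifier that separates the two kinds of dots. As a preview, here is a minimal NumPy sketch on made-up (size, lot size) data. Fitting the boundary by least squares on plus/minus-one labels is just one simple choice I am assuming for illustration, not necessarily the method the course teaches.

import numpy as np

# Made-up classification data: columns are (size, lot size); labels are
# +1 for "house" and -1 for "townhouse".
X = np.array([[2000.0, 6000.0],
              [2400.0, 7000.0],
              [2600.0, 6500.0],
              [1200.0, 1500.0],
              [1000.0, 1200.0],
              [1400.0, 1800.0]])
y = np.array([+1, +1, +1, -1, -1, -1], dtype=float)

# Add a constant column so the decision boundary need not pass through the
# origin, then fit the weights by least squares.
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

# Classify a new residence by the sign of its linear score.
x_new = np.array([2200.0, 5500.0, 1.0])
print("house" if x_new @ w > 0 else "townhouse")

The decision boundary here is the set of points where the linear score is zero, i.e. the line the lecture draws between the triangles and the circles.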
The question you want to solve is: now, if you give me a residence-- a two-dimensional vector with the size and lot size as the input-- what is the type of this residence? Is it a house or a townhouse? And one way to do it-- OK, I see-- is that you can fit a linear classifier that distinguishes these two types of dots. Then your answer here would naturally be "house," because the point is on this side of the line, so it should be consistent with all the other examples on the same side of the line. I guess a later lecture will be about classification problems. In the next few slides, I'm going to talk about some broader applications of machine learning, which we won't necessarily cover. Take image classification-- we're probably going to have one homework question on image classification. The type of problem is that you are given all of these images, and every image has a label which describes the content of the main object in the image. Of course, in other cases, you may have multiple objects in the same image, but here, let's focus on the simple setting where every image has a single important object, and the label basically describes what that object is. You are given this data set. This is actually a real data set created by Stanford people, led by Professor Fei-Fei Li's team, called ImageNet. This is a very important data set-- you probably should remember its name, because this is pretty much the data set that, in some sense, made deep learning take off in the last 5 to 10 years. After the creation of this data set and some of the new deep learning algorithms with neural networks, machine learning took off, and we were able to make a lot of progress because of this data set. Speaking of the data set, here I'm only trying to describe the format, or the task. Basically, your x is the raw pixels of the image, where you represent the image as a sequence of numbers-- actually, a matrix of numbers. And your y is the main object of the image. You can have other kinds of tasks in vision-- for example, object localization or detection. Given an image, you can ask, how do I localize-- find-- each of the important objects with a bounding box? We are not going to cover anything like this, because these are more specific to vision applications. But here, the point is that your y becomes a bounding box. How do you represent a bounding box? You don't have to know this, but if you are interested, the way to represent a box is by the coordinates of this corner and that corner. These two coordinates-- two points, four numbers-- describe the box. So y becomes four numbers instead of just one number. And you can have even more complex labels, or y's, in other applications. For example, in natural language processing, which is the area dealing with language problems-- for example, machine translation-- you can have the problem where you want to translate English to Chinese. (I don't know what happened with my pointer.) Your x is the English sentence, and your y is the Chinese sentence-- or a sentence in some other language. And now you can see that, even though y is a discrete set, the family of y is the family of all possible sentences in Chinese, right?
So y looks discrete, but y is much more complicated than in the house-versus-townhouse application, because you have so many choices of y-- an almost exponential, effectively infinite number of choices. So you have to deal with it in somewhat different ways. I think we are going to cover a little bit about machine translation, or this kind of question, in one of the lectures we added this year-- I guess Chris mentioned that. We are going to talk a little bit about large language models for language applications. But, on the other hand, this course only covers the basics-- the foundational techniques of supervised learning. So we will talk about language applications, but if you really care about a particular application and how to solve it the best way, you would probably have to take a more specific course for that application. OK. Before I move on to unsupervised learning, any questions about supervised learning? In the translation case, would you say it's a regression or a classification problem? So, would I say it's a regression problem or a classification problem? I would say it's a classification problem, because the family of y is still, technically, discrete-- you still have a finite number of possible y's, assuming the number of Chinese sentences is finite, even though that number is very large. But this is a good question, because you cannot treat it as simply as the most vanilla classification problems-- you have to treat it somewhat differently. If you view it as a vanilla classification problem, you're going to run into other issues, just because the set of y's is too big. When would you use infinite-dimensional features? When would you use infinite-dimensional features? I might not have a very clear answer right now, because this depends a little bit on some of the other things we're going to teach. But generally-- OK, first of all, how do you create an infinite number of features? You have to create them from x. For example-- I think I alluded to this a little bit earlier-- suppose you have this many features, maybe a hundred features. How do you create more features? You use combinations of the existing features, and you can come up with a lot of different combinations. For example, you can have x_d to the power of k, where k could be any integer. That's how you create an infinite number of features. And why would you want to use them? Sometimes it's because you don't know which one is best. So you say, I'm going to create all the possible features I can think of, and I'm going to let the machine learning model decide which feature is the most useful, or how to combine these features. That's why we use infinite-dimensional features. In most cases in reality, you don't literally have to use infinite-dimensional features; after you run the algorithm, you find out that some features are more important than others. But before you run the algorithm, you don't know which ones are useful, so you let the machine learning algorithm figure that out. And actually, one of the interesting things is that even if the dimension of the features is infinite, it doesn't mean that your runtime has to be infinite.
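To make the "combinations of existing features" idea concrete, here is a small NumPy sketch that expands a raw scalar input into the monomial features x, x^2, ..., x^k. The particular expansion and the made-up numbers are illustrative assumptions, not the course's construction.

import numpy as np

def poly_features(x, k):
    """Map a scalar input x to the monomial features [x, x^2, ..., x^k]."""
    return np.array([x ** j for j in range(1, k + 1)])

# Made-up one-dimensional inputs (e.g., house sizes) expanded to degree 5.
xs = np.array([1.0, 2.0, 3.0])
Phi = np.stack([poly_features(x, k=5) for x in xs])
print(Phi.shape)  # (3, 5): three examples, five derived features each

Explicitly materializing such features gets expensive quickly as the degree and the number of raw inputs grow, which is exactly why the runtime tricks mentioned next matter.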
So there are some tricks to reduce the actual runtime. Even though you are implicitly learning with infinite-dimensional features, your algorithm's runtime and memory are actually finite-- and sometimes it can even be pretty fast. These are great questions-- thanks for all the questions. Any other questions? OK. So the second part of the course will be about unsupervised learning. I think Chris will probably give about five lectures on unsupervised learning. In unsupervised learning-- still using the house data set as an example-- the basic idea is that you are only given a data set without labels. You only see the x's, not the y's. So you don't know how the houses in the data set were sold in the past. Or, still using the townhouse-versus-house example: in the supervised case, you have these triangles and circles to indicate what the labels are, but in the unsupervised case, you just don't have that part of the information-- you just see a bunch of dots in the scatterplot. But as you will see-- even if you just see this, as a human, you can somehow tell that this bunch of points here is very different from that bunch of points there. So maybe there are two types of residences going on here. Even though you don't see the triangles and circles, you are still able to tell that there is something going on. That's the nature of unsupervised learning: we want to discover interesting structure in the data without knowledge of the labels. You want to figure out the structure hidden in the data. For example, in this case, what you can do is try to cluster these points into groups. You want to divide the points into groups and say that each group probably has some kind of similar structure. This probably looks like a very good clustering-- at least, as a human being, you probably wouldn't cluster it any other way, and a good algorithm would probably produce the same thing. And if you produce this, then essentially you have figured out that there are two types of residences in this data set, even though you don't know the names of these two types, because the algorithm wouldn't know the words "townhouse" or "house." But the algorithm knows there are two types of things going on in the data set. In lectures 12 and 13, we are going to talk about a few different algorithms for discovering these structures-- k-means clustering and mixtures of Gaussians. There are other kinds of applications. For example, I think this is a paper by Daphne Koller's group-- she is an adjunct professor here at Stanford. The application is gene clustering. The idea is that you have a lot of individuals, and for this particular part of the genes, you can group the genes of individuals into different groups. You can see-- I guess even visually-- that there are some clusters here. And it turns out that each of these clusters corresponds to how the individuals react to a certain kind of medicine. Once you can group people like this, you can probably apply the right type of treatment to each group. Now, here is another example, which is probably a little easier to understand.
This type of problem is called latent semantic analysis-- I don't expect you to understand what each of those words means; it's just a name, LSA. The idea is that you look at a bunch of documents, and every document has a lot of words. You look at which words show up in which documents, and how many times. Each entry here-- suppose you pick one entry-- is how often the word "power" shows up in the corresponding column, the document-- Document 6, there. So every entry is how often a word shows up in a document. And if you look at this, it doesn't look like there's any pattern-- the structure is unclear. But if you use the right machine learning algorithm, you can reorder, or regroup, these words and documents in the following way. (Let me see if the video is working.) Right. Basically, you permute the documents and the words, and then you see this interesting, roughly block-diagonal structure. Not very prominent, but still interesting enough. And now you can see that each of these blocks has some particular, interesting meaning. For example, this group of documents and words is clearly about something like space-- shuttle, space, launch, booster. These are all about space travel. So basically, you learn that these four words have similar meanings, and these three or four documents are about this topic. By doing this-- at least in this application-- you can figure out the topics in your data set. You can figure out that there are, say, one, two, three, four, five topics here. Each topic is more likely to be associated with a certain type of word, and every document is most likely about one topic-- sometimes about two topics. Then, once you figure out these topics, you can have some humans interpret what each of these topics is. And given a new document, you can figure out what topic the new document is about. This is actually a very popular tool in many of the social sciences-- actually, even I was involved in some projects like this during my PhD. Social scientists have text-- maybe a lot of blog posts about politics-- and they want to understand, for example, what trends happen in the blog posts. To do that, they have to know what topics each blog post is about. You don't want to label them one by one, because maybe there are a million blog posts. So they use this to group the blog posts in certain ways, and then they can do statistics to understand what is happening across all of these blog posts. And this applies to other things beyond politics-- you can apply it to many fields, like history, psychology, and so on. This was actually an algorithm discovered probably 20 years ago, maybe even 30 years ago, and it is still pretty popular in social science. Of course, there are even more advanced algorithms these days, beyond this, which we are also going to discuss. The next one is actually one of the more recent advancements-- I think this was around 2013, 2014, about seven or eight years ago.
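Before moving on to that newer method, here is a minimal sketch of the LSA idea just described. The lecture only describes the effect (regrouping words and documents into topics); a standard way to compute it, which I am assuming here, is a truncated SVD of the word-document count matrix, and the counts below are made up.

import numpy as np

# Made-up word-by-document count matrix: rows are words, columns are documents.
# The first three words co-occur in the first two documents, the last three
# in the last two, so there are roughly two hidden "topics".
counts = np.array([
    [3, 2, 0, 0],   # shuttle
    [2, 4, 0, 0],   # launch
    [1, 2, 0, 0],   # booster
    [0, 0, 3, 1],   # election
    [0, 0, 2, 4],   # senate
    [0, 0, 1, 2],   # vote
], dtype=float)

# Truncated SVD: keep the top-2 singular directions as "topics".
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
word_topics = U[:, :k] * S[:k]   # each word's loading on the two topics
doc_topics = Vt[:k, :].T         # each document's loading on the two topics
print(np.round(word_topics, 2))
print(np.round(doc_topics, 2))

The word-embedding methods described next learn their vectors differently and from much larger corpora, but the output plays a similar role: one vector per word that reflects which contexts it appears in.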
So what happens here is that you have a very, very large unlabeled data set, which is Wikipedia. You just download all the documents from Wikipedia-- there is no human labeling; they are just raw documents. And what you do is learn from these documents using some algorithm, and what you eventually produce is so-called word embeddings: you represent every word by a corresponding vector. Why would you want to do that? The reason is that these vectors are, basically, numerical representations of the discrete words, and there are some nice properties of these vectors that capture the semantic meanings of the words. What happens is that similar words will have similar vectors-- that's part of what I mean by the word being encoded in a vector-- and the relationships between words will be encoded in the directions of the vectors. This sounds a little abstract; maybe it's easier with this figure. This actually happens in reality. If you look at the vectors-- each point is the vector for that word-- Italy has a vector, say this point, France has a vector, and Germany has a vector. You'll find that the vectors for all the countries are in somewhat similar directions. For example, if you have another country, USA, you would probably find its point somewhere nearby. So all the countries have vectors in similar directions, and all the capitals are also in similar directions. That's what I mean by the vectors encoding some kind of semantic similarity between words. And also, interestingly, the directions encode some kind of relationship. If you look at the difference between Italy and Rome-- this direction-- and you do the same thing for France and Paris and for Germany and Berlin, you will see that these three directions are very similar to each other; they are roughly parallel. So at least one application of this is: suppose you are given, say, US, which is a vector here, and you want to know what the capital of the US is. You should probably go along this direction and search for a point, and that point is likely to be the capital. Maybe you'll find DC, or Washington-- I guess you'll find DC there, because I think Washington is ambiguous, which is a little trickier. Actually, this is an interesting point: the vector for Washington would be tricky-- it's not clear where it would be, because Washington has multiple meanings. It's a state, it's a person. So sometimes you have this kind of ambiguity. And then you can have more complex clusterings of the words. Here, what happens is that you can also use these vectors to cluster the words into groups. For example, you have these scientific words, some of which I don't even know, and you can use clustering algorithms on the vectors to figure out what kind of topics, or what kind of scientific areas, they belong to.
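Going back to the country-and-capital picture for a moment, here is a toy illustration of the vector arithmetic described above. The 2-D vectors are made up by hand purely to show the offset idea; real embeddings are learned from text and have hundreds of dimensions.

import numpy as np

# Toy, hand-chosen 2-D "word vectors" where country -> capital offsets are
# roughly parallel; real embeddings are learned, not hand-set like this.
vec = {
    "italy":   np.array([5.0, 1.0]), "rome":   np.array([5.5, 3.0]),
    "france":  np.array([4.0, 1.2]), "paris":  np.array([4.5, 3.2]),
    "germany": np.array([6.0, 0.8]), "berlin": np.array([6.5, 2.8]),
    "usa":     np.array([3.0, 1.1]), "dc":     np.array([3.5, 3.1]),
}

def closest(query, exclude):
    """Return the stored word whose vector is closest to `query` (cosine)."""
    best, best_sim = None, -np.inf
    for word, v in vec.items():
        if word in exclude:
            continue
        sim = query @ v / (np.linalg.norm(query) * np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# "usa" + ("rome" - "italy") lands near the capital of the USA in this toy setup.
query = vec["usa"] + (vec["rome"] - vec["italy"])
print(closest(query, exclude={"usa", "rome", "italy"}))  # prints "dc"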
And you can also have hierarchical clusterings to deal with certain kinds of overlap, because, for example, mathematical physics would probably be close to both the physics vectors and the math vectors-- somewhere in the middle of the two. So there are many different kinds of interesting structure in these word vectors that you can leverage to solve your tasks. I'm a little conflicted here, because to do all of this exactly requires a few things we haven't discussed yet, but we will discuss some of this in later lectures. And most recently, in the last two or three years, there is a new trend-- a new breakthrough-- in machine learning: these large language models. Many of us are very excited about them; Chris mentioned that, and at Stanford there are actually a lot of people working on these large language models. Roughly speaking, these are machine learning models for language, and they are trained on very large-scale data sets-- for example, Wikipedia, as I discussed before, or sometimes something even bigger than Wikipedia. You can download a trillion words, or maybe 10 trillion words, online, because there are so many online documents. You collect all of these documents, and you train a gigantic model on top of them. These models are very, very costly-- even training a single model would probably cost you $10 million, just for one run. But they are very powerful, because they can be used for many different purposes. In particular, here I'm talking about the breakthrough called GPT-3. You're going to hear this name pretty often-- not very often in the lectures, but pretty often in general in the next few years, I think-- or you have probably already heard of it. In the lectures, we're going to talk about this in one lecture. GPT-3 is this gigantic model, and it can do a lot of things. I'm taking this example from their own blog post: you can use GPT-3 to generate stories. Here, what happens is that a human-- some person-- writes this opening paragraph about something, I guess some mountains or valleys, and then the machine learning model just generates a story-- very coherent and meaningful text-- afterwards. If I didn't tell you these were generated by a machine, you probably wouldn't know; you would probably guess the text was written by some author. So that's one application-- one way to use the model to generate stories. You can also use the model to answer questions. Here, you give the model this long paragraph, and then you can ask-- it's kind of like an SAT question, or a GRE question; I'm not sure whether all of you know the GRE. These are just basic question-answering questions about the passage. You can ask, what is the most populous municipality in Finland-- this is information you can find in the passage-- and it answers the right thing. That's another application. And you can use it to do other things. For example, you can just write in the prompt, please unscramble the letters into a word and write that word; you give this to the model, and the model rearranges the letters to make a meaningful word. And you can ask, for example, simple numerical questions-- what is 95 times 45-- and it gives you the right answer.
So what's amazing about this is not that it can solve each of these tasks individually. The amazing thing is that you train on this gigantic, unlabeled data set without specifying what tasks you want to solve-- the only thing the model sees is this gigantic pile of data-- and then a single model can be used to solve multiple tasks just by interacting with it in different ways. If you want to solve this task, you just write it out in human-interpretable language; if you want to solve another task, you just phrase it slightly differently. The same model can be used to solve multiple tasks. And that's why we call them foundation models-- at least in a white paper written by Stanford people. In some sense, they are foundations: they can be used for a wide range of applications, sometimes without a lot of further changes. The model itself can do a lot of the work for many tasks. So I guess I'm supposed to stop at 4:30. Sorry-- OK, I still have some time. Any questions? Going back to the multiplication problem, I was just curious-- if they happen to incorporate the entire internet, how does the model do the math problem [INAUDIBLE]? Sorry, can you-- Yeah. Do you know what corpus was used for this? If it's the internet as a whole, is it the case that, for these simple math problems, how were they able to get it right? So if I understand the question correctly, one concern is whether this "95 times 45" already shows up in the corpus, and the model just memorized it. That's one possibility, but I think that's not the case. Of course, some of these numerical problems-- some multiplication problems-- show up in the corpus; you will find a document online about what 12 times 35 is. But I don't think you will find documents covering all pairs-- all multiplications of pairs of two-digit numbers. So there is some kind of extrapolation. Of course, you have to see something so that you can learn from it-- you have probably seen a lot of numerical operations and all kinds of mathematical formulas in the training corpus-- and then you extrapolate to other instances. You learn from some basic material, and then the model can output multiplications of, for example, longer numbers. Does that make sense? Does that answer the question? My other question is, how did they ensure there is no pollution in the corpus? Or is it that, regardless of the pollution, you have enough documents where the math is true? Right. So how do you make sure that-- by pollution, I guess you mean, how do you make sure that the training corpus doesn't already contain all pairs of double digits, right? I think they do run some tests to check that. Of course, you cannot be completely sure-- is that what you mean by pollution? Or do you mean something wrong in the-- False information. False information. [INAUDIBLE] OK, so that's a great question. Abstractly speaking, the question is about how you make sure your training corpus doesn't contain wrong information. I think there definitely is wrong information in the corpus. But what happens is that there is probably more correct information than wrong information, and the model somehow reconciles between them and picks up the right thing. That's largely what's going on.
But of course, if you are very targeted about it-- there's an area called data poisoning, where you specifically change your training data in some special way-- actually, you change only a small number of training examples-- so that your model learns something completely wrong. That's actually possible, but it requires an adversarial change of the training corpus. On one side, this is a very bad thing, because if someone does something out of line and you use those documents to train your model, that's a huge risk. On the other hand, because it has to be adversarial, at least right now this kind of adversarial poisoning is not happening very often, just because it's not very easy to pull off. OK. Any other questions? OK, cool. Yeah, these are great questions-- I'd like to have more questions; that's great. So the last part is about reinforcement learning. This will consist of probably two or three lectures at the end of the course. The main idea of reinforcement learning is that the tasks, roughly speaking, are about learning to make sequential decisions. There are two parts to that. One is that you are making decisions. Before, in both supervised and unsupervised learning, you were in some sense making predictions-- at least in supervised learning, it's pretty clear: you are predicting y's. But here, we are talking about decisions. What's the difference between decisions and predictions? Decisions have long-term consequences. For example, if you play chess, you make some move, and that move will affect the future, so you have to think about long-term ramifications. And also, these are sequential decisions: you're going to take a sequence of steps, and when you take the first step, you have to consider how this step will change the game and what happens in the future. So these reinforcement learning algorithms are mostly trying to solve problems where you have to make a sequence of decisions. For example, when you play Go-- you've probably heard of AlphaGo. Another example is that you want to train a robot. If you want to control a robot, you have to take a sequence of decisions: how do you change the joints, how do you control them-- there are always multiple things you can control on a robot-- and how do you control all of them in a sequential way? Here, I'm showing this in a simulation environment. This is a so-called humanoid, a robot that imitates a human. You can control a lot of joints in this robot, and your goal is to make the robot walk to the right as fast as possible. This is what happens as the reinforcement learning algorithm learns. It's trial and error, to some extent. What you do is first try some actions-- you first try to do something like this-- and then you figure out that it didn't work well; it falls. Then you go back and change your strategy in some way: I know this strategy is not going to work, so I'm going to try some other strategy. And maybe I know some strategy is partially working, because at least the humanoid is doing something-- it does walk to the right for one step; it just didn't keep its balance. So some part is good, some part is bad. Then you go back to change your strategy, and then you can probably walk a little further-- something like this.
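In code form, the trial-and-error loop just described might look roughly like the sketch below. The env and policy objects are hypothetical placeholders standing in for an environment and a strategy; this is not the actual humanoid simulator or any specific algorithm from the course.

def run_episode(env, policy):
    """Try the current strategy once and record what happened."""
    trajectory, total_reward = [], 0.0
    state = env.reset()
    done = False
    while not done:
        action = policy.act(state)                   # decide
        next_state, reward, done = env.step(action)  # see the consequence
        trajectory.append((state, action, reward))
        total_reward += reward
        state = next_state
    return trajectory, total_reward

def train(env, policy, num_iterations=200):
    """Alternate between collecting data and improving the strategy."""
    for it in range(num_iterations):
        trajectory, total_reward = run_episode(env, policy)  # collect feedback
        policy.update(trajectory)                            # improve strategy
        print(f"iteration {it}: return {total_reward:.1f}")

Each "iteration" of the humanoid video shown next corresponds, roughly, to rounds of this collect-and-update loop.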
And then I guess I'm going to fast forward to iteration 80. I think 80 works. I forgot whether I have-- oh. Actually, 80 still doesn't work perfectly. You can see, he's walking in a weird way. And I think iteration 210 is-- it can keep walking. But still, it doesn't sound very natural. You shouldn't expect that the humanoid walk as naturally as humans, partly because there are many different things. So maybe, for the robot, this is the optimal strategy. That's possible. But, of course, I don't think it's the optimal strategy. But it's possible that an optimal strategy for the robot is not the same as the optimal strategy for us. And generally, as I alluded to, the very high-level idea of reinforcement algorithm is that you have this loop between training and data collection. So before, in supervised learning and unsupervised learning, we always have a data set, where someone give you a data set and that's all you have. You cannot say, OK, give me more examples of the house prices. So you have to work with what you are given. But here, in the reinforced learning formulation, you often can collect data interactively. I see some question. Is that a question? Sorry. So here, you often can collect data interactively. So meaning that you-- for example, in the humanoid example, you try some strategy and you see that humanoid falls. Then that's the data you see additionally, right? So then, you can incorporate that new data back to your training algorithm and then change your strategy, right? So you have this kind of loop where, on one side, you try the strategy and collect feedbacks. On the other side, you improve your strategy based on the new feedback. So in some sense, you have a data set that is growing over time. The longer you try, the more data points you're going to see. And that will help you to learn better and better. OK, so that's my last slides about reinforcement learning. Any questions? Does the feedback happen after each step of the decision, or at [INAUDIBLE]? Oh. Sorry? Oh. Or after the integration is complete, then after-- Right. So is the feedback seen after each step of decision, or is it after something else? So there are many different formulations. This is a great question. So the most typical formulation is that you see the feedback right after the decision you make. But sometimes, it's not realistic. For example-- let me see. What other examples? So I'm blanking on what are the best examples to show. But in some cases, you don't have the feedback right after. And sometimes, even, you have the feedback right after the decision. You cannot change your strategy right after the decision so just because, for example, there is a computational limit or you have to really do something physical to change your strategy on a humanoid. Or maybe there's some communication constraints. So there are multiple different formulations in reinforcement learning. I think if you have a delayed reward, I think that's called a delayed reward problem. And sometimes, we also have this so-called deployment round, in the sense that-- so this notion of a number of deployment means that you can only update your strategy for, for example, five times. So you cannot just constantly change your strategy. And then, you can ask this question-- what's the best way to do this? One other example is that-- suppose you are using reinforced learning to control a nuclear plant, right? 
So you probably don't want to just keep telling-- you run an algorithm, and the algorithm keeps telling the nuclear plant to change their strategy to control them every day. That sounds risky and also kind of inefficient. There are many problems with it. So probably, you are going to say that I have to do some experiments for a little bit, for six months, and then I figure out one strategy that I almost can guarantee-- I can guarantee that this new strategy is working better than the old one. And then, I deploy it and then collect some new feedback. This is a great question. And another thing I would like to mention is that, in many of these problems, they are multiple criterion. For example, with reinforcement learning, if you want to control the nuclear plant, there is a safety concern. So then, you have to care about whether your strategy is safe or not. But for the humanoid, probably, it's fine for the humanoid to fall down, to some extent. But still, you cannot really let it fall down so often because it will hurt your hardware. So in unsupervised learning, there are other constraints. For example, there are constraints about how long is the training time? That's the typical metric. And there is also a constraint about how kind of powerful or how kind of multipurpose these models are, right? How likely they can solve multiple tasks. So eventually, this is a very-- especially if you look at the research community, there are different people care about different metrics, just because all of these metrics have their own applications. So the real kind of scenario is much more complicated than this, in some sense. There are a few other lectures here, about other topics in the course, which are actually in between some of these big topics. So one of the topics we are going to spend two lectures on is deep learning basics. If you heard of the word-- so maybe some of you have heard of it. So deep learning is the technique of using the so-called neural networks as your model parameterization. So this can be used together with all of these tasks, right? It's like a technique that can be used in reinforced learning, that can be used in supervised learning, unsupervised learning, and in many other situations. And this is something that is very important because-- because of deep learning taking off around last seven years, we see this tremendous progress of machine learning-- because of these techniques, a lot of things are enabled by these deep learning techniques. And we're going to also discuss a little bit about learning theory, just for one or two lectures. So in some sense, actually, we don't really talk that much about the core theory. In some sense, the goal here is to understand some of the trade offs of some of the decisions that you should do when you train the algorithms. So what's the best way to select features? What's the best way to make your test error as small as possible? And also, we're going to have a lecture on how do you really use some of these insights to tune an ML model in practice. As the algorithm implementer, what kind of decisions do you have to pay attention to? So on and so forth. So I guess we're going to have a guest lecture on the broader aspects of machine learning, especially robustness and fairness. I guess machine learning has a lot of societal impact, especially because machine learning, now, is working. You can really use it in practice, and it will create some kind of societal issues. Actually, a lot of societal issues. 
And these are things that we should pay attention to. I'm not an expert in this area. We're going to have a guest lecturer-- James Zou, who works a lot on this-- to talk about fairness and robustness of machine learning models. OK, I guess this is all I want to say for today.
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_10_Generalization_bounds_for_deep_nets.txt
So last time we talked about covering numbers. So the covering number is an upper bound for the Rademacher complexity. And then our goal is to bound covering numbers because this is a new tool for bounding the Rademacher complexity. And we have discussed what the bounds are for linear models-- I didn't show any of the proofs, but there are some existing bounds which are 20 years old actually. And then we also talked about the Lipschitz composition lemma for covering numbers, which is much easier than the corresponding lemma for Rademacher complexity. So basically, if you know a function class has good covering number bounds and then you compose it with a Lipschitz function, then you still have a reasonable covering number bound. So that's the general idea. And then, today we're going to talk about deep neural networks. And we are going to use some of these tools because you can see that a deep net is actually composed of multiple linear models with some Lipschitz functions, right-- the activations. So this is the goal of this lecture. So let me set up-- actually sorry, give me one moment. I think I probably have to change the mask because I'm always having the fog. I don't know what happens with this mask. Let's change one. Maybe there's some deficiency with the mask. OK, let's continue. So we have a neural network. So the setup is that we have some neural network that's called h theta. Theta is used to denote a set of parameters and we have r layers. So the network looks like this. So the last layer we don't have any activation, and then you have some activation in the next layer, layer r minus 1. Something like this. So basically if you do the ordering of the math formula-- so you first multiply x with W1 and then you pass through a nonlinearity and then you multiply W2-- and you do this so on and so forth, and you have r layers. This is the network. So there are r layers and the Wi are the weights. And the kind of bound that we're going to talk about is that-- so here is the theorem. Assume the xi 2 norm is less than c, and consider the family of networks h theta with some norm control of the weights. So we constrain the operator norm of the weights Wi to be at most kappa i, and we constrain the 2,1 norm of Wi transpose to be at most bi. And then suppose you constrain your function class like this, then the Rademacher complexity will be less than-- up to a constant factor-- c over square root n times the product of the kappa i times a sum-- this is a complex formula, let me explain it in a moment-- a sum over i from 1 to r, raised to the power 3 over 2. Alternatively, as a corollary-- I guess this is not necessarily that formal because you have to talk about what exactly this is, you have to have some failure probabilities-- but roughly speaking, you are saying that the generalization error is less than O tilde of 1 over the margin times 1 over square root of n times c times the product of the operator norms times the [INAUDIBLE]. I guess here I'm using the-- times the norm. This is a little bit-- anyway, so basically the important thing here is that the complexity measure depends on a few things. One thing is the operator norm of the weight matrices. And it depends on the operator norms as a product. So the product of the operator norms of all the weights shows up in the complexity term. And also there is this term, which basically you can think of as a polynomial in the kappa i and bi.
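Since this bound is hard to follow when read aloud, here it is written out. This is my reconstruction of the statement being described-- a spectral-norm-times-(2,1)-norm bound-- so treat the exact constants and exponents as approximate rather than as the official statement from the lecture notes.

\[
\mathrm{Rad}_n(\mathcal{F}) \;\lesssim\; \frac{c}{\sqrt{n}}\,\Bigl(\prod_{i=1}^{r}\kappa_i\Bigr)\Bigl(\sum_{i=1}^{r}\frac{b_i^{2/3}}{\kappa_i^{2/3}}\Bigr)^{3/2},
\qquad
\mathcal{F} = \bigl\{\, h_\theta : \|W_i\|_{\mathrm{op}} \le \kappa_i,\ \|W_i^\top\|_{2,1} \le b_i \text{ for all } i \,\bigr\},\quad \|x_i\|_2 \le c,
\]

and the corollary is that, up to failure probabilities, the margin-based generalization error is roughly this quantity multiplied by one over the margin.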
So this you can in some sense think of this as a polynomial of kappa i and bi, which is not really important. So as long as it's polynomial for us, it's not that important because the product of the operator norm probably will be the dominating term. And the polynomial in bi the kappa i probably are somewhat-- and it will be relatively small. So we don't necessarily have to care about exactly what this 2/3 means. Actually, they don't have any special meaning. It's really just something that comes out of the proof. But as long there are polynomials, we are relatively happy with it. And so basically this is the important term. And this term, if you look at the bound, it comes from the Lipschitzness of the model. So kappa i is the bound on the Lipschitzness of a single layer. And the product of kappa i is the bound on Lipschitzness of the product of all layers. So without any details, I think this term, you can imagine this comes from some Lipschitzness composition, some use of Lipschitz composition. What is the thing right above the x in the expression [INAUDIBLE]? This is assumption? Sorry. Just the symbol that you wrote right above the x. This is i. Oh, you assume that it's true for every i. I think this can be relaxed a little bit. But, again, it's not very important. So you can maybe relax it to be the average of xi is less than c. It's not super important. What is the operator norm? Oh, right, so that's a-- yeah, sorry. So the operator norm is the-- I guess maybe I didn't-- so this is also the spectral norm, also the largest single-- so this is the spectral norm of the largest singular value, if any of this makes sense to you. And also the formal definition is just that the max over x 2. So I guess I called operating norm just because this is the-- if you think about w as operator, then this is saying that, how does this operator change your norm, right? So if you give it a 2 norm vector, then how does it change the norm? Yeah. So, OK, cool. So and you can see that this is kind of like a-- Lipschitzness, this is also-- maybe I should expand this a little bit. So this is also about the Lipschitzness of this, the linear model wx, right, because if you care about the Lipschitzness, what you have to verify? You have to verify that wx minus wy is less than some constant times x minus y. And what that constant should be, so if you prove inequality, then you're going to get the operator norm, a spectral norm of w there. So that's why this corresponds to the Lipschitzness of the linear model. Any other questions? OK, cool, so by the way, I haven't got any questions from Zoom for a long time. So you should feel free to ask questions. You don't have to, but of course feel free to unmute yourself. OK, so how do we prove this? So the fundamental idea-- yeah, so in the next 30 minutes, we're going to talk about this proof. The fundamental idea is that you somewhat cover function set iteratively, so cover this set of functions f iteratively. And iteratively means that you cover more and more layers gradually. And how do you do this iteratively? You have to use the Lipschitzness and sometimes the Lipschitz composition lemma that we have discussed. And also you want to also control-- and also controlling how the error propagates. So that's a high-level summary. It's kind of abstract, but let me tap into the details. So for simplicity, let's also kind of try to abstractify-- so for each layer of f, f as fi. 
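Before going on, just to pin down in symbols the definition that came up in the question: the operator norm (spectral norm) and the Lipschitz bound it gives for the linear map are

\[
\|W\|_{\mathrm{op}} \;=\; \max_{\|x\|_2 \le 1} \|Wx\|_2,
\qquad
\|Wx - Wy\|_2 \;\le\; \|W\|_{\mathrm{op}}\,\|x - y\|_2 .
\]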
So, basically, fi corresponds to a linear multiplication, a matrix multiplication plus an activation layer. All right, so this is a one layer. And then you can write f. Then you can consider f as this composition of fr with fr minus 1, so and so forth, right? So basically for every layer you have certain choices. You can choose your weight matrix. And then you compose all of this function class. By this composition I guess we have used this notation multiple times. This is really just means that you are looking at fr composed with fr minus 1 composed f1, where each fi is from the family of capital Fi. So this abstraction will allow us to have much cleaner notations. But, fundamentally, you can usually just think of each of the fi's as a layer, right? And what we know is that-- so suppose maybe, let's say-- so for the sake of preparation, so suppose for every f in fi, fi-- in f beta fi in capital Fi, fi is kappa i Lipschitz. This is actually the case for us because we restricted the spectral norm of-- or the operating norm of each of the wi's to be less than kappa i. That means that every layer is kappa i Lipschitz. And the ReLU is 1 Lipschitz. So even you compose with activation, it's still kappa i Lipschitz. So suppose each of these functions is kappa i Lipschitz. Then you know that fi(x). So these are just some preparations-- so 2 norm is less than kappa i x minus y 2 norm. And maybe that's just for simplicity suppose f is 0, is equal to 0. This is also the case, in the real case we care about where you have a neural network. And also let's suppose that xi is less than C. This is also our assumption. So then with all of this, then you know that-- we know a bunch of basic things. So, for example, we know that you can bound, what's this? What's the multilayer application of xi. What's the normal-- what's the boundary on the norm here. So the boundary on the norm can be bounded by-- each time you at most capture a kappa i factor. So you get kappa i times kappa i minus 1 times kappa i minus 2, so and so forth, times kappa 1 times c, right? And we call this ci. And let's define this to be equals to Ci. So basically this is some basic kind of preparation. So under this abstraction, you know some bound on each of the layer. And you know each of the layer is Lipschitz. And what I'm going to do is that we're going to do two things, so for two steps. So, first, you control the cover number of each layer, of each layer. And second, you have a combination lemma, you compose this, like combine them together. So you have a lemma that turns each layer, so you turn each of the layers. So you have a lemma that turns single-layer covering the number bound to multiple layer, multiple layers. And I think number two is the-- number one is kind of easy because for number 1 this is just a linear model composed with Lipschitz activation. By Lipschitz activation, you can just invoke on what we have discussed last time. So, basically, the important thing is that, how do you turn a single-layer covering number bound into multiple layer covering number bound? That's basically the main thing I'm going to discuss. So let's call this-- there is a lemma that does this. So under the assumption setup above, and kind of the relatively abstract setup above, so assume that-- suppose you assume for every inputs with l2 norm less than ci minus 1-- so these inputs are used to define the-- used to define Pn, right, and L Pn, the metric L2 Pn. So this is the inputs vary for which we are evaluating your covering number. 
So to define covering number, you have to define the metric, define which empirical inputs you're evaluating on. All right, so I'm assuming that for every input of this norm constraint you have a covering number. You have a covering number bound. So you know that epsilon i fi L2 Pn is less than some function of this, and Ci minus 1-- some function of the norm and some function of the target radius. So this is just assumption. This is assuming that-- so basically this is assuming that you have a single-layer bound, single-layer bound. So suppose you have a single-layer covering number bound of this form. And you do have this bound, it's just I didn't give you the exact formula, right? So if you instantiate on a linear model, you are going to get something like this. This will be something like ci minus 1 squared over epsilon square, the norm of the input squared over epsilon squared, right? So that would be what happens when you have linear models. But suppose you have this single-layer covering number bound, then the conclusion is that you can turn this into a multilayer covering number bound. And the form of this translation is not very clean. But it's like this. So there exists the epsilon cover of Fr composed up to F1 for epsilon is equal to the following thing. [INAUDIBLE] Sorry, one moment, let me finish, OK. What's the symbol on the right? It's just above epsilon i and ci minus 1? Sorry, can you say that again? In any expression of y, it is less than something? Sure. What's that symbol? This is g. Epsilon plus g. Yeah, so I'm assuming a generic thing here. But actually you can-- this is for the abstraction. When you really use it for linear models, it's going to be something like ci minus 1 squared over epsilon y squared. So this is g. So there is exists an epsilon cover such that-- with this size such that the log size is bounded by-- the log size of this cover is bounded by sum of g of epsilon i ci minus 1 and i from 1 to r. So basically if you have a log covering number bound of this form for every layer, then you can have a log covering number bound for this thing. And the bound, just the log covering number just add up a sum. But the tricky thing is that it's not like-- the cover size also grows. So the cover size also adds up in some way, which is a little bit kind of complicated. So, basically, your cover size is like multiplied, it's added in some way where you also modify some of this kappas, which are Lipschitzness in some sense. And your covering number is also added in somewhere. So this is the fundamental mechanism for us to turn a single-layer bound to multiple layer bound. Of course I'm going to use this in some way at the end so that we get a final result because you have to choose what epsilon i's are, right? So eventually what you do is that you are going to choose epsilon i's so that you get the desired target radius. And you work out what exactly this formula should be for that particular choice of epsilon. Does that make sense so far? But before doing that, I'm going to first prove this lemma. And then I'm going to do the derivations. So after you have this, this is the core. After that, this is just a choose parameter. So you just choose epsilons in some way that is in favor of you and work out what is the final bound. OK. And in some sense, the interpretation of this lemma is that you somehow-- you can add up the covering number bound, the log covering number bound in this way, as long as you pay some additional radius. OK. 
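Written out, my reading of the lemma is the following. Assume each layer satisfies a single-layer covering bound of the assumed form: for inputs with norm at most \(c_{i-1}\), \(\log N(\epsilon_i, \mathcal{F}_i, L_2(P_n)) \le g(\epsilon_i, c_{i-1})\). Then for any radii \(\epsilon_1, \dots, \epsilon_r\) there exists an \(\epsilon\)-cover of \(\mathcal{F}_r \circ \cdots \circ \mathcal{F}_1\) with

\[
\epsilon \;=\; \sum_{i=1}^{r} \epsilon_i\,\kappa_{i+1}\kappa_{i+2}\cdots\kappa_r,
\qquad
\log(\text{size of the cover}) \;\le\; \sum_{i=1}^{r} g(\epsilon_i, c_{i-1}).
\]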
So this proof is, in some sense, actually pretty simple. But the exposition is a little bit challenging. So the fundamental idea is the following. So we start with this data point. We start with this concatenation of n data points, right? So you have n data points. And you map these n data points to a set of points, right? This is the Q that we talk about. I think I need to draw this in a good way so that I have more space. So let's start with-- you start with n points. And you map these n points into a vector of dimension n, or maybe actually it's a matrix of dimension n. So you map this to some space. And each of these points here is the concatenation of f x1 up to f xn. And this is the so-called Q, the set Q, right, that we have to cover, right? And you can use multiple functions f-- you can use any function f1 in capital F1 to map to a different point. If you choose different f1's, you're going to map to a different point. And if you just have one layer, what you're going to do is that you're going to cover this set Q, right? That's what we do for the covering number for one family of functions F1, right? So then what you do is that you-- I'm just basically reviewing what we have done for covering numbers for one family of functions. You create these kinds of bubbles, so that covers it. So basically you create these centers. And these are the points in the cover-- maybe that's called c1. So let's say c1 is an epsilon 1 cover of F1. This is what that means. And now we are going to see, how do we turn this into a cover for f2 composed with f1? So that's the job we are trying to do. And what's really going on here is that for every point here in the output space of-- so this is-- maybe let's call this Q1, which is the output space of-- maybe let's call this thing capital X. So then Q1 is the family of outputs where the function has to be chosen from F1, right? So what happens if you add another layer? So what happens is that for every point in Q1, you can apply multiple different functions, right? For any function little f2 in capital F2, you can apply it to map to a new point in the new space, to map it to a new point. So for every point here, you get a bunch of possible outputs. And for every point here, you get another bunch of possible outputs, all right? So each of these new points could be your image after applying two layers, right? So now we're trying to cover this new set of outputs, Q2 let's say. And how do we cover it? So the approach that we are going to take is, in some sense, pretty brute force. What you do is you say you want to leverage the existing cover for capital F1 in some way. So what you do is you say, you look at a center here in c1. And you look at what the images of this point are after applying a second layer. So you get something like this, all right? So this is the set of images, the outputs of this point. So maybe let's say, suppose this point is called f1 of x-- let's call it f1 prime of x-- which is in c1. And then you look at all the outputs from f1 prime x. So you get this family of points where you apply f2 on f1 prime x, and where f2 can be chosen arbitrarily from capital F2. And now what we do is that we cover this set by a new epsilon cover. So what you do is you say, I'm going to cover this with a bunch of things. And what does that mean?
That really means that you choose a subset of capital F2 and cover-- because here you are ranging over all possible functions in F2. So if you're going to choose a cover point, you just say I'm going to drop some of them. I choose a subset, a discretization of capital F2. So that's basically the approach. And you do this-- and then you do this for every possible point in c1 and cover them. So basically suppose you have another point in c1 here. And then you look at all of these images and you do another cover. And you do this for every possible point in c1. And every possible point in c1 induces a set. And that set can induce a cover. And then you take the union of all of these balls-- and the union of all of these red bubbles becomes a cover for Q2. For example, suppose you have, let's say, f1 prime prime of X here, maybe. I should use a consistent color. Let's say you have an f1 prime prime of X. And this is mapped to this set of points here. This is the set of all f2 of f1 prime prime of X, where f2 is in capital F2. And then you create a cover for this set, so that you discretize F2 again. And you take the union of all of these red covers, of these red bubbles, as your cover for Q2. So any questions so far? So formally, what we do is the following. So epsilon 1 up to epsilon r are the radii for each layer. These are TBD-- well, I guess in this lemma they are not TBD, they are just already given to you. Eventually you'll choose some numbers for them. And then what you do is that c1 is the epsilon 1 cover of F1. That's easy. And then you say that for every f1 prime in c1, you construct this c2, a cover in the second space. But this cover depends on f1 prime. So you epsilon 2 cover the set of f2 composed with f1 prime, which is what I wrote above, where this is f2 of f1 prime of capital X, and f2 is ranging over capital F2, all right? So for every set like this-- this set is really literally these blue things I drew here, like this blue set-- I choose a cover. And I denote that cover to be c2 of f1 prime, because this cover depends on f1 prime. And then I'm going to take the union. So I'm going to let c2 be the union of all of the c2 of f1 prime, where f1 prime is in c1. So this is how I construct the cover for the second layer. So this is supposed to be a cover for capital F2 composed with capital F1. OK, any questions so far? So there are several questions. So one question is, how good this cover is, right, that's one thing. And the other thing is how large this cover is. So the size of this cover is relatively easy to compute, because you are basically just blowing up the size multiplicatively, because for every one in c1 you create this cover. So you just basically multiply the covering numbers together, in some sense. And that's easy because, formally, what you can do is that you can say c2 of f1 prime-- the log of this is going to be bounded by g of epsilon 2, c1. I misspoke a couple of times there, but in my notation it's g of epsilon 2 and c1. This is my assumption, because my assumption is that as long as your input is bounded by c1 and your cover radius is epsilon 2, you have this bound, all right?
So that means that the size of c2 is bounded by the size of c1 times this exponential of this g of epsilon 2, c1 because for every point in c1 you have a bound for that corresponding set. So then you just multiply by c1. And that means the log of c2 is less than the log of c1 plus g of epsilon 2, c1, which is equals to g of epsilon 1, c0 plus g of epsilon 2, c1. Actually, I forgot to define c0. c0, just for convenience let's define, my bad, so define c0 to be just the c, the bound on input. So ci's are the bounds on the layers, the activation layers. And c0 is the bound on the input. OK. So basically, the size is added up. The log size is added up. That's easy. And we're going to deal with the covering, how does the covering works at the end. So before doing the covering, completing the covering radius, let's define how to proceed with more layers. So, similarly, for given ck, suppose you have covered k layers. Then now you're constructing a cover for the k plus 1 layer. So what you do is say that, so for any fk prime composed with fk minus 1 prime, f1 prime in ck, you construct some ck plus 1, which is a function of this fk up to f1 prime so that epsilon k plus 1 covers the set fk plus 1 composed with fk prime. So I knew like this ck, the final cover to be the union of all of these kind of sets. And, similarly, you can prove that the log of the ck plus 1 will be less than the sum of all the single-layer covers, epsilon k plus 1, ck plus up to g of epsilon 1 c0. All right, so I've shown you how to cover it. It's just an iterative cover that's kind of pretty brute force, in some sense. And now the question is, why this is a good cover? What's the radius? So basically when we answer the question, right, so for every fr composed up to f1, which belongs to this fr, this set, this is the set we want to cover. So you pick a function in the set. And you want to say that this can be represented by something in a cover with some small distances. So how does that work? So you first, let's say that you know there exists 1 prime in c1 such that rho of f1, f prime, this is less than epsilon 1. That's something you know because c1 is a cover, epsilon 1 cover of the capital f1. So now you have to say, let's say you try to pick something in c2 that can cover f2 composed with f1. How do you do that? You basically in some sense use the construction. You say that-- maybe I should draw this a little bit more. So the first thing is, suppose you have a function here, or you have a point here, which is f1 of x. So you cover it by this point. How do I do this-- as you cover this by this point, right? So now suppose you have a point in the second layer. Suppose you have a point somewhere here, which is the map which is computed from that f1, x, right? You apply some f2 to it. And what you do is you say that you first look at the neighbors in the first layer. So you got this point. And this point, you look at what's the neighboring-- what's the image of this point in the second layer, maybe something here. So I guess here you are applying-- I guess, let's assume you're applying f2 here. So you get f2 here. You use the same f2 on the cover and you get this point. And then after you get this point, you look at the neighbors in the right. So you got this one. So basically this will be the cover for the purple point. I'm not sure whether this makes sense. Sounds good? So, in other word-- more formally, so basically you say that-- so you want to say there exists a function in this cover, right, in this c2, f1 prime. 
So this is this one. This one is that right point I think. This is that right point. So it's in this cover such that rho of f2, f1 prime is closed to what? Is closed to this-- oh, the blue point. What's the blue point? The blue point is f2 composed with f1 prime. This is less than epsilon 2, all right. I guess maybe let's write this as f2 prime composed with f1 prime so that just to make it look-- that's also what my cover is doing. So suppose 0 of f2 prime composed with f1 prime. So your cover has this structure that you will first apply f1 in your cover, you use the f2 prime in a cover. So suppose-- what I'm seeing here-- OK, sorry, sorry, my bad. So you have this function in this cover such that-- so this is of the form, let's say, f2-- I want to make this too complicated. But I think let's say this is of the form f2 prime composed with f1 prime, which is in this cover. But this one actually implicitly depends on f1 prime as well. But let's ignore that notation. So you've got a rho of f2 prime composed with f1 prime, which is close to f2 composed with f1. But this point is not what you really want to cover because we want to cover f2 composed with f1. So what you care about is that rho of f2 prime composed with f prime, the difference between this and f2 composed with f1. So this is the thing you really care about. And you can see that there's still some differences because the differences come from that this is at 1 prime, but not at f1. So that's why you do a triangle inequality. You say that the target is less than rho of f2 prime composed with f1 prime. So use this as the intermediate term, right? So this one is less than epsilon 2. And you are left with this thing that where you only differ in the first layer, right? That's the difference. But this difference is kind of propagated. In some sense, if you look at this figure, this figure is a little bit kind of tricky. So this is the difference in the first layer. But once you apply this f2, right, so you have a bigger difference. So this is the difference in the second layer. And this difference can be like a blown up a little bit because even though you apply the same function, you may blow up the differences a little bit. So that's why you have to use the Lipschitzness to say that this is less than epsilon 2 plus kappa 2 times epsilon 1. kappa 2 times rho of f1 prime, f1. And this is less than epsilon 2 plus kappa 2 times epsilon 1. That's how you bound the covering, the radius for the second layer. Any questions? And then you can similarly do all of this for k. So there exists a function fk prime, which depends on f1 prime up to fk minus 1 prime. And in this set ck, let's write this as fk prime composed with k minus 1 prime composed with f1 prime, and such that this is a cover, such that the distance is less than epsilon k, less than epsilon k. That's the definition of the cover. And then you have to see why this is a good thing for the original one. Recall that this is not actually what you really care about. You care about the fk composed with fk minus 1 up to f1. You don't care about the prime. So you care about this. This is the thing that you really care about. It also shows this is small. And how you do this? You expand this into multiple terms. So you say that this is less than-- so I guess you-- the first thing is you first compare with-- this kind of telescoping sum is pretty actually useful in many cases. You first compare with this. And then you compare-- you just gradually peel off more and more terms. 
And these have fewer and fewer primes here. Until, finally-- eventually, you get the thing you care about, where there is no prime at all. So this is just the triangle inequality. And now, you bound each of these terms. The first term, by definition, is less than epsilon K. And for the second term, you see that these two are the same, and this part is also the same, and the only difference comes from the difference between f prime K minus 1 and f K minus 1. So because of the cover, that gives you epsilon K minus 1. And then you also have to blow up a little bit because of the fK composed on top of it. So you also have to pay the Lipschitzness of fK, which is kappa K. Sorry, my K and kappa look almost the same. So then you have epsilon K minus 2 times kappa K minus 1 times kappa K, so on and so forth, until, in the last term, the only difference comes from the first layer. So you pay epsilon 1, because f1 prime is from an epsilon 1 cover. And then you pay a lot of Lipschitzness, like kappa K times kappa K minus 1, up to kappa 2. And if you take K to be r, then you get the eventual thing, right? So the eventual statement is that your radius for the final covering is something like-- where is it? The radius for the final covering is something like this, right? So that's eventually what you got. Any questions? I'm still a little unsatisfied with having to add epsilon [INAUDIBLE] to our examples. [INAUDIBLE] which is commonly assumed. And then we're mapping the [INAUDIBLE]. The sets that we've covered. For example, why isn't epsilon 1 times-- appears to be kappa 2, kappa 3, kappa 4 up to kappa k. Why won't that cover it? Right. So I guess the question is, why don't you only require this term in the proof? This is because-- no. Suppose-- let me try whether this works. Suppose your function class is just F1 composed with fixed functions-- maybe it's the other way around. So F1 composed with a fixed function called f2, maybe, composed with f3, so on and so forth, up to fr. And all of these are fixed. Then you only have this term. But you also have to cover the possibilities for the second layer and the third layer, so on and so forth. So that's why you have to pay the other things. OK, cool. So, now, we are done with this lemma. And now let's go back to the proof of the theorem. And the proof of the theorem, as I kind of alluded to before, is pretty much just a kind of annoying calculation, in some sense. There is a way to do the calculation in a simpler way, but I'm going to first show you a zero knowledge proof. So, basically, I'm just going to tell you that I'm going to choose my epsilons to be this, and it just works out. And then I'm going to show you some way to kind of-- at least what I would do with this. If I write a paper, I'm going to show you the first proof, which is just choosing some epsilon i. So let's start with that. So, basically, everything is about choosing epsilon i, right? So you first-- of course, you first know that this g is equal to O tilde of ci minus 1 squared, bi squared over epsilon i squared, because this is a linear model composed with a 1-Lipschitz function, right? So recall that each of the Fi is a linear model composed with a fixed 1-Lipschitz function.
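For reference, the single-layer bound being plugged in here-- the linear-model covering bound from last lecture, composed with a fixed 1-Lipschitz activation-- reads, up to logarithmic factors,

\[
g(\epsilon_i, c_{i-1}) \;=\; \log N\bigl(\epsilon_i, \mathcal{F}_i, L_2(P_n)\bigr) \;\le\; \tilde{O}\!\left(\frac{c_{i-1}^2\, b_i^2}{\epsilon_i^2}\right).
\]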
And for the linear model, the log covering number is supposed to be something like the norm of the input times the norm of the parameter-- and bi is the Wi transpose 2,1 norm, right, so that's the norm of the parameter-- divided by the radius. This is what we have shown last time. Like, we didn't prove this, but this is the lemma we had last time about the log covering number of linear models, right? So we plug this in, and then, basically-- so, basically, you have two quantities. So one is the log covering size, which is the sum of ci minus 1 squared, bi squared over epsilon i squared. And also, you have another thing, which is the radius, which is the sum of epsilon i times kappa i plus 1, up to kappa r, with i from 1 to r. Right. So you basically have these two things that you want to trade off. You want to find the best dependencies between them. You want to make the log cover size depend on the radius as well as possible. So you just choose some epsilon i. And so you care about the best kind of trade off of all the dependencies. So this is epsilon. So what you should do is that you should say-- I guess, if I give you a zero knowledge proof, I'm going to choose epsilon i to be ci minus 1 squared, bi squared over kappa i plus 1 up to kappa r, to the 1/3, times epsilon, over the sum of bi 2/3 over kappa i 2/3, times the product of the kappa i 2/3. All right. So if I choose epsilon i to be this, then I will claim that the sum of epsilon i times kappa i plus 1 up to kappa r-- this will indeed be equal to epsilon. And why is that? I'm going to do the derivation for you, but I don't feel like you should really need to verify it on the fly, or you don't necessarily have to verify it later. But just for the sake of completeness, let me do the calculation. This will be-- you just plug in epsilon i here. So you get, I think, ci minus 1 2/3, bi 2/3. These come from these two terms. And then there's something about this and this. And also this thing, right? So you can organize those things into-- I guess I'm treating these as constants for the moment in this derivation. So I get this multiplied by this. We've got these 2/3 powers. By the way, if you don't want to verify this, just maybe bear with me for a second. All right. Epsilon. I guess, one other thing is that ci is also a function of kappa i, because recall that ci is the norm bound for the layers, which depends on kappa i. One question? Oh, yes. So the i in the sum and the product there, that's different from the i in epsilon i [INAUDIBLE]? The i here? In the [INAUDIBLE], yeah. This is the same i. Sorry. Yes, you're right. You probably should use a different index just for the sake of-- yes, I think you might be right. This is probably-- you know what I mean? Ideally, you probably need to use a j just for the sake of completeness. So, yeah, but this one, you average out this part. After doing the sum and the product, the i is gone in the second part. So, anyway, let me do this tedious thing. So recall that ci is the norm bound, and ci is defined to be-- I think ci is defined to be some product of the kappa i. And so, I guess, let's put bi in front. And then you've got-- ci is kappa 1 2/3 up to kappa i minus 1 2/3. This corresponds to ci minus 1. And then you get kappa i plus 1 2/3. That's from here. This is kappa i plus 1. Sorry. And then, you still multiply the same thing here. And then you simplify the first sum to-- I guess, you can see that the only missing term is kappa i.
So this is equal to bi 2/3 over kappa i 2/3 times the product of the kappa i 2/3, and from 1 to r, and times this thing. And now, let's deal with this thing. You can see that this one cancels with this one, and this one cancels with this one. So you really get something equal to epsilon. And the log-covering size is equal to-- what I'm doing here is equal to this. Basically, the sum of-- OK, let's first write the trivial thing. This is ci minus 1 squared, bi squared over epsilon i squared. And you plug in epsilon i here. So you get this gigantic thing. Maybe let's call this thing z. So you've got 1 over z squared times this-- the sum of ci minus 1 squared, bi squared, and you plug in this to the minus 2. So you get ci minus 1 squared bi squared to the minus 2/3, and then kappa i plus 1 up to kappa r to the 2/3. And there are some cancellations. So this will be equal to-- very sorry, I think I jumped a step in my notes. So, OK. I need to-- so sorry. I think you plug in the definition of ci minus 1. And you get bi 2/3 over kappa i 2/3, times the product of the kappa i 2/3. And now, you use the definition of z, which is this gigantic constant. And, eventually, I think you get-- let me not do that carefully. But, eventually, you get this, over epsilon squared. OK. So I guess maybe this is a good demonstration of why I shouldn't do this. Even after I verify this with my notes, which have almost all the steps, it is kind of tricky. But anyway, so before we talk about how to do this better, I guess let's first agree that this is done, right? Because now you see the log-covering size is bounded by something over epsilon squared, and that's what we wanted to have. And then you apply the tool from covering numbers to Rademacher complexity-- recall that if you have a log-covering number that is R over epsilon squared, then this means the Rademacher complexity is something like square root of R over n, right? So this is what we discussed last time. And if you apply this small tool, then you get that the Rademacher complexity will be this one-- the square root of this one, over square root of n. And then you are done. OK? So we are done. But I think I want to kind of share how to do this a little more easily, without going through all of this pain. This is a small trick. It's purely a mathematical trick. I don't know how many of you know it. Maybe you all know it, or maybe you all don't know it. But, anyway, let's talk about it. So, basically, the question is, you care about-- this is the question. You care about the trade off between these two. So what you could do is that, if you abstractify it, it's kind of like-- so abstractly speaking, this is about the trade off between something like-- maybe let's use some different symbols-- the sum of alpha i squared over epsilon i squared, versus the sum of beta i epsilon i. Something like this. That's kind of the game you are dealing with. And how do you do the trade off? So what you do is that you use this so-called Holder inequality. The Holder inequality-- there are many ways to write it. For example, you can write that the inner product of a and b is less than the p norm of a times the q norm of b, when p and q satisfy 1 over p plus 1 over q equals 1. And, for example-- you could also write it like this: the sum of ai bi is less than the sum of ai to the power p, to the 1 over p, times the sum of bi to the power q, to the 1 over q-- something like this. This is just exactly the same thing.
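Stated cleanly, the Holder inequality being used is: for nonnegative sequences and exponents \(p, q > 1\) with \(1/p + 1/q = 1\),

\[
\sum_i a_i b_i \;\le\; \Bigl(\sum_i a_i^{p}\Bigr)^{1/p}\Bigl(\sum_i b_i^{q}\Bigr)^{1/q},
\qquad\text{i.e.}\qquad \langle a, b\rangle \;\le\; \|a\|_p\,\|b\|_q .
\]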
And, I guess, when p is 2, this is the Cauchy-Schwarz inequality. And we need something slightly different. We need p is 3, or p is 3/2. Then you get the sum of ai cubed, to the 1/3, times the sum of bi to the 3/2, to the 2/3, which is larger than the sum of ai bi. So in some sense, all of these inequalities are trying to deal with-- they have this kind of form. And I guess maybe-- which one should I do first? Look, I'm not sure whether I've lost you. So I guess what eventually I want to do is the following. Maybe let me just give you an overview. Eventually, what I want to do is just this. And let's say that this, times the sum of beta i epsilon i, is larger than the sum of alpha i beta i to the 2/3, to the 3 over 2. So if I rather do this, then you kind of cancel out. So maybe-- sorry, maybe let me do this first. So we care about the sum of alpha i squared over epsilon i squared versus the sum of beta i epsilon i. And this is your epsilon, and this is your covering size. And what you can do is that you can say this times this squared-- just forget about this, just let me do it formally. So there's an inequality that shows that this is larger than the sum of alpha i beta i to the 2/3, to the 3/2. And this is essentially the Holder inequality. Let me justify this in a moment, but suppose you believe me on this. Then you say that-- and suppose you also believe that this is achievable. Suppose we believe that equality is achievable, which I will justify in a moment. So if equality is achievable, it means that there exist epsilon i's such that the sum of alpha i squared over epsilon i squared is equal to this quantity, to the 3/2, over the sum of beta i epsilon i, squared. And recall that this is your epsilon squared, and this is your log covering number. Now you get the log covering number-- so you get that you can choose epsilon i such that the log covering number is less than this, which is equal to this quantity over epsilon squared. And this quantity is what you are looking for. Right, this is the R thing that you are looking for, which is-- something like this. And you don't have to do any of this verification, right? That's it. And basically you just have to plug in alpha i and beta i, and you just verify. And that's it. So does that make sense? So basically you cancel out the epsilon i's. You try to find the best epsilon i by proving the best inequality. And you also want the inequality to be achievable. So, for example, another situation where this is useful is-- for example, you probably have seen this kind of form. Like you have a parameter eta. You have eta plus B over eta. You want something like this. And you can choose your eta arbitrarily. So how do you do it? Many people tell you that you just find the minimizing eta by taking a gradient, right? And then you find the minimum, right? That's fine. But my way to do it is just to prove that eta plus B over eta-- this is larger than 2 times square root of B. This is Cauchy-Schwarz, or AM-GM, whatever you call it. And this inequality is achievable. You can attain the equality. So basically, the best thing for this is 2 square root of B. Basically, if you know this equality is attainable, then you know there exists an eta such that eta plus B over eta is 2 square root of B, and then you get rid of eta. You get the best bound you want. So the same thing-- it's the same logic here as well, where you prove an inequality so that you can cancel out the parameter that you want to choose.
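To summarize the trick in symbols-- this is my paraphrase of the argument-- applying Holder with \(p = 3\), \(q = 3/2\) to \(u_i = (\alpha_i^2/\epsilon_i^2)^{1/3}\) and \(v_i = (\beta_i\epsilon_i)^{2/3}\) gives

\[
\Bigl(\sum_i \frac{\alpha_i^2}{\epsilon_i^2}\Bigr)^{1/3}\Bigl(\sum_i \beta_i\epsilon_i\Bigr)^{2/3}
\;\ge\; \sum_i (\alpha_i\beta_i)^{2/3},
\]

with equality attainable for a suitable choice of the \(\epsilon_i\). So if the log covering number is \(\sum_i \alpha_i^2/\epsilon_i^2\) and the radius is \(\epsilon = \sum_i \beta_i\epsilon_i\), the best achievable trade-off is

\[
\log N \;=\; \frac{\bigl(\sum_i (\alpha_i\beta_i)^{2/3}\bigr)^{3}}{\epsilon^{2}},
\]

which is the R that goes into the square-root-of-R-over-n Rademacher bound. The \(\eta + B/\eta \ge 2\sqrt{B}\) example is the same idea with Cauchy-Schwarz/AM-GM in place of Holder.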
And then-- and if that inequality can be attained as an equality, then you know that you are getting the best parameter. And you don't even necessarily have to compute what the epsilon i are. Of course, if you're writing a paper, probably, you still want to compute the epsilon i and do the zero knowledge proof, right? That's why all the papers show you these kinds of things. So because-- so how do I-- I would have to do a lot more argument to show that this-- but in your mind, you probably should do this in the latter version. The latter version-- at least this is what I do in my mind when I do any research like this, right? Because this is so fast, so that you can get a better estimate on what's the bound you can have, right? And in some sense, this is useful in many cases because one of the ways to make your theoretical research faster is to have a lot of modularized small steps which you can do very, very fast, right? So one of the ways I found that people can get into this very messy calculation is that-- in theory, if you prove something hard, right, you have to use a lot of pages. So your eventual product is something like a 20-page proof, or maybe more than that. Sometimes there are 70-page proofs, or 100 pages, right? So at least I think-- at least when I do those kinds of proofs, if I change one part of it, I never have to redo that 100-page calculation to know what the final outcome is. So basically, after a certain point, I already know that this part, maybe these two pages, are the most important thing. And I also know how this page translates to the final outcome. And I can already do those conversions very fast. I have a kind of very fast data structure in my head, so that I know that if this part can be improved by a factor of 2, then what does that mean for my final outcome? And that part is kind of like-- are they abstract enough so that you have this very fast conversion, and then you can iterate very fast. So the flip side is, this is opposed to another model, which is that if you change your proof in some part, you have to redo all the other parts. And that would be much slower. So this is about these kinds of tricks that I realized. So if you can do something small, like these kinds of abstract things, very fast, then you can iterate faster in your research. Anyway. So far, does it make sense? And if you really care about why this inequality is true-- why this inequality is true-- I think I was trying to justify why it's true, because you can just use the Holder inequality. I guess, if you apply the Holder inequality, you get something like this. And this is still not exactly like this, I think. So actually, this is exactly like this, right? Because you have to choose-- you can choose your ai to be-- you just say ai cubed maps to this, and bi maps to this. If you just want to verify, right? You just have to change it, right? So-- but again, if you have to verify this by matching, it is still too slow for me. So what I do is I also memorize other versions of this Holder inequality so that I can do it faster. I think the version I memorize in my mind is that-- at least one version of the Holder inequality in my mind that I memorize is this, which is something like the sum of ui squared, to the 1/3, times the sum of vi, to the 2/3. It's larger than the sum of ui vi to the 2/3. Something like this, which is even closer to here, right?
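The memorized version being described, written out-- it is just the same Holder inequality after substituting \(a_i = u_i^{2/3}\), \(b_i = v_i^{2/3}\)-- is

\[
\Bigl(\sum_i u_i^{2}\Bigr)^{1/3}\Bigl(\sum_i v_i\Bigr)^{2/3} \;\ge\; \sum_i (u_i v_i)^{2/3}
\qquad (u_i, v_i \ge 0).
\]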
Because in some sense-- and sometimes the way you memorize it is that if you have a bigger exponent here, then these two will go to the vi. How do I say this? So basically you put the-- so, why is this ui to the power of 3, right? Or why is this ui to the 2/3? This is because here you have a square inside, and then you have a 1/3 outside. And the reason why here you have vi to the power of 2/3 is because inside you have vi, the linear term, and on the outside you have the 2/3. So if you know that, and you know that if you have a square here, then you can cancel this epsilon i, because epsilon i will be squared and here you have epsilon i squared, so they can cancel each other. I'm not sure whether this makes any sense. It probably takes some practice. If you see this enough times, you know what kinds of inequalities you can use. Anyway, I guess I probably should wrap up this discussion. Any questions? OK. I think-- let's see. 10 minutes. OK, so I guess I'll use the next 10 minutes to motivate what I'm going to discuss next. Go ahead. What part is this from? This inequality? Yes. Because the inequality can be achieved. So that's why you know it's the best choice. Oh, you mean the final box? Yeah. OK, yeah. Maybe let me discuss that. I'll answer that in the next 10 minutes, yeah. Right. OK, so-- so basically, next, we're going to do something better than this. And actually, it turns out the proof is actually cleaner, to some extent-- in some sense, because it's capturing the right quantity. So next we're going to have generalization bounds that depend on the actual Lipschitzness. And I'm going to argue that the Lipschitzness we had before was only an upper bound, right? Before, we have these bounds, where you have essentially a dominating term times other terms, which are just a polynomial in the norms, which is not very important. And this one, this is only an upper bound on the Lipschitzness. All right, and it's a pretty worst case upper bound, because if you really want your network to achieve this Lipschitzness, you have to actually construct something that is somewhat special. And even though this worst case upper bound can be achieved in certain cases, still, the network you actually find is probably better than this, empirically. So that's why-- so basically, the high level goal is that we want to replace this product of the spectral norms by something that is more accurate. And there are several motivations to do this. So I guess one thing is that-- and this relates to the limitation of this bound. So one thing is that the wi operator norm has to be larger than 1, or you can even arguably say larger than square root 2, to make sure f of x is not too small, right? Why is this the case? This is because, if you look at every layer-- let's say hi is the i-th layer's activation. Then hi plus 1, the 2 norm of it, is the 2 norm of this, where you apply the next layer. And if you do a heuristic, you say, suppose you believe that this activation, this ReLU activation, kills half of the coordinates, all right? So it is 0 on half of the coordinates. Suppose you have that kind of behavior. Then it means that after a ReLU, your norm will be reduced by a factor of 1 over square root of 2, because you kill half of the coordinates. So of course, this is very heuristic.
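In symbols, the step being set up here-- under the rough assumption that the ReLU zeroes out about half of the coordinates and therefore about half of the squared norm-- is

\[
\|h_{i+1}\|_2 \;=\; \|\sigma(W h_i)\|_2 \;\approx\; \tfrac{1}{\sqrt{2}}\,\|W h_i\|_2 ,
\]

and the next step below bounds this further using the operator norm of the weight matrix.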
This is just a belief, like an assumption. And suppose this is the case-- then you say that this is less than 1 over square root of 2 times the operator norm of wi times the 2 norm of hi. So then you can see that each time, you can only grow your norm of hi by this factor. So if the wi operator norm is less than square root of 2, then you are shrinking the norm over the layers. So every layer your norm will become smaller and smaller, and eventually it will converge to 0. So your output will be very small. So that's why you have to make sure that the operator norm of wi is somewhat big. It cannot be too small. In the most optimistic case, I think you want the operator norm to be larger than 1. But in a kind of more typical case, you need it even to be larger than square root of 2. So you are in that case, right? So this means that-- so in some sense, this means that the product will be big. Good. So-- and another thing, motivation two, I think, is something I mentioned, right? So this is only a worst case upper bound. It's a very worst case bound on the Lipschitzness. And in practice, the Lipschitzness on the data points x1 up to xn might be better. Or the Lipschitzness on the population distribution-- on an x from P, from the population distribution-- could be better. So this bound doesn't capture that. And another thing is that-- it turns out, and we'll discuss this in later lectures-- it turns out that SGD prefers flat local minima. This is something that is widely believed, and in certain cases, we can prove this. And flatness of the local minimum is, roughly speaking-- we will justify this in later lectures-- but roughly speaking, this is the Lipschitzness of the model on the empirical data. So you can see that this is not the worst case Lipschitzness on all points-- it is the Lipschitzness on the empirical data. Which further justifies that we probably want to have a bound that depends on the Lipschitzness on the empirical data, but not the Lipschitzness in the worst case. So-- and in some sense-- another remark is that it's OK to have a generalization bound that depends on the empirical data. So, it's OK to make the generalization bound depend on the empirical data x1 up to xn. And sometimes, this is actually nice, because suppose the generalization error is less than some function of the classifier and x1 up to xn. This is still useful because you can still use it as an expensive regularizer. So there's no problem for our generalization bound to depend on empirical data. You probably don't want a generalization bound to depend on the population data, because you don't know how to compute it anymore. But if it depends on empirical data, it's fine. So basically, concretely, in the next lecture, we'll prove that the generalization error, or the test error, of theta is less than some function of the Lipschitzness of f theta on x1 up to xn, and then the norms of theta. And this function is a polynomial function, which doesn't have anything exponential in it. OK, I guess I'll stop here. Any questions? And interestingly, the proof for the next lecture is actually easier than today, I hope. I don't know how you think about the proof today. It's pretty brute force. So in that sense, it's actually not very hard. But it's pretty messy.
I guess I will see you next week.
Stanford_CS330_I_Advanced_MetaLearning_2_LargeScale_MetaOptimization_l_2022_I_Lecture_10.txt
Hi, everyone. My name is Yoonho Lee. I'm a TA for this course, and it's my first time giving a lecture, so hopefully everything goes well. We're going to be talking about the second of the advanced meta-learning topics. On Monday, Chelsea talked about memorization and task construction in meta-learning, and today we're going to talk about large-scale meta-optimization. As a quick reminder, homework three, the one on language models, came out this Monday and is due in a week. We'll start with a big-picture question about meta-learning and why we should even do it. You can think of learning methods as sitting on a spectrum between hand-designed priors and data-driven priors, and there has been a continual shift toward the data-driven end. A long time ago, people directly modeled image formation. A slightly more data-driven approach was to construct hand-coded features and extract them from actual data points. Then people started learning the features themselves end to end, and by fine-tuning from pre-trained ImageNet features, we get networks whose weights encode a very good prior about what natural images look like -- a more data-driven prior. The reason we keep moving in this direction is that more data-driven approaches are more scalable: if you keep throwing more real data at them, they get better and better at encoding real priors and at downstream tasks. So one of the pitches for meta-learning is that it is even more data-driven than end-to-end training of a network, because we directly learn the learning algorithm itself. A question we should stop and ask is this: we move down the spectrum because we want to scale better -- we want something that may start out not so good but gets better as it sees more data -- so do meta-learning methods actually work at scale? For the meta-learning algorithms you've seen in this class, like MAML or prototypical networks, can you actually give them more data and have them work better? Really, the answer is kind of no. So if you were lured into this course by the promise of being very data-driven and learning from all the data you have, it doesn't really scale yet. In today's lecture we'll talk about why that is and what we can do about it, while still staying in a meta-learning setting. The plan for today: first we'll motivate large-scale meta-optimization -- what it is and why we should do it -- then we'll look at some applications, and then we'll look at two approaches in particular that can handle large-scale settings. We won't have time to go deep into most of this, but by the end of the lecture the goal is that you can recognize scenarios where large scale makes existing meta-learning approaches fail, and broadly understand some techniques for handling those scenarios. I think we can roughly summarize a lot of meta-learning approaches as doing direct backpropagation. This is the black-box model that you built in homework one, I believe: it's just a network that takes in all of your support data and your query data and outputs predictions.
In the same way, MAML and all the optimization-based approaches are one big computation graph that takes in your support and query data, and at the end you backpropagate; PyTorch automatically does the differentiation for you. The same story holds for the non-parametric methods. The commonality is that all of these methods first construct a task-learning computation graph and then backpropagate through the whole thing. It's a general recipe you can apply to any learning computation graph you come up with, and it's simple in the sense that autograd does all the work for you. But the core issue is that your memory cost scales with the size of the computation graph, and there are learning settings where you would like a much bigger graph -- which is what we'll talk about today. To give you a rough sense of how big these graphs are, this is from a meta-learning paper; the details don't really matter. The standardly used four-layer CNN has about 10^5 parameters. Some works use bigger networks like wide ResNets, and ResNet-12 has about 10 million parameters, which might seem like a lot, but the inner computation graph involves at most one forward pass, one backward pass, and another forward pass, so you don't have to multiply the network size by much to get the whole graph. By contrast, here's a very toy example from the official PyTorch tutorial for learning, I think, CIFAR-10. The network has a bit fewer than 10^7 parameters, and we train it for five epochs, which on CIFAR-10 is about 4,000 steps. If you try to calculate the size of that entire unrolled computation graph, it's very big -- I think about 100 gigabytes; I may be wrong, but it's definitely bigger than whatever GPU you have. So in these scenarios, if we want to meta-learn learning algorithms at this scale, what can we do, given that we can't apply direct backpropagation? Again, the three big meta-learning approaches we've covered can all be summarized with this F_learn, the inner-loop computation graph with some meta-parameters theta: we construct it and backpropagate through the whole thing. So a question for you: when might this F_learn be too big to apply direct backpropagation through? [Student responses, partly inaudible.] A pretty common one: the model doesn't fit on my GPU. Yes -- if your model is bigger than your GPU, that's one case, and it occurs maybe too commonly. So F_learn is too large when we have a big network and/or many gradient steps: even if the model is small, if you take many gradient steps, the whole graph is the model size times the number of steps, so it's too big. Another possibility is that the inner optimization includes second-order optimization; if you have to backpropagate through second-order optimization, that's another scenario where the graph gets too big. A rough sketch of why the memory grows with the number of inner steps is below.
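To see concretely why direct backpropagation runs out of memory, here is a minimal PyTorch-style sketch of an unrolled inner loop; the tiny linear model, the random data, and the step count are placeholder assumptions, not the networks from the slides. The point is only that create_graph keeps every inner step alive, so the graph (and memory) grows linearly with the number of inner steps.

```python
import torch

# Hypothetical tiny setup: the model, data, and sizes are placeholders.
model_dim, n_inner_steps, inner_lr = 128, 50, 0.1
meta_param = torch.randn(model_dim, requires_grad=True)   # e.g. a MAML-style initialization
x_support, y_support = torch.randn(32, model_dim), torch.randn(32)
x_query, y_query = torch.randn(32, model_dim), torch.randn(32)

w = meta_param  # inner-loop parameters start at the meta-parameters
for _ in range(n_inner_steps):
    loss = ((x_support @ w - y_support) ** 2).mean()
    # create_graph=True keeps every intermediate step in memory so we can
    # later differentiate the whole chain w.r.t. meta_param -- this is why
    # memory grows linearly with n_inner_steps.
    (grad,) = torch.autograd.grad(loss, w, create_graph=True)
    w = w - inner_lr * grad

outer_loss = ((x_query @ w - y_query) ** 2).mean()
outer_loss.backward()            # backprop through all 50 inner steps
print(meta_param.grad.shape)
```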
An extreme example: if your inner optimization is meta-learning itself and you want to meta-meta-learn how to meta-learn better -- which I don't recommend you work on, but if you wanted to -- the graph would be far too big, and you couldn't use direct backpropagation. When we allow a bigger F_learn, we get to consider much more interesting meta-parameters theta, so let me show you a bunch of examples. First, there's MAML, where we learn the initial parameters: the outer objective is the loss after the inner loop takes a gradient step on the training loss. And as you've seen in homework two, you can also learn the learning rates, which used to be a fixed component of the computation graph, by treating them as meta-parameters and learning them so that final performance is better. The point I want to make is that really any component of your computation graph can be a meta-parameter. Building on the learning-rate example, your optimizer itself can be something you meta-learn. What is an optimizer, after all? It takes in your gradients and your current parameters and gives you new parameters, and that mapping doesn't have to be hand-designed. To get even stranger, we can meta-learn the loss function. By loss function I mean things like the cross-entropy loss: something that takes in your predictions and the true labels and gives you a scalar, which you then backpropagate from. For example, you can have a network L_phi with two inputs -- your predictions and the ground-truth labels -- that outputs a scalar. You treat that output as you would a regular loss function: backpropagate through it and do everything else as usual. You can also directly learn the data set: the thing you feed through your network doesn't have to be your original data. For images, you can keep a four-dimensional tensor that acts as your data set, apply your augmentations or anything else to it as if it were data, and optimize it so that downstream learning works better. Can you give an example of when you would want to learn a loss function? It seems like you'd need a real loss you are actually trying to minimize, and you're learning this intermediate loss just to get there. Right -- we always need a loss function in the outer loop, but every loss function we use is already a proxy. Think about classification: what we really want is usually accuracy, and we can't train on that because it isn't backpropagatable, so we use cross-entropy as a proxy. It may not be the case that cross-entropy is the best thing to optimize to get high accuracy, and learning the loss searches in that space. Wouldn't you still have to backpropagate from the accuracy in order to find the loss that maximizes accuracy? We can't backpropagate through the accuracy itself, but we can still minimize an outer objective that incorporates the accuracy. A minimal sketch of a learned loss network is below.
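As a concrete illustration of the learned-loss idea, here is a minimal sketch; the architecture, the name LearnedLoss, and all sizes are my own illustrative assumptions rather than anything from the slides. The inner learner treats loss_phi exactly like cross-entropy, while phi would be adjusted in the outer loop (by one of the methods discussed later) so that post-training accuracy improves.

```python
import torch
import torch.nn as nn

class LearnedLoss(nn.Module):
    """Hypothetical learned loss: maps (predictions, labels) to a scalar."""
    def __init__(self, n_classes: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_classes, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, logits, labels):
        one_hot = nn.functional.one_hot(labels, logits.shape[-1]).float()
        per_example = self.net(torch.cat([logits, one_hot], dim=-1))
        return per_example.mean()

loss_phi = LearnedLoss(n_classes=10)
logits = torch.randn(32, 10, requires_grad=True)
labels = torch.randint(0, 10, (32,))
inner_loss = loss_phi(logits, labels)
inner_loss.backward()   # the inner learner backpropagates through loss_phi
                        # just as it would through cross-entropy; phi itself
                        # is updated by the outer loop.
```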
Does that make sense? No, but it's OK. [LAUGHING] We learn an L such that running gradient descent on L results in higher accuracy. But when you do that, don't you have to take the gradient of the accuracy with respect to this L? Ah, OK, that's the confusion. The point I'm getting at is that we don't necessarily have to backpropagate through this; there are ways of optimizing things you can't differentiate through, and that's the stuff I'm going to talk about later. The other question was about the slide -- oh yeah, there should be an L there; that's a typo. If we're learning the loss function, could we also learn how to regularize, so the outer loop works well but doesn't overfit? Yeah, absolutely: learning a regularizer is a variant of learning a loss function where the total loss is the original loss plus something else. Oh, I should be repeating questions, sorry. Next question: how would you define the search space for the loss function -- a search across a finite set of candidate losses, or more of a formula for a loss that could be really novel? That's really up to you. You can search over a finite set of candidates, or you can make the search space the set of networks with the right input/output structure. Next: when we're trying to optimize the data set, what does that actually mean -- are we improving real examples, or creating examples from scratch? I agree this is kind of a foreign concept. The simplest way to do it is to directly parameterize a four-dimensional tensor, with the interpretation that each index along the batch dimension is an RGB image. You feed it forward through your network as if it were actual data, take gradient steps with respect to the loss to get final parameters, and then optimize the tensor for downstream validation loss or performance. So we're looking for examples such that, if we minimize the loss on those examples, we get better performance later on. And is there a less extreme version of this -- could you learn augmentations instead of creating the data set outright? Yeah, absolutely: you can start with actual images and learn mild augmentations of them so that you get better performance. I think a couple of teams are working on that as a final project. I'm going to move on in the interest of time, but first, here is a minimal sketch of the learnable data set idea.
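This is a minimal sketch of a learnable synthetic data set in the dataset-distillation spirit; the one-image-per-class setup, the tiny linear inner learner, and all shapes are illustrative assumptions, not the method from any particular paper. The key point is that the synthetic images are themselves tensors with requires_grad=True, so the outer objective can be differentiated with respect to them.

```python
import torch

n_classes, channels, height, width = 10, 3, 32, 32
synthetic_images = torch.randn(n_classes, channels, height, width, requires_grad=True)
synthetic_labels = torch.arange(n_classes)          # one synthetic image per class

def inner_train(images, labels, steps=10, lr=0.1):
    """Train a tiny linear classifier on the synthetic data, differentiably."""
    w = torch.zeros(images[0].numel(), n_classes, requires_grad=True)
    for _ in range(steps):
        logits = images.flatten(1) @ w
        loss = torch.nn.functional.cross_entropy(logits, labels)
        (g,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - lr * g
    return w

# Outer step: validation loss on *real* data (placeholders here) is
# differentiated with respect to the synthetic images themselves.
real_x = torch.randn(64, channels, height, width)
real_y = torch.randint(0, n_classes, (64,))
w_final = inner_train(synthetic_images, synthetic_labels)
val_loss = torch.nn.functional.cross_entropy(real_x.flatten(1) @ w_final, real_y)
val_loss.backward()                 # gradients flow into synthetic_images
print(synthetic_images.grad.shape)
```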
So, in this large-scale meta-optimization setting, what applications have previous papers looked at? One application is hyper-parameter optimization. People don't usually call this meta-learning, but you can view it in the same rough frame. When we optimize hyper-parameters, the inner loop is basically a long SGD training chain, and of course we can't directly backpropagate through that. By optimizing hyper-parameters this way, existing work has shown benefits over random search and found better LSTM hyper-parameters. In the same framework you can also optimize the parameters of a data-augmentation network, which start to look a lot more like parameters than hyper-parameters. Things you can optimize this way include the dropout fraction, learning rates, and the weights given to regularization terms. Another application is data set distillation, which is close to the idea of optimizing data sets we just discussed. The method in this particular paper matches gradients: it constructs synthetic data points whose gradients are similar to the gradients computed on the real data. In this way you can compress an existing data set -- a 10-way classification problem with many images per class -- into a single image per class and still get pretty good performance. It's definitely not as good as the original data set, but you can get around 99% accuracy on MNIST after training on just those 10 images. You can also learn optimizers. This builds on learning the learning rate by parameterizing the whole optimizer as a neural network. A very simple parameterization is a network that, for each parameter, takes in the current gradient and the momentum -- a moving average of previous gradients -- and outputs an update; a minimal sketch of this parameterization appears after this section. The paper here uses a more complex architecture that also takes in the second moment, the current training and validation loss, the tensor shape, and the gradient norm. So viewed as meta-learning, your optimizer can take in much more information than traditional optimizers, which usually use only the first and second moments of the gradients. Their learned optimizer works at the scale of big ResNets for many training steps -- here they optimize a ResNet for 10,000 steps, which is obviously too long to backpropagate through directly. They even use it to train itself: on a set of tasks they train the optimizer so it becomes quick at learning them, then completely reinitialize and use that optimizer to optimize the outer-loop loss, which is pretty meta. Another thing you can do is neural architecture search. Here is one way to parameterize a neural architecture; other works do it differently. They have an RNN controller that outputs the structure of your network -- things like the number of filters, filter height and width, and stride for a CNN. When they apply this to recurrent cells, it produces a very big cell that no person would come up with from first principles, but it seems to work better than LSTMs or GRUs. This was in 2017, so a long time ago, but at the time they achieved state-of-the-art error rates on -- I think this is ImageNet; I'm not sure.
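Here is the minimal sketch of the gradient-and-momentum parameterization of a learned optimizer mentioned above; the MLP architecture, names, and sizes are my own illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """Hypothetical learned optimizer: maps each parameter's (gradient,
    momentum) pair to an update via a small MLP."""
    def __init__(self, hidden: int = 32, beta: float = 0.9):
        super().__init__()
        self.beta = beta
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def step(self, params, grads, momenta):
        new_params, new_momenta = [], []
        for p, g, m in zip(params, grads, momenta):
            m = self.beta * m + (1 - self.beta) * g
            features = torch.stack([g.flatten(), m.flatten()], dim=-1)  # (numel, 2)
            update = self.net(features).view_as(p)
            new_params.append(p - update)
            new_momenta.append(m)
        return new_params, new_momenta

# Inner loop usage: apply the learned update instead of SGD; the outer loop
# then adjusts self.net (via truncated backprop, evolution strategies, etc.)
# so that the loss after several such steps is small.
opt = LearnedOptimizer()
w, g, m = [torch.randn(5, 3)], [torch.randn(5, 3)], [torch.zeros(5, 3)]
w, m = opt.step(w, g, m)
```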
So, through these applications -- we don't really have time to go into depth on any of them -- hopefully you're convinced that large-scale meta-optimization is possible. Now we'll look at some approaches to doing it. I'm going to talk about two approaches; some of the works above use one of these two, and some use other approaches, which I'll briefly touch on at the end. Are there any questions first? One question about the learned optimizer from the paper: is the optimizer learned simultaneously as the main training happens, or do we need some offline data to train the optimizer first and then use it? Good question -- in this paper I believe it's learned alongside: you train a little, update the optimizer, train a bit more, and so on. Another question about generalization: every time the optimizer makes a prediction it sees a new gradient, a new position in a loss landscape it has never navigated before -- how is it able to make predictions in unseen situations? Right, how does it generalize to places in the landscape it hasn't seen? First, note that there is a pretty simple solution to the meta-optimization that already generalizes: if the network just learned the identity function on the gradient -- passing it straight through -- that works reasonably well by itself and generalizes. And because we explicitly optimize the outer-loop loss after taking gradient steps, that is the secret sauce that pushes it toward learning generalizable updates. There was a follow-up about whether everything really is unseen for the optimizer: my previous answer may have been confusing -- it does reset. After you train on a couple of runs, we completely reset the inner-loop parameters and learn again from there, so whatever timestep it arrives at, it has probably seen something like it before. OK, let's move on. We're going to talk about a couple of approaches to large-scale meta-optimization, and before laying them out, let's visualize what happens with unrolled computation graphs. When you optimize a set of parameters iteratively, you can unroll that computation into a sequence of parameters: we start from parameters phi_1, take a gradient step to get phi_2, and keep going. The blue blocks are your inner-loop learner's parameters, and at the final timestep we compute the validation loss. Our goal is to modify something about this inner-loop learning system so that that red node, the final validation loss, becomes low.
From this viewpoint, we can place all of the meta-parameters we discussed on the same picture. The initial parameters sit at the start of the chain, and the red node, the final validation loss, sits at the end -- that long path is exactly what makes backpropagation through the whole training loop hard. For learned losses, regularizers, and optimizers, the meta-parameters appear in the mapping from one parameter block to the next. For data set distillation and learned augmentations, the meta-parameters appear as the inputs to the network at every step. For the architecture, the meta-parameters are embedded inside the parameters themselves and interact in a way that's hard to draw with arrows. The point I'm getting at is that you can view all of these as suffering from the same core issue: you have to backpropagate through a very long chain to get from the red node back to the green meta-parameters, and we just can't fit all of that in a GPU, so we can't apply direct backpropagation. So let's talk about truncated backpropagation. It's quite a simple algorithm. It uses a truncation length -- say, 3 -- and as you feed things forward during the inner optimization, you just detach everything from more than three steps ago. That lets you apply direct backpropagation to the nearby green nodes while everything from earlier is simply ignored. When we choose a truncation length like this, what could happen if we use too short a T -- what are the trade-offs? Right: backpropagation is faster and cheaper, but we just can't learn anything from earlier -- we basically can't learn long-term dependencies. With a truncation length of three, we only optimize for what is beneficial in the short term while ignoring every way the green nodes could help the red node further down the line. Implementing this is quite simple. This snippet is for an RNN, and it's slightly different from the visualization, but the main trick is that as you feed things forward, you detach tensors once they become too old, and your GPU is happy with that. So truncated backpropagation is very simple: as with direct backpropagation, autograd handles everything, as long as you detach things when you have to. The problems are, first, that it's a biased estimator -- you're not getting the true gradient of the red node with respect to the green ones -- and that bias can harm performance. More specifically, we cannot take long-range dependencies into account, and when we meta-learn with big computation graphs, what we really want is to learn something that keeps being useful as you keep taking steps, so this is undesirable. One point that can be good or bad is that by varying T you trade off correctness against memory cost: with the maximal T you are unbiased and capture all the long-range dependencies but need a lot of memory; with a short T you ignore the long-range dependencies but it's cheaper to compute. Here is a minimal sketch of the detach trick.
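This is a minimal sketch of truncated backpropagation in PyTorch, meta-learning an inner-loop learning rate on a toy quadratic; the task, the truncation length, and the step counts are placeholder assumptions, not the RNN example from the slides. The detach call at the end of each window is the trick: it cuts the graph so memory stays bounded, at the cost of ignoring dependencies longer than T steps.

```python
import torch

T = 3                                            # truncation length
A, target = torch.randn(8, 8), torch.randn(8)    # toy quadratic task

log_lr = torch.tensor(-3.0, requires_grad=True)  # meta-parameter: log inner learning rate
meta_opt = torch.optim.SGD([log_lr], lr=1e-2)

w = torch.zeros(8, requires_grad=True)           # inner-loop parameters
for step in range(1, 101):
    inner_loss = ((A @ w - target) ** 2).mean()
    (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    w = w - torch.exp(log_lr) * g                # the learning rate enters the graph each step
    if step % T == 0:
        val_loss = ((A @ w - target) ** 2).mean()  # outer objective for this window
        meta_opt.zero_grad()
        val_loss.backward()                        # gradient only flows back T steps
        meta_opt.step()
        w = w.detach().requires_grad_(True)        # cut the graph: older steps are forgotten
```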
Is this specific to RNNs, or does a similar approach apply to, say, the inner loop of MAML? Oh yeah, the question is whether this is specific to RNNs -- no, and I should have been clearer about that. This works for any sequential inner optimization loop: the blue blocks can be parameters, and moving to the right is a step of gradient descent. So would you truncate MAML by recomputing gradients every three timesteps, or by just forgetting what came before? You would detach everything from before. Say you take 100 gradient steps: at the 100th step, before you backpropagate, you detach everything from roughly the 97th step back. You still use the parameter values; the gradients just don't flow past that point -- you call detach on those variables. How exactly would you detach the previous parameters? Just calling detach should suffice. And how do you keep track of those variables? You would keep something like a tuple or a list of them. Could you explain again what you mean by long-range dependencies in the inner-loop setting? Sure: say that instead of 5 steps you take 100. There is some way in which the input at the first step influences the output at the last step, but by truncating somewhere in between, the gradient doesn't flow from the end all the way back to the beginning, so we ignore all of those long-range dependencies across timesteps. OK, let's move on to gradient-free optimization. This directly avoids the issue of not being able to backpropagate through the graph, because we can do the optimization without using gradients at all. The algorithm we'll talk about is evolution strategies, though there are other gradient-free optimization methods. It's roughly inspired by biological evolution, where whatever has the highest survival rate keeps expanding -- and nature doesn't backpropagate, so we just borrow that trick directly. Is this like a genetic algorithm? They're very closely related, at least; I think of them as essentially the same idea. Here is roughly how evolution strategies works; let's walk through it step by step. Imagine your parameter space is two-dimensional, so any parameter combination is a single point, and the loss surface is drawn so that lighter regions have better loss. First, we initialize a distribution over parameters with mean mu and standard deviation sigma, and sample a bunch of parameters from it -- say seven points. After sampling the particles, we evaluate them and keep the top n, where small n is less than big N -- say the top four out of seven. Then we just fit the new mean and variance to that top population, and in the next generation we sample from this updated distribution.
And we just keep doing this: sample again from the new distribution, take the top four, refit, and the population keeps converging toward the good region without ever needing to backpropagate or take gradients of the loss surface. Won't this get stuck in local minima pretty easily -- is it exploitative rather than exploratory? It definitely can be, though you could say the same of stochastic gradient descent; any local search method can get stuck in local minima. What people usually do is add an exploration term -- increase the variance whenever the distribution gets too confident -- which mitigates this. But yes, if some other region were much better, it's possible you would just be stuck here forever. Let's walk through a very simple example where we optimize the learning rate rather than this 2D surface. You initialize the mean learning rate and the noise to something reasonable -- around 0.001 is probably a sensible learning rate -- and sample a bunch of learning rates from that distribution. The inner loop is where you get to do basically anything you want: here, for each sampled learning rate, we initialize a network and run SGD. Then we evaluate all of those runs, pick the top n with the best accuracy, take the mean and variance of those learning rates, and repeat. As we iterate, we converge toward better learning rates for whatever data set and architecture we're considering. When you say run SGD, do you mean for a small number of steps, not full training? It can be anything you want -- I'm thinking of actual training, many epochs. The reason we can afford that is that we choose the best members without ever holding the entire computation graph in memory. Here's a conceptual question for you: what would happen if, instead of meta-learning the learning rate, we used this exact algorithm to optimize the initial parameters, as in MAML? It would be very inefficient, because the number of parameters is very large. Right -- I'd phrase it as: because the parameter space is so high-dimensional, adding high-dimensional noise would essentially never land on good parameters, so we would basically never observe a good outer-loop loss. It would learn in the limit of infinite time, but it would be extremely sample-inefficient. Before moving on, here is a minimal sketch of the learning-rate example.
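In this sketch, train_and_eval is a placeholder for "initialize a network, run SGD with this learning rate, return validation accuracy"; here it is just a fake score peaked near 0.01 so the loop runs end to end, and the population sizes are illustrative.

```python
import numpy as np

def train_and_eval(lr: float) -> float:
    # Placeholder for real inner-loop training; peaks near lr = 0.01.
    return float(np.exp(-(np.log10(lr) - np.log10(0.01)) ** 2))

mu, sigma = np.log(1e-3), 1.0       # search in log-space so learning rates stay positive
big_n, top_n = 16, 4

for generation in range(20):
    log_lrs = np.random.normal(mu, sigma, size=big_n)      # sample N candidates
    scores = np.array([train_and_eval(np.exp(x)) for x in log_lrs])
    elite = log_lrs[np.argsort(scores)[-top_n:]]            # keep the top n
    mu, sigma = elite.mean(), elite.std() + 1e-3             # refit the distribution

print("estimated best learning rate:", np.exp(mu))
```

Note that nothing here ever calls backward: each candidate run can be arbitrarily long or non-differentiable, and the runs can be evaluated in parallel.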
Which brings us toward the advantages of evolution strategies -- but first, a question: is there any way to divide the dimensions into groups, run the optimization separately on each group, and then somehow merge them into an optimum for the whole space? I think that can work if the loss surface factorizes, in the sense that optimizing one set of parameters has no effect on the others. Is there a way to account for the interdependencies in the merging step -- any research on this? Not that I know of; you would have to rely on some sort of independence condition, so I'm not sure how that would work, and I'm not aware of related work. Do we usually use evolution strategies in the inner loop or the outer loop? Here it's all happening in the outer loop: in the example, the inner loop was just SGD and we used evolution strategies in the outer loop. Of course, you could also use evolution strategies in the inner loop. But if we use it in the inner loop, how can we optimize the outer loop, since there's no gradient to backpropagate? Right -- to my knowledge there's no way to backpropagate through evolution strategies, so you would need an outer-loop optimization method that doesn't require gradients. Inner-loop evolution strategies with outer-loop evolution strategies would work, for example. For each parameter update, what's the difference in computational cost between evolution strategies and normal gradient descent? In terms of memory, evolution strategies just needs a copy of the parameters being updated -- the network times one -- whereas direct backpropagation needs the network times the number of steps, because you have to keep everything in memory in order to backpropagate. So in that sense this is constant while direct backpropagation grows. There was a follow-up, partly inaudible, about whether you could store each step's gradients once and simply re-read them later, so the cost is only reading rather than keeping the whole graph. I think that's essentially gradient checkpointing, which saves memory at the expense of time: you store occasional checkpoints and then, when you backpropagate, recompute -- basically repeat -- everything in between. With that, backpropagating through a long chain becomes feasible memory-wise, but the time cost quickly makes it infeasible. Here, by contrast, the candidates can be evaluated completely independently, in parallel, and you don't have to store anything. Thanks. Yeah?
Is there a proper ratio between the number of candidates you sample and the dimension of the parameters being optimized with evolution strategies? You mean between big N and the dimension of alpha. I don't think there are very strong guidelines, but it can't be the case that your loss surface is very high-dimensional and very curved and you explore it with an N much smaller than that: if you have a million-dimensional parameter space, you can't use N equals 10 to explore it. Does that answer your question? Another question: how does evolution strategies compare to Bayesian optimization -- say a Gaussian process that predicts the loss of each candidate? With Gaussian processes you need to store basically all the samples you've taken throughout history, whereas this doesn't; on the other hand, a Gaussian process with the right sampling strategy can find a global optimum, but it becomes much more expensive as the dimension grows. So when would you decide to use evolution strategies? That's a good question, and honestly I don't have a very good sense of when each is better. For Gaussian processes, your prior and likelihood have to be well suited to the problem, and we generally don't have that for high-dimensional data: if the inner-loop task is large-scale image classification, I don't really know how we would do that with Gaussian processes. If you do have a really good prior and likelihood model, then the GP can be better. Storage-wise, something like evolution strategies scales a lot better, because we don't have to keep track of all the data points we've seen. So, the main advantage of evolution strategies is that the memory cost is constant with respect to the number of inner gradient steps: whatever you do in step 2, its size doesn't matter, because we never differentiate through it. Another advantage is that it's very easy to parallelize: as soon as you sample your particles, you can run everything completely separately, and all you need back is the final score, so if you have a lot of parallel machines, this is well suited. And you can even have non-differentiable operations in the inner loop -- sampling, discrete operations -- because we don't rely on gradients at all. The main disadvantage is that it struggles when the meta-parameters -- the mu, or theta, whatever you want to call it -- are very high-dimensional, or when the loss surface is very complex, as in the example of optimizing the initial parameters with evolution strategies. There are a couple of other approaches to large-scale meta-optimization; they are a bit less commonly used, but the methods are pretty interesting, so I'll very briefly touch on them.
There are papers at the bottom of the slide that you can read if you're interested. One is implicit differentiation: it leverages the assumption that the inner loop converges to an actual optimum and differentiates through that optimality condition, so you get the full meta-gradient without storing the whole trajectory. The assumptions may not be satisfied in all cases, so it's not perfect, but it seems to work sometimes. Another is forward-mode differentiation. Backprop is basically the chain rule where you exploit the fact that the output is one-dimensional: given all the terms in the chain rule, if you start multiplying from the output side, every intermediate product has one dimension equal to 1, so the cost stays roughly linear. If you multiply from the input side instead, you get intermediate terms that are quadratic in the parameter or activation sizes, which costs a lot more compute. The advantage is that you don't have to store everything: all the terms you need are available at each step, so you can propagate derivatives forward as you go without keeping the whole graph. It's useful when you don't want to store everything, but there are cases where the extra compute makes this too costly as well. And that's roughly what we have today. We've talked about large-scale meta-optimization: we motivated the problem and why existing meta-learning approaches fail in large-scale settings, we looked at some applications, and we covered two approaches, truncated backpropagation and gradient-free optimization. The goals were that you now know scenarios where existing approaches can fail because of scale, and that you understand, in a broad sense, some techniques for this problem setting. I hope that's been accomplished. We have some time for questions if you have any. Has anyone tried something like a greedy algorithm, where you first optimize using only the first three inner steps, fix the result, and then optimize for the following three inner steps, so that when you optimize the second three steps you don't need to backpropagate into the first three, because they're already fixed? So you're describing a two-stage method where you do three steps first, completely freeze what happened, and then do three more steps, so you never backpropagate through the whole computation graph? I think what you're describing can be viewed as truncated backpropagation, in a sense: you're choosing to ignore the future influence of your first three steps. That influence can still exist, and in the ideal case we would want to track it, but you're making the simplifying choice to ignore it -- which is exactly what truncated backpropagation does. But truncated backpropagation assumes that the result from the later steps also applies to the first few steps. If instead we start from the first steps and then optimize the following steps, we can always make sure we don't get a degenerate solution, because the following steps can do nothing. Could you repeat the degenerate-solution part? With truncated backpropagation, as in the slides, the result from the last three steps is detached from the earlier inner optimization.
But if we start from the beginning and then optimize the following steps, those following steps can always do nothing, so we get exactly the same result as if we had only done the first few steps we just optimized. So in your setting, after the first three steps, do you remember the parameters at the third step, or do you re-initialize? Because if you remember them, there is still some long-term influence, and if you re-initialize, you're just doing two tasks sequentially, and in the latter case we can treat it as a two-task setup. Let's take this offline later. If there are no other questions -- actually, here's a good practical question: if you're doing model development and you're thinking, maybe I should optimize my initial parameters, maybe I should optimize my learning rates, but you're also changing the model as you go along, can you use these strategies, or is it better to just do the simple thing every time? Under what circumstances do you re-tune learning rates and so on? I think that very much depends on how much you change the model. If you don't change it by much, I'd assume the optimal hyper-parameters are quite similar, so using the previous solution, perhaps as the starting point for your hyper-parameter optimization, is a reasonable thing to do. If you make really big changes, maybe starting from scratch is better. It very much depends on how you set things up.
Stanford_CS229_Machine_Learning_I_Supervised_learning_setup_LMS_I_2022_I_Lecture_2.txt
So hello, welcome to 229. We're starting a block of three lectures where I get the privilege of spending some time with you and walking you through the building blocks and basics. Before I get into the plan for those three lectures, a couple of logistics. I posted something on Ed that explains why I'm setting up lecture the way I am. You're not obligated to read it, but if you're interested, go ahead; I'm super happy to take feedback and discuss any of it. One of the things I liked about the pandemic was that more people were asking questions during class, and I think part of that was that people used the anonymous feature on Zoom quite a bit. I wish we still had that; we don't in this class, for various reasons. So instead, I've just set up an Ed thread called "lecture 2" -- feel free to fire away questions there. I may not take all of them, and I reserve the right to skip some; TAs may jump in and answer, and I'll try to follow up on anything that's left. It's really helpful to me when you ask questions, and I'm happy to talk about whatever you want -- relevant to the class is helpful, but pretty much whatever you want. Second, there are a couple of downloads I put up before my lectures. One is a handwritten version of what I'm going to talk about, which are the same notes I use, slightly modified, and the other is a template in case you want to follow along. Again, you don't need any of this; you can just sit, watch it on video, watch it here, ask questions, do whatever you want. It's there so you have the material, and so that things like the data I want to show you look real -- I can cut and paste it in, and you can have it in front of you while I go through it. All right, those are the logistics. I'm going to try to use the iPad; I like the whiteboard feel, and it's a good compromise because it slows me down -- if I get excited I'll start talking all kinds of nonsense -- so this will focus me a little more on the class, and we'll see how long I last. What we're going to do in these first three lectures is build up increasingly sophisticated machine learning models, and what you're going to see is that they are very, very similar to a model you probably already know and love: linear regression. If you don't know linear regression, don't worry -- today's lecture is effectively linear regression with slightly fancier notation and a few extra bits around the algorithm; it's basically just fitting a line, and hopefully it's something you've seen and can grab on to. Then in the next lecture we'll generalize from regression, the traditional fitting-a-line setting, to classification, which has a couple of twists. We'll choose our notation carefully, and the way we set up classification -- we'll talk about what classification really is -- lets us handle a much larger class of models, the exponential family of models, which will rear their heads throughout the course.
So we're going to see a precise definition that covers a huge number of statistical models and lets us treat them in one unified way: we don't have to understand the details of every little model, because we have one abstraction for how to find its parameters and how to do inference on it -- that is, get a prediction out of it -- and for understanding these algorithms. I'll try to highlight, as we go, which of these pieces carry over to what I'd call modern, industrial machine learning; feel free to ask questions. Effectively, the way we solve these algorithms, or the underlying optimization problems, is exactly the way we train everything from image detection to how search works, to natural language processing, to translation. Weirdly enough, this abstraction carries over to all of that, and the underlying workhorse algorithm, which we'll see, is called stochastic gradient descent. We'll introduce it in this absolutely simplest setting. So that's the idea: we'll build parallel structure across the next three lectures -- linear regression, classification, and then the generalized exponential family -- and they'll have a very parallel structure. If you go back to the notes, you'll be able to pull out: oh, this is the model part, this is the solving part, and so on. Then Tengyu takes over and teaches you a bunch of awesome stuff -- neural nets, kernels, all the rest -- and then I come back and teach unsupervised learning, where there is again a different but very similar structure, and graphical models and the rest make an appearance. So today the plan is: first, some very basic definitions. We'll be a little bit pedantic there, but that doesn't mean you shouldn't ask questions -- if you don't understand something, I haven't done my job, so fire off a question in any form you like. Then we'll talk about linear regression which, as I said, is fitting a line, except that eventually we'll fit high-dimensional lines, so we'll want to abstract that away. We'll talk about batch and stochastic gradient descent, two algorithms in machine learning -- and, as Tengyu mentioned, we're not great with terminology: this algorithm was called incremental gradient descent in the '60s, it's been around forever, and formally it isn't even a descent method, but that doesn't matter. The point is these are old ideas that people have used for a long time, and weirdly enough it's what we use every day -- it's a workhorse algorithm you're going to see. Then I'll very briefly cover the normal equations, because they show up on your homework and also give you some practice with vector derivatives. You do need to know the vector-derivative material to make your life easier in this class -- you'll occasionally have to compute a gradient or a derivative -- and this is a place where you know what the right answer is, so it's an easy place to check yourself. I wouldn't say the normal equations are the most important thing you'll learn in this class; they're just solid, you should know what they are, and it's not hard. OK, great -- let's talk about supervised learning.
All right, so this next section, as I mentioned, is going to be all supervised learning, and it will all follow the same general schema. We're going to have what we call a prediction function: a function h that maps from some set X to some set Y -- we'll use this notation consistently. Before defining this formally, let me give a couple of examples. One idea is that X could be some set of images, and we could ask: does the image contain a cat? That was actually a very important machine learning problem at one point, and people still work on it. Or: what object is in this image? That's a prediction, where the y's are a set of labels like cat, dog, and so on. X could also be text, and we could ask questions we arguably should do better on, like: is it hate speech? So these x's are examples of the data types we want to work on, and these are the labels, the y's, we're talking about. We'll also look, as Tengyu showed in his lecture, at house data. Historically, house data is one of the most common machine learning and statistics tasks -- it's in every Stats 101 course, so you may have seen it before; I kind of hope you have. As we go through it, I'll point out real data you can use to try this out in a competition like Kaggle: there's a Kaggle competition where you can download house prices from Ames, Iowa and try to guess what they should sell for. People actually make money on that, by the way -- not everybody, as you'll know if you follow the news: Zillow tried to estimate, buy, and flip houses and lost a bunch of money; Blackstone, if you care about private equity, managed to make money doing it -- they bought houses and were able to predict what they would sell them for. So as trivial as it seems, these are problems people care about. Anyway, we have this X and this Y, and we need something else to make this a supervised problem: as we talked about yesterday, we're given a training set. What is a training set? Formally, it's just a set of pairs (x1, y1), ..., (xn, yn). Each xi lives in X; it's some encoding of an image -- maybe the bits in the image, maybe the RGB values -- or, for text, maybe the ASCII or Unicode characters. It's some bag of bits. Later we'll abstract this away and almost always work in a vector space, and we'll talk about where those vector spaces come from, but that's where the data actually lives. Each yi lives in the set Y; those are our labels. Our job, given that information, is to find a good h. We often call it h because it's a hypothesis. Now, that notion of good is going to occupy a fair amount of what we worry about over the next couple of lectures. Just to make the setup concrete before we get there, here's a tiny sketch of the abstraction in code.
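This is a minimal sketch, not anything official from the course materials; the type aliases, the toy numbers, and the hand-picked coefficients are all illustrative assumptions.

```python
from typing import Callable, List, Tuple

X = float          # e.g. lot area in square feet
Y = float          # e.g. sale price in dollars
TrainingSet = List[Tuple[X, Y]]
Hypothesis = Callable[[X], Y]

# A training set is just a list of (x, y) pairs (made-up numbers here).
train: TrainingSet = [(8450.0, 208500.0), (9600.0, 181500.0), (11250.0, 223500.0)]

def h(x: X) -> Y:
    """One possible hypothesis: a hand-picked affine function of lot area."""
    return 100000.0 + 12.0 * x

# "Good" will later mean: predictions close to y, on average, for new x's.
for x, y in train:
    print(x, y, h(x))
```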
So what does it mean for h to be good? In some intuitive sense, because I have these examples of x's and y's, one reasonable thing to expect is that I get them right more often than random chance. That's a very basic idea of what "good" would mean: you show me an image with a cat in it, and I get most of the cats right. Now, you've used enough machine learning to know we don't get it right all the time -- and it's still useful -- so we'll have statistical notions: we try to get it right on average. There are more advanced notions too -- I only mention this because Tatsu was talking about it in the class right before, so it's on the board -- you could also worry about how well you do on some groups versus other groups. On some groups you might predict really well, while other groups have qualities you don't predict as well on. You could worry about that and say: I only care about my prediction being as good as it is on the worst of these predefined groups. So you can have multiple notions of good. We're going to stick with the simplest and most basic -- how accurate am I at the task -- but this mathematical framework can accommodate all of those; when you actually write it down, the tweaks needed to produce those different loss functions, as they're called, are really quite straightforward mathematically. All I want you to take away is: we have a training set, that's what's provided to us; the yi's are the supervision, living in some set; and our goal is to find a good h among all possible functions. By the way, the class of functions from one space to another is enormous, so we're going to have to restrict it in some way. That's the setup for supervised learning. This set of pairs we'll often refer to as the training set, or the training data. And what we're really interested in -- which is probably a little counterintuitive the first time you hear it -- is not strictly what's called interpolation: we're not just trying to predict back the x and y pairs we already have; we're going to worry about how well we do on a new x and a new y. Why does that make sense? Imagine someone shows up with an image they just took with their phone. My phone is littered with pictures of my daughters; if I take a new picture of my daughter, the label should probably be the same as the last 1,000 pictures I took, but it's going to look a little different. When I show that picture, I don't care how well I did on the last picture I took of her; I care how well I do on this one. That's a little bit weird, and it means we're implicitly assuming these x's and y's are drawn from a large population of images out there -- we're sampling some piece of it -- and we want to do well on the images that will come in the future. That's why we think about it as prediction. It may not be great to just return the stored label of every x and y we've ever seen; we have to, in some way, generalize -- that's the technical term -- to those new images. So the reason we call this a prediction is that we care about new x's that are not in our training set.
Right, now, if you look at that, and you're mathematically minded, you're like, how the heck do you say anything about that? And hopefully, you got a clue there. If it doesn't make sense yet, don't worry. We're going to make some assumption like we randomly sampled from all of the images and how well do I do on another randomly chosen image. OK? That's what we're going to do. In some way, the set you train on, though better be like the set that you evaluate on, that you take your predictions on, or you're out of luck. If you train your model on pictures of my daughter and ask to know about cars, I don't know how well it's going to do, right? So there's clearly some link here. Now weirdly enough, although I say that, one of the big trends in machine learning that's going on right now. And in fact the course that I co-taught with Tatsu and Percy last quarter was about these large models that we just trained to predict kind of everything that's on the web. And they seem to do pretty well on things, so just want to highlight there's a really strange notion of good. You spend your whole life trying to think what good is if you're a machine learner. OK, a couple more things, as I said, I'm just going to go off on tangents if no one stops me. All right, so if y is discrete, this is just terminology. So it's a discrete space. We think about this as classification. OK, that's the terminology. You can think the simplest version is yes or no. Does it contain a cat, yes or no, binary classification. You could also have a bunch of different classes. Is it a car, a plane, a truck? What model of car is it? Those are classifications. They are enumerated sets. The other thing which you're probably familiar with from calculus, and we'll talk a little bit about today is when y is continuous. And this is called regression, regression, OK. All right, so this is an example of something that's discrete, this cat. And the house price, this is going to be an example of regression. And that's what we're going to look at today. In lecture three, we switch, and we start to look at classification, which has some subtle differences. OK, awesome. All right, let's look at some data. Any questions about the setup or kind of higher level questions about what it is, what goes on here? All right, sounds good, OK. so let's look at some real data here. I'll try and get it all on the screen. So I'm going to look at this house price data. As I mentioned, this is the Ames data set, which follows a very famous data set just for historical reasons of Boston house prices that you can go look at and download. You can download it in one line into Pandas if you want, happy to put information online about how to do that. This is real data of real houses and Ames. And so what I'm showing here is these are their real IDs. I just randomly selected some to kind of make the picture pretty just be honest. And then here's their sale price, right? So this is their actual sale price in the data. And this is their lot area. This is kind of like some notion of square feet that's actually present. This data set, I think, has something like 93 columns inside of it. I've just selected a small set of them. We'll come back to that a second. Now, one of the things that I did here is the first thing you should do when you're encountering a new set of data, and I cannot emphasize this enough, is look at it. The number of times that people, especially engineers and industry take their data and start running fancy stuff on it I'm like, well, did you look? 
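A small sketch of that "first, look at your data" step, using the Kaggle Ames house-price file the lecture mentions. The file name and column names ("train.csv", "LotArea", "BedroomAbvGr", "SalePrice") are assumptions based on the public Kaggle competition data, so adjust them to whatever your download actually contains.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the Kaggle "House Prices" (Ames, Iowa) training file.
df = pd.read_csv("train.csv")

# How many rows and feature columns we have, and a peek at a few columns.
print(df.shape)
print(df[["LotArea", "BedroomAbvGr", "SalePrice"]].head())

# First thing: look at the data. Scatter lot area against sale price.
df.plot.scatter(x="LotArea", y="SalePrice", alpha=0.4)
plt.title("Ames houses: lot area vs. sale price")
plt.show()
```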
I still remember when I was running a machine learning team at an unnamed large company. And they were like, why are you sitting in the cafe just labeling data, just looking at data sets for days. It's like, I don't know what's going on. I want to figure out what's actually what people are actually doing on this data set, and it's really important, OK? So when you're doing your projects, first plot it. So here's a plot, right, x-axis square feet, y-axis price. And clearly, there's some general upward trajectory trend here. We're going to be more precise about that in the next slide, right? You get bigger houses, they cost more. Maybe as you can think about it, that's not quite true. If it's in a really desirable neighborhood, it costs more, and if it's in a less desirable neighborhood, maybe it cost less. So there are clearly other factors, those are going to be called features in a minute. But this is our first model, OK? So let's look at one other feature. So you can also look at the number of bedrooms, right? So you see here a plot. These are categorical values. That's why I put them in there. I mean, they're kind of continuous in some way. You can still treat them as numbers, so that's fine. And you see there's some spread among three bedrooms and among four bedrooms, and the price is the y-axis, right? OK, awesome, all right, so what would we want here, going back up for a second, what do we want, actually? We want to get a function. What's our hypothesis go from? It goes from lot area, and it predicts price. That's just notation. OK? This is what we're after. So you show me this data, and my goal is to produce some h, OK? Now, as I talked about, there are lots of functions that can take in a lot of areas and return sale prices. It could scramble it. It could do whatever it wanted. It could go look up from an oracle, whatever it wanted to do. There are tons and tons of functions. We're going to look at a simple restricted class of functions in just a second. But I just want to put that in your head. This is actually a pretty hard problem. So we need some representation for h, OK? So how do we represent that h? Now, we're going to look at a class of models, which is called linear, although if you're a stickler, you'll realize right away that they're affine. I'll explain why I allow myself to cheat like that in a second. OK, so here's a model that we could use. OK, x1, OK. so the idea here is you give me the variable, right, x1, which in this case would be the square footage of whatever you have. And then I will multiply it by some theta. And this theta is going to be a weight, we'll call it, or a parameter of the model. And this is how I'm going to form my regression, looks like a line, right? So far, so good, right? Now let me see if I can show you a line. There's a line that does it. OK? That's basically that line through the data that we just looked at, OK? Now, I want to actually come one more second. How does this actually map on to this? Oops, scroll down. Sorry for the bad scrolling. Here, I'm going to go to 0. Remember my h is going to look like x equals theta 0 plus theta 1 x1. Well, what does it look like just so you make sure the picture is clear. This here is theta 0, right? It's where I am. It's the response at 0. And then this gives me the slope, right? This is of slope theta 1. And then when I go to predict, what do I do? I grab a point. Let's grab this one. I project its value onto the x. And this is where I predict its price would be, right? 
This is the price of this one. Does that make sense? All right, awesome. OK, so this looks like a relatively simple model. But if you look at it at this scale, it's not so bad, honestly. There's some kind of linear trend there. There are some errors, or what we call residuals, and in a second we'll try to minimize those errors. But this is our first predictive model, OK? And as I said, it's something that you're hopefully quite familiar with, just in fancier notation for the moment. All right, awesome. So now -- sorry for the skipping -- let's ask: how do we generalize this? So imagine we had our data set. We had x1, x2, and so on, and we have a bunch of features. I'm going to use the features from my notes, but hopefully this doesn't cause you any panic. I have size, I have bedrooms, we have lot size -- and as I mentioned, in the actual data set there are like 80 or 90 of these things -- and I have price. And remember, price is my target. This is my y, and these are my x's. I'm just going to put numbers here -- don't worry about them; I don't know why I wrote these particular values in my notes, but I'll use them for the sake of consistency. So write these in: 45k, 30k, 400, and so on. The thing that I care about is the notation: this is the first data point and this is the second data point, and this is x1 of example one, this is x1 of example two, and this is the second feature, OK? All right, now, I called this a linear model. But if you're a stickler, and you took a bunch of math classes, you're like, no, that's an affine model -- you have this theta 0 intercept term. The way that we get around that is we're going to assume that x0 is identically 1 for every example, OK? So that's just a convention; don't stub your toe on it: xi0 equals 1. And I claim -- you should convince yourself for one second -- that what is linear in this new set of features is exactly my old affine model. I'm just putting a 1 in as an extra feature, OK? That allows me to simplify my notation. So what's the class of models that I'm looking at here? Well, they're linear models, again with that terminology. And they're going to look like theta 0 times x0, which we know is 1, plus theta 1 times x1, plus dot dot dot, plus theta d times xd. And this equals the sum, j goes from 0 to d, of theta j times xj, all right. And remember -- I'm just going to write it again -- x0 equals 1. And NB means note well. All right, now this gives me a very high dimensional problem. Now, high dimensions don't work like low dimensions. I won't go into a whole thing about it, but high dimensions are very fun and interesting spaces. You can build really interesting machine learning models by taking your data, doing what's called embedding it, and then training a linear model on top. And that, in some areas, is actually the state of the art of what we know how to do. So those models have potentially hundreds of features underneath the covers. For us, these features right now are going to be all human interpretable. They're going to come from the table. So when you give me the row for x1, I fill in this value here with 2104, I fill in the x2 value, and so on as I go. I just fill in the values as I go -- that's how I form my prediction. A little bit more notation: if you don't remember, I'm just going to introduce vector notation here. These are column vectors. They're going to look like this.
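A minimal numpy sketch of the convention just described: prepend x0 = 1 so the intercept becomes just another weight, and the prediction is a single dot product. The feature values and the theta values below are made-up placeholders, not numbers from the lecture.

```python
import numpy as np

# One house from the table: size (sq ft), bedrooms, lot size (placeholder values).
raw_features = np.array([2104.0, 3.0, 45000.0])

# The convention from the lecture: prepend x0 = 1 so the intercept theta_0
# folds into the same dot product as the other weights.
x = np.concatenate(([1.0], raw_features))            # shape (d + 1,)

theta = np.array([50_000.0, 120.0, 10_000.0, 0.5])   # theta_0 ... theta_d (made up)

# h_theta(x) = sum_j theta_j * x_j -- an affine model written as a linear one.
prediction = theta @ x
print("predicted price:", prediction)
```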
And this vector notation is just going to save me time and space. OK, x1 is going to be a vector -- oops, sorry about that, I wanted to start the index at 0 -- with components x1 0, x1 1, and so on. And remember, that first component is 1; we've said that many times. And this next one is whatever the value is up there, 2104, OK? In general, this is going to be the size feature, the bedrooms feature, and so on, clear enough? These over here are the parameters, and these are the features. Right, so why be so pedantic about this piece? It's because we're going to use this in several different guises. These parameters are going to mean different things as we change the hypothesis function over time, and we just want to make sure the mapping is clear. So make sure the mapping is super crystal clear in your head of how I take a data point that looks like this and map it into a feature vector that looks like that. That's all I care that you get out of this. And then we have some other quantities: yi is going to be the price in our example. Now, recall we didn't pick this notion by accident. This was a training example. This pair, (xi, yi), is a training example -- the i-th training example, just the i-th one in the set. OK, so far so good? Now, I'm going to create a matrix here, capital X, that's going to have one row for every example xi, so there are n of those rows in my notation. And where does this matrix live? Well, there are n rows, and, recall, because of my convention that I added an extra dimension which I always made 1, there are d plus 1 columns. And I'm just highlighting this and being pedantic because I don't want it to bite you: why d plus 1? Where did it come from? It's the 1. As someone who has taught this course many times, I can tell you someone's going to get bitten by it, so I'll say it many times, OK? It's uncomfortable when it happens. So now I can think about my training data as a matrix, awesome. OK, so now we have a bunch of notation. I have basically bored you to death with 100 different ways to write down your data set. But I haven't answered the question that we actually cared about, which is: how do I find something that's good? How do I find an example of something that's good? All right, so now let's look at this. So why do we think this line is good? You remember this from how you fit it: you think it's good because it makes small errors, right? If the data were all lying right on top of the line, the distance from any point to the line would be 0. And we think the line is pretty good if we can minimize those errors, OK? And this is the error -- this is the residual. Now, for computational and historical reasons, we'll look at the squares of those residuals in just a second. Don't worry too much about that; you can do everything I'm telling you with the absolute value of the residuals. You don't want to use the signed value, because what does a negative error mean? The intuition is that you should pay a penalty whenever you make an error, OK? All right, so let's look at this. We're going to look at our h, and I'm now going to write it as h sub theta of x, equal to the sum, j goes from 0 to d, of theta j times xj, OK? So now, picking a good model is something I can actually make sense of. What do I want? Well, I want h theta of x to be approximately equal to y when x and y are paired, right? If x and y come from a new example -- you show me a new image.
It has a cat or not -- that label may be opaque to me, but it exists. I want my prediction to be close to that y on average. Or for house prices: you give me a new house, I predict its price as closely as possible. I may not get the exact dollar, but I should be penalized a lot if I'm off by $1 million, and not much if I'm off by $10, right? That's the intuition here. So how do I write that down? The idea is I'm going to look at this function J, which we're going to come back to a couple of different times: J of theta equals one half times the sum over my data of the squared difference between my prediction on the i-th element, h theta of xi, and yi. That one half is just normalization: take my prediction on the i-th element, subtract yi, square it, and sum, OK? Now, this is our first example of a cost function. And I wrote it in a really particular way, and I want to come back to why I'm doing it this way. This is also called least squares. You've probably seen this a bunch of times, and that's OK. And if not, don't worry; we'll go through it -- there's nothing mysterious, OK? So let's unpack it. This thing here is the prediction: you give me a point xi, what's my prediction on xi? Some y. And this says it should be close to whatever the training set said yi was. Remember what we're given: we're given xi and yi pairs that go together -- image and cat; house, all of its description, and the price -- and we should be close, OK? We're penalized more for errors that are far away. I could give you a big song and dance about why this is appropriate, and indeed there are lots of statistical song and dances about it. But really, we're doing it because it makes everything I'm going to do easy to compute. You just want something that's sensible: you should be penalized more the more wrong your guess is, roughly speaking, in this example. Now, what does it mean to pick a good model? Well, our model is now determined solely by those theta j's, right? If we knew the theta j's, our model would be completely determined. That was the trick I pulled on you when I said, how are we going to represent our hypothesis? We're going to represent it in this class. That means we've reduced from all the crazy functions you could ever have dreamed up -- any computer program you could ever have written that was functional -- to the class of functions that are represented by these weights. The wild thing is, there are a lot of functions you can represent that way, OK? And we'll see that over the course of the class, especially when you start to get to really high dimensions. OK, cool. So which one am I going to pick? Yeah, please. [Student, roughly:] For this least squares cost function, what's important to us is the minimizer -- the gradient. So why do we need the one half constant? Awesome question -- yeah, very advanced question. So the question is: hey, you wrote this one half there. It seems unnecessary and potentially confusing; why would you pay the cost to do it? And the reason is, when I take the derivative in a minute, it will cancel out and make my life easier. But the other point that you made -- and I love the way you said it -- is exactly right. I wouldn't say we care only about the gradient, but we only care about the minimizer of the loss function. So whether your loss function evaluates to 10 or to 100 doesn't matter; what you care about is which theta minimizes it, and you got that concept exactly right. So I hope that makes sense.
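Here is that least-squares cost written as code, assuming the design-matrix setup from above (first column all 1s). The tiny data set and the two example thetas are made up, just to show the cost shrinking as the fit improves.

```python
import numpy as np

def predict(X, theta):
    """h_theta for every row of the design matrix X (first column is all 1s)."""
    return X @ theta

def cost_J(theta, X, y):
    """Least-squares cost J(theta) = 1/2 * sum_i (h_theta(x_i) - y_i)^2."""
    residuals = predict(X, theta) - y
    return 0.5 * np.sum(residuals ** 2)

# Tiny made-up example: 3 houses, one real feature plus the x0 = 1 column.
X = np.array([[1.0, 2104.0],
              [1.0, 1600.0],
              [1.0, 2400.0]])
y = np.array([400_000.0, 330_000.0, 369_000.0])

print(cost_J(np.array([0.0, 0.0]), X, y))    # a bad theta -> large cost
print(cost_J(np.array([0.0, 180.0]), X, y))  # a better theta -> smaller cost
```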
When we're setting up the cost function in some ways, sometimes we give it an interpretation almost to debug it to understand what it's doing. But really, all we care about is what is the theta when we minimize over all the thetas of j theta, this is what we're solving for, right? So we basically want to solve this j theta. Now, as we'll see in a second, for linear functions, we can do this. For more complicated sets of functions, it's not always clear that there even exists a minimizer that we can reasonably find, right? So there could be these wild functions that take bumps and other things. I'll draw one for you in a minute when we talk about solving it. But for linear functions, what's amazing and why we teach the normal things, you can prove what h theta is in this example. Wonderful point, OK, but that's the central thing. We're going to set up these costs so that we get a model out. We've restricted the class of what we're looking at to something that's relatively small where we can fit the parameters. Then we just have to minimize. OK, awesome, right, this is what I mean by optimization, by the way, is solving this equation. I haven't told you how we're going to solve it yet, but hopefully this is good. Now, just for leading a little bit ahead for and also to kind of stall in case anyone wants to ask a question, what we're eventually going to do is we're going to replace this j with increasingly complicated potentially functions that we're going to look at, one for classification, one for other statistical models. But we're going to do almost everything that comes after this part to all of those models. So once we kind of get it in this form where it's like a prediction and some penalty for how poorly it's doing, we may use different cost functions. Everything that comes next we'll be able to do for all of them. That's why we set up all this kind of elaborate notation for fitting a line, right? It is still, by the way boggles my mind how much machine learning you can do by just fitting lines, just higher and higher dimensional lines. But we can talk about that some other time. OK, awesome. All right, so how are we going to solve this? Now, there are many ways to solve this. If you've taken a linear algebra course, you're like oh, I compute the normal equations, and then I'm done, least squares, or MATLAB or NumPy, and you're like oh, I do least squares solve, or whatever it is, backslash, whatever you want to do. We're going to solve it in a way that sets us up for the rest of machine learning. Because machine learning will deal in functions that aren't quite as nice as linear regression quite a bit. And in fact, the trend has been when I first got into machine learning in antiquity, we were all about what are called convex or bull shaped functions just roughly. And we were really obsessed, were we getting the right theta, right? We're like statisticians. At large scale, can we get the right theta? Is there one individual theta? Modern machine learning doesn't care. We don't even know if we get the right answer. We don't even know how. There was a paper I was reading from DeepMind thisz morning that was like, oh you should run your models longer. No one noticed, right? How do we not know when to run the models longer? We don't. That's the world we live in. So how does this work? So imagine to this is our cost function. OK? Now just as an aside, I want to say the linear function doesn't look like that. So don't think about. The linear function looks nice and bowl shaped, OK? 
The reason that's important, as I was just saying, is that a local minimum -- this is a local minimum, so is this, so is this -- is, roughly speaking, global when you're convex. If that doesn't make sense to you, don't worry about it, OK? We'll come back to convexity later in the course. But I just want to say: don't think of this function I'm drawing here as what happens with least squares. We're just optimizing some J for right now, OK? All right, so how are we going to do it? We're going to use a very simple algorithm. We're going to start with a guess, which is going to be theta 0. How did we pick this guess? Felt good; randomly; set it to some fixed value -- there are entire machine learning papers, by the way -- I've even written some, which I'm not sure if I should be embarrassed or proud of -- that talk about how you initialize various different parts of the model, OK? For us, though, it won't matter for least squares and some of the other models we're studying, because we'll be able to get to the right solution. All right, so now imagine for the moment I found you an initial model. Well, clearly, from looking around -- imagine I am the point and I'm looking -- clearly I can go down from here, right? So the natural greedy heuristic is: compute the gradient. What does the gradient look like here? It looks like this. Oops, I can make this fancier. You see that? Good. I compute the gradient, and then I walk downhill. Sound good? It tells me to go downhill from here, right? Whatever shape I'm on, the gradient will tell me what to do. Now, there are some problems -- just as an aside, what if I were right here? Oh, it doesn't tell me what to do. Don't worry about that; it's a local maximum; I'd be toast. But here, it tells me to go downhill. Now, once I go downhill, how far do I go? Again, feels good -- I pick a value. It's called a step size. So my next value is going to look like this: theta at step t plus 1 is defined to be theta at step t, minus alpha times the gradient with respect to theta of J, evaluated at theta t. Now, my notation is a little bit weird here; imagine it's one-dimensional for the second. Compute the gradient, go in the opposite direction -- that's all that's going on. This alpha here is called a learning rate. Embarrassingly, I think I have won awards for papers that are about learning rates, but they are not set very rigorously, so you just kind of pick a value. For deep learning, people now have all kinds of what they call adaptive optimizers -- look in the literature -- for how to set these values for you. You don't want to set it too big or too small. There is a theory about how to do it for linear things, but don't worry; for you, you just kind of pick a value. Just imagine what could go wrong. What happens if you pick it too big? Well, then you kind of shoot off over here, right? If you pick it too small, then you make little bumps like this and don't make enough progress. It's not too hard to think about what should happen here. And then what happens? Well, I get a new point. This is my theta 1. And as suggestively done here, I iterate: I compute the gradient, and I bounce down, and then hopefully I get closer. Please. What's the denominator? Oh, sorry -- that is just my notation for the gradient with respect to theta. This is a partial derivative with respect to theta. So imagine it's one-dimensional, and I'm just setting up for the fact that I'm going to use multiple dimensions. It's literally just the gradient with respect to theta -- the derivative, in this case.
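A minimal sketch of that update rule, theta_{t+1} = theta_t - alpha * dJ/dtheta, run on a toy one-dimensional bowl. The function, starting point, learning rate, and stopping tolerance are all arbitrary choices for illustration, not values from the lecture.

```python
def J(theta):
    return (theta - 3.0) ** 2          # bowl-shaped toy cost, minimum at theta = 3

def dJ(theta):
    return 2.0 * (theta - 3.0)         # its derivative

theta = 10.0      # initial guess -- "felt good"
alpha = 0.1       # learning rate: too big overshoots, too small crawls

for t in range(1000):
    step = alpha * dJ(theta)
    theta = theta - step               # walk downhill
    if abs(step) < 1e-6:               # stop when the update becomes tiny
        break

print(t, theta)   # lands very close to the minimizer theta = 3
```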
Now, what I'll do is compute that derivative of J for every component j from 0 to d, and that gives me my high-dimensional rule. OK. Please. [Student asks whether J is being treated per data point.] Yeah, so right now I've just shown J as an abstract function -- I haven't decomposed it as a sum. That's a great point. Let's come back in one minute to exactly what happens when we have data points; it's going to be my next line. Other questions? Is it clear? So I did actually a fair amount of work there and tricked you, just so you're clear: I went from one dimension to d plus 1 dimensions by just changing the subindex and doing each component by itself, so make sure that sits OK with you, right? Please. [Student asks how to understand the gradient on the graph.] Yeah, so how can we understand it on a graph? What do you mean by on a graph -- like on this graph in particular? Awesome. Yeah, so the one-dimensional case carries what you need. You're in a particular basis, right? Meaning you have theta 1, theta 2. So imagine I'm standing in two-dimensional space. I can look down one axis, and then I have a one-dimensional function, and I have a gradient there; that gives me the vector in this direction. Then imagine I turn 90 degrees, orthogonally, and look down the other axis. I get another one-dimensional function, and I compute its gradient. The gradient is actually all those vectors put together, component by component. But that's exactly right. Yeah, you're asking exactly the right questions. So just picture it as the tangent to the curve, if that helps you in high dimensions; if not, don't. Cool. Wonderful questions. OK, so what do I hope that you understand? Here's some rule, and you have the intuition that what it's going to do is bounce slowly downhill. OK. Now, if you start to think about high dimensions -- and I think this is why the question came up -- it starts to get a little weird. What does it mean in high dimensions? You can imagine something that looks like a saddle, if you know what a saddle is. Then you're like, oh gosh, what's going to happen when I get to the top of the saddle? Clearly I can go off the sides and get a little bit smaller -- that would be good. Maybe it goes down and stops. But I can get stuck on the top of the saddle, too. And, weirdly enough, it's called a saddle point. Don't worry, OK? Sound good? All right. We're not worrying about convergence. Notice this algorithm has a very clear failure mode. Here, we found what looks like the global minimum. But what if we started here? We'd go bounce, bounce, bounce, and we'd find this one instead. Now, how do you stop this algorithm? You stop the algorithm when this update becomes too small, OK? And you set that tolerance -- maybe you set it to something like 10 to the minus 6, or all the way down near machine precision, or you set it looser if you want a quick solution. But the point is, no matter what you do, you're going to get stuck here with a descent method, because it's going to go downhill and get stuck here, and you're going to miss this much better solution. That won't happen for linear regression. We won't talk about why at this exact moment; we can prove it in a little bit. But for things that are bowl shaped, every local minimum is a global minimum, and then we're in good shape. That's why we cared so much about these things years ago. We care about them occasionally now, less than we used to. OK. All right, so let's compute some of those derivatives.
Getting back to the question asked earlier -- hey, what does this mean for a sum? OK. All right. So remember our J had a very particular form. We're going to compute the partial derivative with respect to theta sub j of J of theta. OK, so this is the derivative here -- whoops -- the derivative with respect to the j-th component. Now, we take the sum, i goes from 1 to n. I'm going to put the 1/2 inside, because I can. And then this derivative is linear, and we'll come back to what that means in one second. I just did a little bit of work here, not much: I rewrote the definition of J, which is this sum, and then I took the partial derivative and pushed it inside, because differentiation is linear -- and we should know that gradients are linear. OK. Now, when I do that, I get something actually fairly intuitive, and this makes my heart sing: the sum over i of, h theta of xi minus yi, times the partial derivative with respect to theta j of h theta of xi. I canceled the 2 with the one half -- that was the cooking-show preparation from before, and that trick is standard, by the way. Now look what I have here, which is kind of nice. This first thing is basically the error, but it's signed: it tells me which way I'm making a mistake, too high or too low, right? That's all that thing is -- the misprediction, or the error. And then I have the derivative with respect to the underlying function class. Now, why did I bother to write it out this way? Clearly I could have skipped a step and jumped right to the end. But this form is going to be general for almost all the models we care about. That's why I did this, OK? So what is it in this specific situation? Recall h theta of x was equal to theta 0 x0 plus theta 1 x1 plus theta 2 x2 plus dot dot dot. Computing the derivative of this is pretty easy: the partial derivative with respect to theta j of h theta of x is just xj. Right? Please. [A student points out that the x on the right of the second line is missing its superscript.] Oh, this should have a superscript -- I'm so sorry, great catch. This is at that data point. Wonderful catch, thank you. [Another student asks, roughly, whether this generalizes when h is some other kind of function.] It could be whatever you want. All I care about is that this is the error times the derivative with respect to that underlying model. This is a very basic version of what looks like a kind of chain rule, and we're going to use that like nobody's business. So if you didn't know the chain rule before this class, you will definitely know it by the end, because we use it non-stop. But yeah, this is just setup for that. That's why it's generalizable: it's the error -- which is totally general for any model that has to do with prediction -- times how you compute the derivative, the change of the underlying model. We'll be able to generalize that, and in this case it's just xj. All right? So now, getting back to this, what is our whole rule? It looks like this: theta j at step t plus 1 equals theta j at step t, minus alpha times the sum over all the data of, h theta of xi minus yi, times xi j. Answering the earlier question: at this point, we're doing what's called batch gradient descent, which we'll come back to in one second. Now, notice I'm going to try to do some highlighting here. I hope this is OK for people to see, and I apologize if you're colorblind and this doesn't help you too much. But these j's are the same -- hopefully these are distinguishable colors -- and then the i's are the other index that's going on, and those index the data points themselves. OK?
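Putting those pieces together -- the "error times feature" gradient and the batch update -- here is a sketch on synthetic data. It also anticipates the vectorized form discussed next (X transpose times the residuals), and the lstsq call at the end is only a sanity check against the closed-form least-squares solution; the data, learning rate, and iteration count are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y is roughly 4 + 2*x plus noise, with an x0 = 1 column.
n = 200
x1 = rng.uniform(0, 10, size=n)
X = np.column_stack([np.ones(n), x1])                 # shape (n, d + 1)
y = 4.0 + 2.0 * x1 + rng.normal(scale=0.5, size=n)

def grad_J(theta, X, y):
    """dJ/dtheta_j = sum_i (h_theta(x^i) - y^i) * x^i_j, for all j at once.
    The signed error times the feature, vectorized as X^T (X theta - y)."""
    errors = X @ theta - y
    return X.T @ errors

theta = np.zeros(2)          # initial guess
alpha = 1e-4                 # small constant learning rate (arbitrary)

for _ in range(5000):        # batch gradient descent: full data every step
    theta = theta - alpha * grad_J(theta, X, y)

print("gradient descent:", theta)

# Sanity check: linear least squares also has a closed-form answer
# (the normal equations); numpy's lstsq solves it directly.
theta_closed, *_ = np.linalg.lstsq(X, y, rcond=None)
print("closed form:    ", theta_closed)
```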
So I look at every data point, and I'm doing the j-th component of each one. Right? Now by the magic of vector notation, here's what I can do. I just write this as this. h theta xi. This doesn't change. This is a vector equation. OK. OK. So this is basically looping over all the j indices at once. OK. If you're unfamiliar with vector notations, one of the reasons I'm doing this quickly is I will do it secondhand throughout the course. It's not deep. It's not like it requires a lot of stuff. Just requires a little bit of reps. Kind of repeat on them. Please. [INAUDIBLE] It's the same rate for every theta [INAUDIBLE] Wonderful question. So alpha u will typically set for an iteration, right? When you take a step, typically you can change it across steps. So one thing is here I've said alpha does not depend on t, the iteration step. But in general, it usually does. You usually decay the learning rate over time. So that's just what's done in practice. And that's done for really good things. What you don't typically do is have alpha depend on the data points itself, because then it's almost functioning like a free parameter, at least in classical machine learning. But in both optimizers, one of which was invented by our own John Duchi and other folks, you actually do change the alphas for every different coordinate, which was I think his first paper was out of grad and then out of delta. So people do things like that that are a little bit more sophisticated. And why they do those, I'm happy to explain offline. But right now, just think of alpha as a constant, like it's small enough that it's not going to lead you too far astray. Like if it were too big, you'd jump too far. And maybe you could do a little bit better. But maybe not too much. In fact, there's a very basic rule, which is called gradient descent rule, is actually very widely used. Very, very widely used. With just one alpha. Wonderful question. And those are the right questions to ask. Like, how does this parameter depend on what's around it? Start thinking like that as you go through the course. That's really, really helpful to understand. OK. So far, so good. So at this point, we know how to fit a line. Which doesn't feel like a huge accomplishment maybe, but I think it's pretty cool. And we fit it in this obfuscated general way that's going to allow us to do more models I claim, but I'll verify that in two classes. This vector equation here is just showing you like all the things that we computed. This is specific to the earlier point to the line, right? This gradient here is this guy. Those are the same. That's why this model popped out, OK? Awesome. And we'll come back to that in a minute. OK. Now, a topic that is practically quite important for machine learning is, and it was hinted at earlier, is-- and I'll copy this equation-- is, what do we do in practice? So one thing that we may not like about this equation is this thing is huge. In modern machine learning, we'll often look at data sets that have millions or billions of points, right? Well, it's not uncommon to run models where you're like, every sentence that has been emitted on the web in the last 10 years is a training example. Or every token, right, every word. And it would be just enormous right at that point. It'd just be a huge thing. So even doing one pass over your data is potentially too much. OK, now that's a really extreme and crazy version of that. That's a really extreme and crazy version of that. 
But you can also imagine situations where you're looking at hundreds or thousands of images, and you potentially want to look at fewer. So we'll come to how we do that in a second. Sorry, yeah? Is superscript above the first data set? Oh, it's t and t plus 1. These are the steps. Remember we started at theta 0 superscript. And then we moved from 1 to 2 to 3 to 4. And so this is just the recursive rule that takes you from theta t to theta t plus 1. So theta t is just whatever current theta we're on. Exactly. So you just imagine it as a-- it's a recursive way to specify we're at particular t. And here's how we evolve to t plus 1. Exactly right. You got it perfectly. [INAUDIBLE] Exactly right. So theta t, when we go back to here-- oops, sorry. I hope that's not dizzying. I wish there were a way to skip without making you sick. Is this vector. It's just a particular instantiation of those vectors, one for every of the d plus 1 components. Yeah, please. [INAUDIBLE] Yeah. So we will take steps, as I said, until we converge typically. Or we can take a fixed number of steps. I'm eliding that because for this particular problem, I can kind of give you a rule of thumb. I can point you at a paper that tells you how to set alpha. In general for machine learning, as I was kind of very obliquely referring to, we don't actually know how to tell that we've converged. And part of the reason is if you knew your model was this nice bowl shape, then you can actually prove that the closer you get to the optimum, the smaller your gradient is getting. And you can predict kind of how far away you're going to be. For a nice class of functions. For nastier functions and the ones that we're going to care about more, you can't do that. So it doesn't make sense to say that you found the right answer. And so I don't emphasize that. For these models, I can give you a beautiful story. Happy to type it up online and tell you. But in general for machine learning, honestly we just run it till it feels good. Like, oh, the curve stopped. It stopped getting better. And that was this DeepMind paper that said, hey, for these really large 280 billion parameter models. So their theta has 280 billion parameters in it. They're like, we didn't run it long enough. If we kept running it and it was better. And everyone who works in machine learning for long enough in the last five years has a situation where they forgot they were training a model. Hopefully you're not paying for it on AWS or GCP or something. And then you come back a week later, and it's doing better than you thought. And that is a very strange situation. So I don't have a great rule for this. For your projects, it will be clearer. I'm telling you the real stuff, though. Awesome. Please. This equation [INAUDIBLE] So we will only use it in the forward direction of going t to t plus 1. But you could imagine that it's reversible if you wanted. [INAUDIBLE] Oh, wonderful question. Yeah, yeah. So in the sense that if you shoot past-- let's go back here. So if you're here and you shoot past-- your step is kind of too big for the gradient, you kind of trust it too much, then the next iteration, the gradient will point in this direction, right. And so you'll step back. So it will actually have this ping pong. You actually want that to happen. It turns out the optimal rate-- I mean, I can bore you with this for days-- the optimal rate is actually when you're doing that skipping, for whatever reason. Yeah. But it's more intuitive for people to roll down the hill. Yeah. 
Wonderful point. You got it exactly right. Please. So is it possible for the update to be 0 even if h theta of xi is not necessarily yi three times? Yeah. So it's not possible for it to be exactly 0 everywhere. But it's possible to have gradients that are not giving you any information. Yeah, wonderful question. Absolutely wonderful question. And it's because it's a linear system. Right, so it's not full rank for the linear algebra nerds. Yeah. Wonderful question. Please. So let's say you have this functional thing, right. But you flip it so theta 0 is equal to 0, but on the other side. Would you only get the local minimum over there and not the actual-- Exactly right. Yeah. And that's what I'm saying. We used to worry about that quite a bit. Now we just say it's good. I wish I could tell you something better than that. But we'll get into why that's true. But yeah, when your function is in a good class-- and good here formally means convex and bounded in some way-- then you will provably get to the right solution. We'll talk about those conditions later. The reason I de-emphasize them now is because modern machine learning actually works on functions that look like this, not on the other class of functions. And so that's less important for students. And then you would rightly say, you told me all this stuff. I memorized all these conditions, and then I got into the workforce. I'm like, none of them worked and no one uses them. Like, yeah, that's true. And you're exactly right. And so people worry about initialization. Where do you start so that you are guaranteed to get a good model. In fact, there are a couple of awesome theory results. I'll take one from my group, one from Tengyu's, that said for certain class of these nasty non-convex models, if you initialize in a particular way, you would be guaranteed to get the right answer. Actually, I'll show you one in week 11, a simple version of that. Where if you initialize cleverly, there's not a unique answer, but you'll get the right one every time. Or sorry, class 11, not week 11. Yeah. [INAUDIBLE] Yeah, people will try random initialization. The problem is the trend is for models to be really expensive. So you run huge models. So any one run could cost a couple million dollars. I was looking at Amazon's GPT-3 service. I think it costs $6 million a month to run. So do you want to try to run it multiple times? If you got money, go ahead. But you want to try and do other tricks. People used to do a lot more random restarting. Now it's really sad to say this is the state, but we've evolved folksonomies. If you train these models, you know, what are the right parameters and what is everybody else using? And not everyone tries and explores everything, let alone how long you tune it, what optimizers you use. We all use the same stuff. But we don't have great formal justification for it. Maybe I'm exposing too much. It's not as bad as it sounds. There actually are principles in this area. I'm just telling you the plates that are broken, because they're more interesting to me. Yeah. [INAUDIBLE] Oh, wonderful question. We're going to come back to that. So the solution is, do I want to-- there's a phenomenon that a lot of people know about in machine learning, which is, if I take my model and I exactly fit my training data, maybe it won't generalize well. It'll fit to some error or some noise in the data, and this is roughly overfitting. We cover that in lecture 10. 
In lecture 10, at least when I taught it last, I also talked about something from modern machine learning: we've realized that sometimes that concern is actually overstated for some models. And there's a wonderful paper by Misha Belkin that showed you can interpolate -- perfectly fit your data -- and still generalize optimally for some classes of models. So that tradeoff isn't as clear for modern models as it was for old models. Maybe I should stop telling you about this stuff. But yes, in general, overfitting is a problem: you can overfit a model and believe your training data too much. Yeah. But this area is fascinating -- I could obviously rant about it for weeks. Wonderful questions. This is absolutely great. OK, so what do I want to tell you? I don't want to tell you about the normal equations -- I thought that was pretty clear from the beginning -- so read about those. If you want, I'll type up notes; Andrew's notes are great on this point. But I do want to spend my last couple of minutes on batch versus stochastic, or mini-batch, gradient descent, because it actually is relevant and useful. OK. So when we last left off, we were looking at this equation, and we noticed a problem: n is really big. And, as I hopefully told you, n is really big and so is d -- the number of parameters is really big. So this is expensive. I wouldn't want to look at all of my training data before I took my first step, because my initial guess is probably not that good -- that's why I'm training a model. If randomly initializing the model, which is something people do try, gave me good predictions, I'd just use that. So obviously I want to be able to take steps as cheaply as I can. So here's what I'll do: I'll use mini-batches. So what does mini-batching do? I won't get too formal, but basically what I'll do is select some set of indices B, let's say at random. OK, I'm being vague here about what random means -- I wrote a bunch of papers about this. You can either randomly select them, or you can shuffle the order; and in conventional machine learning, we shuffle the order, for a variety of reasons. And then I pick B items, where the batch size B is going to be much smaller than n, OK? All right. And then I update -- I'm going to write it a little bit strangely; I apologize for this notation, it's better in my notes, but I want to write it this way because it's easier to say: theta gets theta minus alpha times the sum over the examples i in the batch B of, h theta of xi minus yi, times xi. It makes it more clear what's going on, I hope. OK, what's going on? I select a bunch of indexes, B, and then I just compute my estimate of the gradient over those. I could even pick B to be size 1 -- just pick a single point, as someone was alluding to earlier, and take a step. Now, what are the obvious tradeoffs here? On one hand, if I pick a single element, that step is really fast -- super fast to compute relative to looking at the entire data set. But it's going to be noisy; it has low quality. I may not have enough information to step in the direction I want to go. On the other hand, if I look at the whole data set, it's going to be super accurate about what the gradient is -- in fact, I'll compute it exactly, up to numerical issues -- but it's super slow. Now, what people do is they tend to pick batches that are on the smaller side, and you pick them as big as you can tolerate. And I won't go into the underlying hardware reasons for this; happy to answer questions about it.
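A minimal mini-batch SGD sketch matching the description above: shuffle, walk through batches of size B, and update on each noisy gradient estimate. The synthetic data, batch size, learning rate, and epoch count are all arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same kind of synthetic linear-regression data as before.
n = 10_000
x1 = rng.uniform(0, 10, size=n)
X = np.column_stack([np.ones(n), x1])
y = 4.0 + 2.0 * x1 + rng.normal(scale=0.5, size=n)

theta = np.zeros(2)
alpha = 0.01        # learning rate (arbitrary choice)
B = 32              # mini-batch size, much smaller than n

for epoch in range(10):
    order = rng.permutation(n)                 # shuffle, then walk through
    for start in range(0, n, B):
        idx = order[start:start + B]
        Xb, yb = X[idx], y[idx]
        # Noisy gradient estimate from just this mini-batch.
        grad = Xb.T @ (Xb @ theta - yb) / len(idx)
        theta = theta - alpha * grad

print(theta)   # should end up close to [4, 2], give or take some SGD noise
```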
But basically you pick batches that are kind of as many as you can get for free. Modern hardware works kind of in parallel. So you'll grab and look at-- expensive than looking at one, OK, on a modern platform. Now I'm using these noisy proxies. And you may think, am I still guaranteed to converge? And then the answer is effectively yes. And under really, really harsh conditions. In fact, I'm very proud of something that my first PhD student and collaborators, that Ben Recht and Steve Wright wrote about, this paper called Hogwild!, which is very stupid, has an exclamation point. But also got a 10 year Test of Time Award for saying that you can basically run these things in the craziest possible ways, and they still converge. These stochastic sampling regimes. OK, I won't go into details about that. My point is this thing is actually fairly robust. This take a bunch of error estimates and step them. And in fact, almost all modern machine learning is geared towards what's called mini batching. If you download PyTorch or JAX or TensorFlow or whatever you're using, odds are it has native support to give you mini batches. OK. And that is basically just taking an-- oops, taking an estimate of this piece here, and using that noisy estimate. And why might that make sense? Well, imagine your data set contains a bunch of near copies. If your data set contained all copies, then you would just be reading the same example and getting no information, right? If instead you were sampling that same example, you would go potentially unboundedly faster. And if you think about what we're looking at, when I told you images, like the images on my phone for my daughter, there are a lot of pictures of my daughters. A lot, OK? I'm a regular dad. I take lots of pictures. So that means there's a lot of density. And so machine learning operates in these regimes where you have huge, dense, repeated amounts of data. OK? All right. So this is going to come back. We're going to see this next time. We're going to see it in particular when we start to look at various different loss functions. We're going to generalize how we do prediction to classification next time. And then to a huge class of statistical models called exponential family models. To go back to the top. I skipped just to make sure you know what's here and what I skipped. We went through the basic definitions. We saw how to fit a line. We went through batch and stochastic gradient descent of how to solve the underlying model. We set up a bunch of notation. This is going to be one of the dryer classes where I'm just writing out all the bits of notation. And we saw how to solve them. Those will all carry over to our next brand of models. The normal equations, if you run into problems, blame me. I'm happy to take a look through them. They're relatively straightforward and the notes are pretty good. But I'll look at Ed if you run into any problems there, and happy to answer questions. With that, thank you so much time for your time and attention, and I hope to see some of you on Monday.
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_MultiTask_Learning_Basics_I_2022_I_Lecture_2.txt
So the plan for today as I mentioned in the previous lecture, we're going to try to start with the basics. And so this means that today we'll be talking about methods that were starting to be developed in the ancient times of the 1990s. And then starting I think next week, we'll get into more newer stuff. That said, even today, we'll start to talk about a case study of a real world problem that's quite relevant today, and is from a very modern research paper. And by the end of the lecture, the goals is to really try to convey what multi-task learning is, and what the key design choices are when trying to build these multitask systems in practice. Awesome. So we'll start off with some notation. So we'll start off with something like a neural network. It could be a convolutional network like the one shown here or some other neural network. And we'll denote the input as x, and the label or output is y. For example, our input might be an image like an image of a tiger here, and the label might be a classification of that tiger, it might be an image classification problem, or maybe something more interesting like trying to classify what to do if you see something like this. And alternatively, instead of being an image, it could be something like the title of a paper. In this case, you probably wouldn't use a convolutional network, you would probably use something like a transformer if you're trendy, or if you're a bit older, maybe an LSDM. And the label could be something like the length of the paper. And so in this case, if it was linked to the paper this might be more of a regression problem rather than a discrete classification problem. Now, we'll refer to the parameters of the neural network typically with theta, and so this will correspond to all of the parameters of that neural network. You can think of it as a vector that basically flattens each of these weight matrices and upends them into a single very large vector. This may have millions of values in this vector. And we can then refer to the function represented by this neural network as f, which will give us a distribution over y given the input x parameterized by theta. So this should follow fairly standard notation that you may have seen before. Now in single-task supervised learning, we will be given some form of data set, which has input output pairs. So a number of examples of images like the tiger and labels like the options on the right. And then we will define a loss function that tells us how good is that model at performing that task. And our goal will be to find the parameters that minimize that loss function. So try to find the parameter setting of one of these neural networks such that we do well on a classification problem for example. And so a typical form of this loss function might be something like negative log likelihood. This would look something like this, where we're measuring the likelihood that f assigns to a given label given x. And then negating that because typically loss functions are things that we minimize. And then trying to minimize the negative probability of the label given the input. And this kind of log likelihood loss, this is equivalent to something like a cross-entropy loss or a mean squared error loss that you may have seen in other machine learning courses. So that should mostly be review for folks. Now, how do we go from single-task learning to multi-task learning? For that, we need to figure out actually, what is a task? What does it mean to be a task? 
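Before moving on to what a task is, here is the single-task negative log-likelihood objective from above as a small numpy sketch. The logits stand in for the network's outputs f_theta(y | x); the batch of scores and labels is made up, and this is equivalent to the usual cross-entropy loss.

```python
import numpy as np

def nll_loss(logits, labels):
    """Negative log likelihood: -(1/N) * sum_i log f_theta(y_i | x_i).

    `logits` stands in for the network's raw scores, one row per example;
    `labels` are the integer class indices for those examples.
    """
    # Softmax turns scores into a distribution over classes p(y | x).
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Probability assigned to the correct label for each example.
    correct = probs[np.arange(len(labels)), labels]
    return -np.mean(np.log(correct))

# Toy batch: 3 examples, 4 classes; the numbers are arbitrary.
logits = np.array([[2.0, 0.1, -1.0, 0.3],
                   [0.2, 1.5,  0.0, 0.1],
                   [0.0, 0.0,  3.0, 0.5]])
labels = np.array([0, 1, 2])
print(nll_loss(logits, labels))   # small, since the right classes score highest
```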
We defined what a task is in the previous lecture, but we're going to cover a task more formally this time. And in particular in the context of this course, we'll think about a task as a set of three things, a distribution over x, a distribution over y given x, and a loss function. And so you can think of these as the distribution that generates the data. And the reason why that we want to define it something like this is that we want to be able to say whether or not the network is doing well at a task. And doing well at a task is a little bit different than doing well on a data set. And in particular, you could have something that does really, really well in a training data set, but doesn't actually generalize to other examples for that task. And therefore by defining a task as the data generating distributions, and the corresponding loss function for that task, we can then capture notions of how well it's actually doing on that task in general. So we can have corresponding data sets that are sampled from these underlying distributions like the training set and the test set. And note that here I'm using i to like index into the task. So for task i, it has these two distributions and this loss function, and it has data sets that are sampled from these distributions. In practice, you won't have any sort of access to these ground truth data generating distributions. You will generally only have access to the corresponding data sets. And then also in the future slides, I'll generally use di as shorthand for di train just because typically we will refer to the training data set a lot, and it can be convenient to drop off the tr. OK. So that is how we'll define a task. And now let's look at some examples of different multi-task learning problems. So generally, a multi-task learning problem is one where our goal isn't just to solve one task, but to solve a set of tasks. And the tasks could vary in different ways. So in something like multi-task classification, the loss function will probably be the same across all the tasks. It will probably just be the cross-entropy loss. And for example, this could be something where we want to be able to recognize handwriting in different languages. So each language will correspond to a different task. And you could have a data set that looks like. This is actually the Omni glide data set, and you have different alphabets or different languages. And so in this case, you're going to have a different p of x distribution, a different p of y given x distribution because the characters themselves will look different, and also the label given those characters will be different for different languages, but the underlying loss function will be the same because it's all still a classification problem. As another example of this kind of multi-task classification problem, you could also have a personalized spam filter where different tasks correspond to different people. And different people will receive different kinds of spam. And they will also have different preferences for what is spam and what's not spam. And so it will also in this case have a different distribution over x, and a different distribution over y given x, but yet again the same loss function. So that's multi-task classification. We could also consider a scenario where both p of x and the loss function are the same across these tasks, and the only thing that differs is y given x. 
In a case like this, you could think of face attribute recognition as an example of this where one task is maybe to detect if someone has black hair or brown hair or blond hair or white hair. And a different task is to predict their eye color. In this case, all of the images are the same, you just have different distributions over labels. Yeah? Would be imaging task qualify for multi-label classes base on the different normal distribution or it's not possible? So the question was, can image not be considered multi-label learning? And I guess the thing that really differentiates multi-label learning here and something like image classification is at least the way that image classification is framed. There's only one label that is correct. And the thing that's different here is that you actually have a different set of labels. And so for example, someone can have both brown hair and brown eyes or they can have blonde hair and brown eyes, for example. And so you actually have different sets of labels itself. And that's like the key differentiating factor. And so in general, it's something like image classification would be considered a single task problem. That said, you can also frame things like image net as multi-label problems because oftentimes there are actually more than one thing in an image, and you may actually want to classify all the things in the image rather than just one thing. Another example of multi-label learning is something like scene understanding where you have images of lots of different 3D scenes. And one task is to predict the depth, one task is to predict key points in the image, and another task is to predict the surface normals in that scene. And so this is, again, an example where actually all the images in the data set are the same. The only thing that's different is the different labels. So these are a couple instantiation of multi-task learning problems. There's also scenarios where the loss function might vary as well. So in both of these examples, these are settings where the loss function isn't changing. But you could also have scenarios where, for example, some of your labels are continuous and some of your labels are discrete. And you use a mean squared error loss function for the continuous and a cross-entropy loss function for the discrete labels, or you might have multiple metrics that you care about, and you want to optimize those objectives simultaneously. Cool. So I'll pause here. Is there any questions on the setup before we get into actually solving these multi-task problems? Yeah? So from last time, you said the tasks should share some structure. So does structure mean like they should share either p of x or p of y to the next part, the loss function? Like if they are different, that means they don't share any solution? Yeah. That's a great question. So the question was, last lecture, we were talking about how the task should share some structure. Does that mean that they should share the loss function or share one of these three things? And what I meant by a structure in that first lecture is something a little bit more abstract. So these are three very concrete things. And you can actually have tasks that differ in all three of these things, but still have a lot of common structure. And so I guess structure is something that's a little bit more abstract. We can think of-- we'll come back to a little bit more to what structure is when we get to some of the Bayesian perspective on it. 
But as one example, you could imagine the per-language handwriting recognition tasks. Those tasks intuitively have a lot of shared structure, in the sense that recognizing characters is implicitly about recognizing the shape of the handwriting. But you could also have a version of this with different loss functions-- maybe in one language some of the characters are actually numerical values, so one of the loss functions is more continuous-- and in that case you may have different loss functions while still having a lot of shared structure. Cool. So now let's get into actually learning networks that solve multi-task learning problems. The first thing we generally need to do is tell the neural network what the task is, and we'll do this with what I'll call a task descriptor, which we'll denote zi, and pass it into the network in some way or another. So the function will no longer be modeling y given x, but y given x and zi. Let's try to better understand what this task descriptor might be. Say you're a very diligent grad student, you were assigned a bunch of papers to review, and you want to understand how long it's going to take you to review them. One of your tasks might be to take the title of a paper and predict the length of the paper. A second task might be to get an initial summary of the paper before reviewing it, so the second task is to predict a summary of the paper. And maybe for the third task you're getting a little too lazy, and you just want it to write the paper review for you. So we have three tasks here, and the task descriptor could be a few different things. The first option is just a one-hot encoding of the task index. A one-hot vector encodes an integer in vector form: the integer 1 is denoted by a 1 in the first position and 0s in the other two, the integer 2 by a 1 in the second position, and for the third task the descriptor has a 1 in the third position. That's the simplest possible way to tell the network what the task is-- it just says the task is the first one, the second one, or the third one. But we can also do something a little more creative. We could, for example, give it a language description of what we want it to do: give me a summary, tell me the length of the paper, or give me a review. So z could be a natural-language string describing the task. More generally, z could be whatever metadata you have about the tasks. Not really in this example, but if you have a setting where different tasks are different people, it could be attributes of those users. If you have a natural description of what the task is, you could condition on that, and in some domains you may have a more formal specification of the task that you could pass into the network as well. Cool.
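As a small illustration of these two descriptor choices, here is a sketch in Python; the task index and the wording of the language description are made-up values for illustration, not something from the lecture:

# A minimal sketch of the two task-descriptor options described above.
import torch
import torch.nn.functional as F

num_tasks = 3
task_index = 1  # hypothetical: the "predict a summary" task

# Option 1: one-hot task descriptor, e.g. [0., 1., 0.]
z_onehot = F.one_hot(torch.tensor(task_index), num_classes=num_tasks).float()

# Option 2: a natural-language description of the task (this would be encoded
# by a text encoder before being passed into the network)
z_language = "give me a summary of the paper"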
So now that we've told the network what the task is, we can also formulate the objective. The basic way to formulate the objective is what we covered in the previous lecture: we just sum up the loss functions for each of the tasks. Li here computes the loss of the network on the training data set for task i; we sum over all T tasks and minimize that sum over the parameters of our neural network. Yeah? The task descriptor you're mentioning-- is that basically the same as prompting the model, like in recent work? Yeah, you can think of the task descriptor as a form of prompt. You typically wouldn't think of a one-hot vector as a prompt, but it basically can be a prompt, or it can be something more basic. Yeah? Is Di the same across all the tasks? So Di is a set of x, y pairs-- what exactly are you asking? Whether the Di are the same across all the tasks? In general you'll have different data sets for different tasks. As we talked about before, each task is defined by its own data-generating distribution, and the training data set is drawn from that distribution. You may have cases where p of x is the same across tasks, and in those cases all of the x's in your data sets may be identical, but even then the y's will be different for different tasks. In general, these training data sets will be different across tasks. Does that answer your question? You said earlier that the data might be the same across all the tasks? I mentioned that, for example, the images might all be the same across tasks: in the scene understanding example, the images are potentially identical across tasks, but the labels are different-- the labels for the depth task are depth annotations, the labels for the surface normal task are normal maps, and so forth. Yeah? You mentioned a task descriptor, but there are often data sets where you don't get a task description. What happens then? Yeah, so you might have a data set without a task descriptor. In that case you can just go with the basic one-hot encoding, assuming you at least have some way of differentiating between the tasks. It's also possible that in some cases you can tell what the task is just from the input: if p of x is different across tasks, you might be able to tell without any descriptor. But in general in multi-task learning, we'll assume we at least have separate data sets for each task. Yeah? Where does the task descriptor get inserted into the network-- is it combined with the original x, or passed in somewhere else? Yeah, we'll talk about that next. Cool. So I think we can just transition into the next part.
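For reference, the vanilla multi-task objective just described can be written out as follows; this is a reconstruction from the verbal description above, with D_i^tr denoting task i's training set:

\min_{\theta}\ \sum_{i=1}^{T} \mathcal{L}_i\big(\theta,\ \mathcal{D}_i^{\mathrm{tr}}\big)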
So this is the basic setup, but there are still a lot of design choices to make: we need to design the model, the objective, and the optimization process. Like what was just asked, we need to figure out how to actually pass zi into the network. We also need to figure out whether we should use this vanilla objective or something a little more sophisticated, and how to go about optimizing that objective. We'll talk about each of these design choices one by one, starting with the model. So we'll start by thinking about how the model can be conditioned on zi, and which parameters of the model should be shared across tasks versus kept separate. Cool. Now let's assume zi is a one-hot task index, like we talked about before. I have a question for all of you: how should we condition the network on this one-hot vector in a way that shares as little as possible? What I mean is that we want as few of the parameters as possible to be shared across tasks-- we want to get as close as possible to training completely separate neural networks on each task. Does anyone have any thoughts? Yeah? Could you explain the question a little more? Yeah. The question is, if we assume zi is just a one-hot vector for each task, how should we condition the network on zi such that the network shares as little as possible-- such that we're as close as possible to training completely independent neural networks? Yeah? I guess we could just have a switch statement over the different conditions: if the one-hot index is 1, use a separate network from the ones for 2 and 3. Yeah, exactly. So as was suggested, you can basically have T different neural networks and have z act as a switch statement that modulates which of those networks you use to make a prediction. Formally, if z is a one-hot vector, say y1 is the output of the first network, y2 is the output of the second network, and so forth, all with separate parameters; then you compute the final output by switching between them: if the task is the first task, output y1; if it's the second task, output y2; and so on. What you get as a result is that this is still technically a single neural network, but inside it you have completely independently trained networks. Was there a question? No. OK. So essentially, this corresponds to having no shared parameters across the tasks. Yeah? But doesn't that mean that to get the best performance across all the tasks, we'd want to share as many parameters as possible, so the complexity goes down? Yes, exactly. So in general, this is not a great way to do multi-task learning; it's one extreme. At the other extreme, we could share a lot more-- basically everything. What we could do instead is have a single neural network and, for example, concatenate z with one of the layers. If you just concatenate z to one of the layers and leave everything else as normal, then basically all of the parameters are shared across the tasks-- with the small exception that the parameters immediately following zi are technically not shared-- but all the other parameters of the network are shared. So you have these two extremes, and there's a choice of what you should actually do in practice.
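Here is a rough Python sketch of the two extremes just described: a one-hot gate over fully independent networks, versus a single network that simply concatenates z with its input. Layer sizes and module names are illustrative assumptions, not the lecture's code:

# Two extremes of conditioning on a one-hot task descriptor z.
import torch
import torch.nn as nn

class NoSharing(nn.Module):
    """One completely separate network per task; z acts as a switch."""
    def __init__(self, num_tasks, in_dim, out_dim):
        super().__init__()
        self.nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
             for _ in range(num_tasks)]
        )

    def forward(self, x, z):  # x: (batch, in_dim), z: one-hot of shape (num_tasks,)
        outputs = torch.stack([net(x) for net in self.nets], dim=0)  # (num_tasks, batch, out)
        return (z.view(-1, 1, 1) * outputs).sum(dim=0)               # one-hot gate = switch

class ShareEverything(nn.Module):
    """A single network; z is just concatenated with the input features."""
    def __init__(self, num_tasks, in_dim, out_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim + num_tasks, 64), nn.ReLU(),
                                  nn.Linear(64, out_dim))

    def forward(self, x, z):
        z = z.expand(x.shape[0], -1)                 # broadcast z across the batch
        return self.body(torch.cat([x, z], dim=-1))  # all parameters shared across tasks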
An alternative way of viewing this form of conditioning on zi is as splitting the parameters into shared parameters and task-specific parameters. If you split the parameters that way, you can rewrite the objective as optimizing over shared parameters and task-specific parameters: we saw one example where essentially everything was task-specific, and another where essentially everything was shared. The rewritten objective is exactly equivalent to the previous one; the only thing we've done is split the parameter vector into two parts. The useful thing about writing it this way is that it shows that the task-specific parameters for task i are only optimized with respect to loss function i, so those parts of the network only ever see data from one task rather than all of the tasks. Yeah? Is there a constraint in multi-task learning that you have to learn all the tasks at the same time? For example, in NLP, you might train a large language model and then make it task-specific at the end, training for each specific task sequentially-- is it only multi-task learning when everything is trained at once? Yeah, that's an awesome question. The question was whether there's a constraint in multi-task learning that we learn all the tasks at the same time. For the purpose of this lecture, we're only going to consider settings where we learn everything at the same time. In the next lecture on Monday, we'll start talking about transfer learning, where we learn one task and then learn another task, and in the last lecture of the course we'll talk about lifelong learning, where we learn a set of tasks in sequence, one after another. In general, a lot of the ideas underlying multi-task learning are also applicable to settings where you're not learning everything at once. One nice thing about viewing it this way is that you could imagine training on a few tasks, which gives you some good shared parameters, and then additionally training some task-specific parameters separately after the fact; things like that are often very reasonable to do. Yeah? Since multi-task learning relies on the tasks sharing structure, it seems annoying to have to manually decide which parts should be shared and which shouldn't-- is there a method that can capture the shared structure automatically, rather than us deciding the split up front? Yeah. So the question is, it seems annoying to have to manually break this up and figure out what should be shared and what shouldn't-- can we have an algorithm figure it out for us? There are some approaches that do something like that. In general it's somewhat of a chicken-and-egg problem, because whatever mechanism chooses what to share and what not to share will probably itself use data from all the tasks, and so that part is going to be shared across all the tasks. So in general there will be some manual choices regardless, but there are techniques that do something like that.
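Written out, the split version of the objective described at the start of this passage looks roughly like this, with theta^sh denoting the shared parameters and theta^i the task-specific parameters for task i (again a reconstruction from the verbal description):

\min_{\theta^{\mathrm{sh}},\ \theta^{1},\ldots,\theta^{T}}\ \sum_{i=1}^{T} \mathcal{L}_i\big(\{\theta^{\mathrm{sh}},\ \theta^{i}\},\ \mathcal{D}_i^{\mathrm{tr}}\big)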
Another question: do we know anything about the comparative rates at which these different tasks are learned? Yeah-- so the question is whether we know anything about the comparative learning rates of different tasks, and I'll briefly talk about that when we get to the optimization process. The next thing I want to talk about, specifically thinking about breaking things into shared parameters and task-specific parameters, is that in the previous two slides we saw one extreme of sharing nothing and one extreme of sharing everything, just based on how we conditioned on zi. What I think is interesting here is that this suggests that choosing how to condition the network on zi is equivalent to choosing how and where to share parameters. So choosing how to condition on zi is actually a fairly delicate choice, because you need to be careful about how much you should and shouldn't share. We looked at two extremes of conditioning; I want to go over a few other common choices-- roughly three of them. We already talked about concatenating zi with the activations at one of the layers: you take the input, or the activations at one layer, take your zi, concatenate them together, and pass that into the rest of the network. Another thing you could do instead of concatenating is to add together representations of the two: pass zi through a linear layer, pass the input through a linear layer, and then add the results rather than concatenating them, and compute the output from there. Now, these might seem like two somewhat different options, but it turns out that the concatenation-based conditioning and the additive conditioning are exactly equivalent to one another. So I'm curious-- I want to try something new. Typically I just ask people why they're the same thing; instead, take about a minute to think about it, then a minute to talk to your neighbor, and then we'll ask you to share why you think they're the same. OK, cool, let's come back. Does anyone want to share what they came up with? Yeah? In the concatenation-based approach, the linear layer that the concatenated input passes through is effectively two weight matrices, one applied to x and one applied to z, and the result is their sum. Yeah, awesome. In particular, if we have some input x and some task descriptor z and we concatenate them, we apply a weight matrix to the concatenated vector. You can think of that weight matrix as having two parts, W1 and W2-- the left half and the right half of the matrix-- and the product is equivalent to W1 times x plus W2 times z. One thing that's important in the second figure is that x and z each first go through a linear layer before being added together, and that is exactly the additive-conditioning version. You can see this visually, where the red matrix corresponds to W1 and the blue matrix corresponds to W2.
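The equivalence can be written in one line; W_x and W_z here are just names for the left and right halves of the weight matrix (the red and blue blocks referred to above):

W \begin{bmatrix} x \\ z \end{bmatrix} \;=\; \begin{bmatrix} W_x & W_z \end{bmatrix} \begin{bmatrix} x \\ z \end{bmatrix} \;=\; W_x x + W_z z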
Yeah? If you add a nonlinearity before combining them, does additive conditioning stay equivalent, or does one of them get more expressive-- and would additive conditioning then be better than concatenation? So you're asking what happens if you add a nonlinearity at that point. Once you start adding nonlinearities, it does get more expressive: if you apply nonlinearities to these separately, it would be a little more expressive, and also a little more expensive. Yeah? Is there any computational trade-off between the two? In general, I would guess that on modern hardware the difference isn't significant-- we have very good matrix-vector multiplication routines-- but I haven't tested it myself, and it's probably small compared to other layers of the network, like convolutional layers. Yeah? With a nonlinearity, shouldn't the concatenation-based version be better, because it could capture interaction terms between x and z, whereas adding them separately can't? I guess in both of these cases, in practice, this won't literally be the output of your network-- you'll continue to pass it through more fully connected layers and so forth-- so adding a nonlinearity at that particular point probably wouldn't have a huge effect on expressive power, because this is part of a larger network. Cool. So that was concatenation and additive conditioning. Two other choices are quite common. One is a multi-head architecture, where you have some shared bottom layers-- all tasks pass the input through the same layers-- and then different heads, that is, different sets of task-specific layers, for different tasks. A generalization of this is multiplicative conditioning. We saw additive conditioning before; here you instead multiply a representation of the task descriptor with the activations. What this looks like is basically replacing the addition in the previous equation with an element-wise multiplication. Something like this is going to be more expressive, at least per layer, than adding, and it can actually represent things like the multi-head architecture. The reason you can see that is that, if you remember the very first example where we were essentially multiplying zi with the outputs, that kind of multiplication can gate the network and modulate which parts are used for which tasks. So in general, multiplicative conditioning generalizes both independent networks and independent heads. Yeah? Would you say attention is a sort of task-conditioning mechanism? If attention is basically dot products, then yes, something like attention can be viewed as a form of task conditioning. Any other questions?
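As a sketch of these last two choices, here is roughly what a multiplicative (feature-wise) conditioning layer and a multi-head architecture look like in Python; the dimensions and module names are made up for illustration, not the lecture's code:

import torch
import torch.nn as nn

class MultiplicativeConditioning(nn.Module):
    """Element-wise gate on the activations, computed from the task descriptor z."""
    def __init__(self, num_tasks, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.x_layer = nn.Linear(in_dim, hidden_dim)
        self.z_layer = nn.Linear(num_tasks, hidden_dim)  # maps one-hot z to a per-feature gate
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, z):
        gate = self.z_layer(z)                   # task-dependent gate
        h = torch.relu(self.x_layer(x) * gate)   # element-wise multiplication instead of addition
        return self.head(h)

class MultiHead(nn.Module):
    """Shared bottom layers followed by one task-specific head per task."""
    def __init__(self, num_tasks, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, out_dim) for _ in range(num_tasks)])

    def forward(self, x, task_index):
        return self.heads[task_index](self.shared(x))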
Yeah? If you apply this kind of task-specific gating, does it matter where in the network you apply it, or does the rest of the architecture stop mattering once you gate? Yeah, where you multiply will matter. Although, that said, even if you do this sort of gating at the very beginning of the network-- even with just fully connected layers-- the architecture will generally still matter. The gating can represent certain kinds of selection, but the architecture still matters. And if you do the multiplicative conditioning at every single layer, does that let the network itself figure out how much structure to share? To some extent, yes. I should also mention that even if the network is sharing all the parameters and getting gradients from all the tasks for all the parameters, it can still zero out certain parts of the weights so that it behaves more like independent networks. So the network can end up in a state where some parts are used for only one task, even without multiplicative conditioning like that. Cool. So we've covered the basics-- the basics are really either concatenation/additive conditioning or multiplicative conditioning. You can have more complex choices, and a lot of papers consider them, although even the basic approaches typically work pretty well. Unfortunately, figuring out how to structure the architecture and how to condition the network is a lot like general neural network architecture tuning: it's often fairly problem-dependent, and it's a bit more of an art than a science. It's guided more by intuition and knowledge of the problem than by a rigorous set of guidelines for exactly what you should do. That said, we'll talk in some of the coming slides about some things that can help guide that process. Yeah? Does the way you condition depend on the type of descriptor you're using? If you're using a natural-language descriptor, you might condition on it differently. And do you have any insight on which descriptors are better-- is the one-hot better because it's more explicit, or could natural language be better because it's more nuanced and detailed? Yeah, so there are two questions there. One is which descriptor you should use, and the second is whether the way you condition the network differs based on the kind of descriptor you have. For the first question, the more information you give the network, typically the better. If you give it just a one-hot vector, those vectors are orthogonal to each other, so you're not giving it any information about how the tasks relate to one another. If you instead give it a language description of the task-- one task says write me a story, another says write me a poem, another says translate between these two languages-- then that gives it some information about the similarity between tasks, because write me a story and write me a poem are similar sentences, and naturally those tasks should be a little more similar as well.
So generally, the more information you give it, the better, if you have access to that information. In terms of conditioning, my general advice is that multiplicative conditioning is usually the way to go because it gives you more expressive power, and in practice things like attention and multiplicative conditioning through feature-wise modulation are among the approaches you see most often. And just a follow-up question: has there been any work on task embeddings-- similar to how we have word embeddings-- so that it's not just a one-hot vector? Yeah. So the question is whether there's any work on task embeddings. I should note that if you have a weight matrix that acts on the task descriptor, it converts that one-hot vector into a dense vector, so in that sense the first weight matrix after the one-hot task descriptor gives you an embedding of the task. If you do multi-task learning from scratch, it learns those task embeddings from scratch. But it would be interesting to develop something like the notion of word vectors for tasks, and one thing you could do is, if you have a natural-language description, encode it into a sentence embedding and use that representation as your task descriptor. Yeah? How do you implement the multiplicative gate? If you have four tasks, do you just add a softmax gate over a four-dimensional vector at the last layer? Yeah, so the way you can implement the multiplicative gate looks a lot like the additive case. If the dimensionality of one of your activations is D, then you take your one-hot vector, multiply it by a weight matrix so that you get another D-dimensional vector, and once you have two D-dimensional vectors, you just do element-wise multiplication-- you replace the plus sign with an element-wise multiplication operation. Cool. So we've talked a lot about the architecture of the model; now let's talk about the actual objective. Earlier on we formulated the vanilla multi-task learning objective, but in many cases we may want to weight the tasks differently, so we may formulate an objective where we assign a higher weight to some tasks than others. Does anyone have any thoughts on how we might choose the weights? Yeah? Perhaps based on how much data there is for each task, so that tasks we see more often don't dominate and things aren't imbalanced? Yeah, so you could change the weight based on the amount of data you have: if you have a lot more data it could make sense to downweight, and with less data to upweight. One thing I'll mention is that by formulating the objective as a sum over tasks, we're already somewhat normalizing for the amount of data per task, because if we instead summed over all data points, tasks with more data would get higher weight. Yeah? You could weight by the magnitude of the loss for each task-- one loss function might take large values and another small values, and you don't want the large one to dominate, so you could rescale them. Yeah, absolutely.
So if you have some loss functions that are much higher in magnitude, you may want to downweight those and upweight the loss functions with lower magnitude. Yeah? If you're using the vanilla multi-task objective and you see the network really struggling on a specific task, you could weight that task higher. Yeah-- if you find the model doing poorly on one task, you could upweight that task, and we'll actually cover a method that does that automatically in a moment. Any other ideas? If one task is the most important, you could give it a really high weight. Yeah-- if there are some tasks you care about more than others, and in some cases maybe there's only one task you care about and the others are auxiliary tasks you hope will help, then you could upweight the tasks you care about most. Could you also treat the weights as hyperparameters? Yeah, so you could, in some ways, treat the wi's as hyperparameters as well. Although when you tune those hyperparameters you need some overall objective to tune them with respect to; that overall objective could be the vanilla objective, but you may also have cases where the vanilla objective isn't suitable, for example when the magnitudes of the losses differ or when some tasks matter more than others. One more? Maybe you could use the weights to help the optimization-- if training gets stuck in a local minimum on one task, you could adjust the weights so that less of the error comes from that task for a while, and come back to it in a subsequent round. Yeah-- if you run into optimization challenges, it could be that changing the weighting helps. For example, if one task seems to be stuck, maybe placing more weight on it will push it out of that local optimum, or maybe if it's stuck you should stop optimizing it and revisit it later. So there are a number of approaches you could take. The first thing I have listed here is based on importance or priority, which is manual selection, but there are also various heuristics you can use to choose these weights. The other thing I'll mention is that the weights don't have to be fixed throughout training; you can vary them at different points, for example if you have optimization challenges or some tasks are doing worse than others. In addition to some of the things you suggested, another heuristic that prior work has looked at is encouraging the gradients from different tasks to have similar magnitudes. That said, there's a pretty large body of work on different heuristics and different ways of approaching this. In general, the vanilla objective or manually chosen weights is one of the strongest approaches you can take, but it's worth acknowledging that on certain problems a lot of this work does show improvements. Now, the other approach I want to mention, which actually came up before, is that you could optimize for the task that is doing the worst.
In particular, you can formulate this as a minimax optimization: at each point in training, you pick the task with the highest loss and update the parameters on that task. This is going to try to normalize or equalize the tasks to some extent, and it's relevant when you think all of the tasks matter equally. Concretely, say you have task one, task two, and task three, and you plot their loss values: task one's loss is down here, task two's loss is up here, and task three's is in between. You estimate these loss values at your current iteration of training, notice that task two is doing the worst, and then optimize only on task two. After that, you re-evaluate; hopefully task two's loss has gone down a little, maybe one of the others has gone up a little because you weren't optimizing it, and then you start optimizing that one. At the end of this process, you should generally end up with loss values that are more similar across the three tasks than if you had only optimized their sum, because optimizing the sum might just minimize the tasks that are easiest rather than trying to keep the losses equal. This can be especially useful in fairness settings, where different tasks correspond to different users, demographics, subpopulations, or geographic regions. In those settings you want similar loss values across subpopulations, because you don't want some customers getting a really great experience while others have a terrible one. Yeah? Would this be harder to optimize, since it becomes a minimax problem? Yeah, in general this becomes a harder optimization problem, and there is a range of challenges with this kind of approach. Another thing that's somewhat challenging is that you need to compute which loss is the worst: to optimize this exactly, every iteration you would compute the worst task, and evaluating the loss function on your entire data set may be expensive. But there are ways to approximate which task is worst, for example by keeping a running average. In practice it's not too hard to optimize, especially if you have a relatively small number of tasks; with a lot of tasks it gets a little trickier. Yeah? Do you normalize the losses first before taking the max, so that they're on the same scale? Yeah, in practice it is good practice to normalize your labels and make sure your loss functions are all on the same scale. If you don't do that, or if it's difficult to, this objective is going to prioritize the loss functions that are the most difficult or simply the highest in magnitude.
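The worst-task objective described above, written out (a reconstruction from the verbal description, with D_i being task i's data as before):

\min_{\theta}\ \max_{i \in \{1,\ldots,T\}}\ \mathcal{L}_i\big(\theta,\ \mathcal{D}_i\big)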
Yeah? My question is a bit the opposite: often we use multi-task learning to improve performance on a single primary task. How do we configure the objective to decide which auxiliary tasks would be helpful for the primary task? Yeah. So if you really only care about one of the tasks, then treating the wi's as hyperparameters makes a lot of sense, because your outer objective is to do as well as possible on task one, with the other tasks there to help. You can manually figure out which wi leads to the lowest validation loss on task one, or you can apply automatic hyperparameter optimization techniques to do it for you. Yeah? This idea of minimizing the maximum loss reminds me of L1 versus L2 versus L-infinity norms. Has anyone tried changing the exponent on the loss, instead of just doing weight times loss-- for example loss squared, which is like an L2 norm pushing everything down, whereas L-infinity is similar to the minimax? Yeah, actually that's a great question. I haven't come across work that uses some other exponent between L2 and L-infinity, but something like that could be interesting to think about. One of the challenges with this objective is that it can sometimes be a little too pessimistic and place too much focus on the worst case, and something in between L2 and L-infinity might mitigate that, so that could be interesting to explore. One last thing I'll mention: if you're interested in digging more into this, it looks a lot like what's called distributionally robust optimization, or DRO, so that's a keyword if you want to learn more. The math gets quite deep, and there are nice guarantees you can get about this optimization. Great. Lastly, we'll briefly talk about the optimization process itself before going into a case study. I'm just going to go over the standard optimization process for this objective, because in general it works pretty well. The basic version is: first, we sample a mini-batch of tasks. If we only have three tasks, we can just take all three, but with a very large number of tasks you might sample a subset. Then we sample data points for each of the tasks we sampled-- another mini-batch. Then we compute the loss on that mini-batch: for each task in our batch of tasks, we use the mini-batch of data for that task, and this gives a mini-batch version of the multi-task loss. Once we've computed that mini-batch loss, we compute its gradient, backpropagate it through the neural network, and apply the gradient with your favorite optimizer-- vanilla stochastic gradient descent, perhaps with momentum, or something like Adam, which is often used in practice. So this basically corresponds to stochastic gradient descent on the multi-task objective. The part that's most different is that we sample a mini-batch of tasks, and this ensures that the tasks are sampled uniformly regardless of the quantities of data.
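A minimal sketch of that training loop in Python is below. The data set interface task_datasets[i].sample, the per-task weights w, and the model signature model(x, z) are assumptions for illustration, not part of the lecture:

import random
import torch

def train(model, task_datasets, loss_fns, w, num_steps, tasks_per_batch=2, batch_size=32):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    num_tasks = len(task_datasets)
    for step in range(num_steps):
        # 1. sample a mini-batch of tasks uniformly, regardless of each task's data set size
        task_batch = random.sample(range(num_tasks), k=tasks_per_batch)
        loss = 0.0
        for i in task_batch:
            # 2. sample a mini-batch of data points for task i (hypothetical interface)
            xs, ys, zs = task_datasets[i].sample(batch_size)
            # 3. compute the (optionally weighted) loss on that mini-batch
            loss = loss + w[i] * loss_fns[i](model(xs, zs), ys)
        # 4. backpropagate and apply the gradient with your favorite optimizer
        opt.zero_grad()
        loss.backward()
        opt.step()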
So if you have a lot more data for one task than another, this sampling still makes sure the two tasks are visited at the same rate. Of course, if you care a lot more about the task that has more data, you may want to do something a little different in that first step. The other thing that came up is that your loss functions may be at different scales. Even if the loss function is the same-- say, mean squared error-- if your labels for different tasks are at different scales, like one varying from -5 to 5 and another from -100 to 100, then your loss functions will be scaled correspondingly. So it's good to normalize your labels so that your loss functions are on the same scale. Cool. Before we go into the case study, there are a few challenges I want to bring up, and these affect some of the design choices as well. One challenge that comes up is negative transfer between tasks. Negative transfer means that sometimes, if you train with multi-task learning, the resulting model actually does worse than if you had trained completely independent neural networks. As one example, you can formulate a multi-task version of the CIFAR-100 data set and compare the performance of a multi-head architecture, a cross-stitch architecture-- an architecture proposed by prior work-- and independently trained models. What we see is that the independently trained models get 67% accuracy, which is more than 10% higher than the multi-task learning approaches. So why might this be the case? It could be because of optimization challenges: there might be interference between the tasks-- they might be trying to use the representation in different ways-- or the tasks might be learning at different rates, and if one task has more or less converged while another is still learning, it may be difficult to keep updating the network. It could also be not an optimization challenge but simply limited representational capacity: multi-task networks are doing more, so they often need to be larger than a network trained on a single task. If you have negative transfer, the natural thing to do is to share less across tasks. You can check whether you have negative transfer by trying independent training, and if independent training does better, you can try to make your multi-task model more like independent training. We saw a few ways to do that before, for example a multi-head architecture that explicitly has parts of the network that aren't shared between the tasks at all. Yeah? Are there unsupervised methods for telling in advance whether there will be positive or negative transfer between the tasks? Yeah, so the question is whether there are unsupervised methods for telling if there's going to be positive or negative transfer between the tasks. I'll show something related in a minute, but in general, without the labels of the task, it's certainly very difficult to tell whether there will be positive or negative transfer. Ideally, it would be awesome if you could just have a description of each of the tasks and something that tells you whether they're going to work well together or not.
Unfortunately, I think something like that is nearly impossible, because it doesn't just depend on what the tasks are, but also on the nature of the data set, the model you're training, and potentially the optimizer as well. So in general, these things are very hard to tell a priori. Yeah? If you're getting negative transfer and you see that two separate models do better than a shared architecture, why would you continue to use the shared one-- are you still hoping to get even better performance than the separate models? Yeah, it's a great question. If you're seeing that independent training is doing better than your current multi-task model, it's very reasonable to stop there and go with the independent networks. Although there are definitely scenarios where you could get better performance with a different architecture-- for example, if you started by trying to share everything, there are certainly cases where training a multi-head architecture does a lot better than both sharing everything and training independently. Yeah? Is there a way to tell whether the tasks are inherently incompatible, versus it being my mistake that I didn't find a good enough architecture? So the question is whether there's a way to tell if the tasks are just incompatible versus whether you messed up somewhere. In general, I think it's a trial-and-error process; it's very similar to trying to tell a priori whether there's going to be negative transfer or not. Yeah? Could you pre-train on one task and then fine-tune on the other? Yeah, so another way to share less is basically to pre-train on one task and then fine-tune, perhaps on the task you care more about. And this ties into the rest of this slide, which is that it doesn't have to be a binary decision of sharing parameters or not sharing them; it can be a more flexible choice. It could be something like pre-training and fine-tuning, or you could have what's referred to as soft parameter sharing, where you actually have separate parameters for the tasks-- like a pre-trained versus a fine-tuned network-- but a soft constraint that encourages those parameters to be similar to one another. The way you could implement that is to take the same objective as before, where you have task-specific parameters-- perhaps even the entire network is task-specific-- but then tie them together with a loss that encourages the parameters of different tasks to be similar to one another. So there's actually much more of a continuum than just sharing versus not sharing, and one benefit of this is that it allows for more fluid degrees of parameter sharing; things like fine-tuning are also an example of that. It's also worth acknowledging that this has some limitations, because it introduces yet another set of design decisions and hyperparameters-- you need to figure out how to weight the loss that ties the parameters together-- and it's more memory-intensive, because you have to store separate parameters for each task.
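One way to write the soft parameter sharing idea, under the assumption that the tie is a norm penalty with strength lambda; the exact form of the penalty is a design choice rather than something specified in the lecture:

\min_{\theta^{1},\ldots,\theta^{T}}\ \sum_{i=1}^{T} \mathcal{L}_i\big(\theta^{i},\ \mathcal{D}_i^{\mathrm{tr}}\big)\ +\ \lambda \sum_{i \neq i'} \big\|\theta^{i} - \theta^{i'}\big\|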
Cool. The second challenge you might encounter is almost the opposite of negative transfer: you might not actually be sharing enough. If you're training your multi-task model and you're overfitting a lot, remember that multi-task learning is a form of regularization, because these auxiliary loss functions should help the model learn better representations. So if you're overfitting on your tasks, it can actually be beneficial to share more than you're currently sharing. The last challenge I'll come to, which is related to some questions that came up before, is that if you have a number of tasks and want to determine whether you'll see positive or negative transfer-- whether to train all of them together, and which ones will be complementary-- in general, I think this is somewhat of an open problem. The bad news is, as I mentioned, there's nothing that will just tell you at the outset whether it will work: there's no closed-form measure of how similar two tasks are or how complementary they'll be in practice. The reason is that it depends not just on what the tasks are but on the data set, the optimizer, the architecture, and so forth, and it can even depend on where you are in the optimization process. For example, one task might be to pick up a fork and skewer something, and another might be just to pick up the fork. Early in optimization, picking up the fork is the first thing you need to learn, so the tasks might be very complementary early on; later on, they may not be, because one of them needs to do something once the fork has been picked up while the other doesn't. The somewhat better news is that there are ways to try to approximate task similarity from a single training run, rather than brute-forcing over all the ways you could group tasks together. Here's one example of something that does a single training run of one multi-task network, analyzes how the tasks relate to each other by looking at the gradients and the optimization process, and then figures out which tasks should be grouped together and which should be trained separately. OK. So to recap most of the lecture: we talked about what a task is in terms of data-generating distributions, how each task has its own data sets, how for the model architecture we can use multiplicative versus additive conditioning-- with multiplicative being a bit more general, or more expressive-- and how you can share more or less of your network depending on the transfer you observe. We also talked about the objective and the optimization: choosing task weights, and stratifying your mini-batches so that you have a similar amount of data per task in each batch. Question? Is there a relationship between data set size and the success of these approaches? If you have less data, would it make more sense to train separate models for each task rather than a more complex multi-task architecture? Yeah-- in general, if you have less data per task, multi-task learning has more potential to be beneficial, because it's a way of bringing in additional data: the data from the other tasks is brought into the optimization process.
Whereas if you have a ton of data for all of the tasks, chances are you'll do well just training from scratch on each of them. Yeah? Is there a reasonable way to quantify how much structure the tasks share-- something analogous to mutual information, measuring how well one task predicts another? Yeah, so the question is whether there's a way to quantify the similarity between tasks, something like mutual information. I usually cover this a bit later, but I'll see if I can get to it in the next lecture. Basically, there's a way to think about this in the language of graphical models-- to think about the statistical dependencies between the data sets-- and the strength of those statistical dependencies translates into the similarity between the tasks. Awesome. So in the remaining ten minutes, I'd like to get into a case study of where people actually use multi-task learning on a real-world problem. This is a paper from some folks who work at Google, and their goal was to make recommendations for YouTube: basically, to figure out what to put in the right-hand column of recommended videos. So it's a very real problem, probably something you've encountered yourself, and it's pretty cool because the paper goes into a lot of detail about how they actually tried to solve it. Before we get into how this is a multi-task learning problem, let's go over the setup. As input, they have information about what the user is currently watching, and they also have some features about that user-- so features about the video and features about the user. Given that input, they generate a few candidate videos, then rank those candidates, and finally serve the top-ranked videos in that right-side panel. The candidate videos are pooled from multiple candidate-generation algorithms, which use things like matching the topic of the query video, looking at videos most frequently watched with the query video, and other approaches. The focus of this paper isn't on candidate generation; it's on the second step: once we have a few candidates, how do we rank the ones we think are the best options? Cool. In this ranking problem, the input is again information about the query video, information about the candidate video, and information about the user and other context. Those input features are shown in the bottom yellow boxes-- features of the query and candidate videos, and features of the user and the context-- and they're passed into the neural network as input. Then there's the output: the model is trained to output measures of engagement and satisfaction with the candidate video. Intuitively, the goal is to figure out which candidates should be ranked higher, and if you can predict engagement and satisfaction, you can rank the videos.
More concretely, for engagement they predict binary classification tasks, like whether the user clicks on the video, and regression tasks related to the time spent watching the candidate video. Satisfaction corresponds to things like clicking the like button on the candidate video and the rating the user gives to the video, which I believe comes from surveys. One interesting thing is that the model is trained to output these quantities from the input features-- that's the machine learning problem-- but to get the actual ranking score, they take a manually weighted combination of these different predictions and tune those weights in a manual process based on what seems to do best. Cool. So a question for you before we move on to the approach: the objective is to predict engagement and satisfaction. Do these objectives seem reasonable, and what issues might come up with them? Yeah? Survey non-response-- a lot of people won't fill out the surveys. Yeah, so you might have missing data, because people might not respond to a survey, or they might like a video but never click the like button, for example. Yeah? Time spent depends on whether the video is shorter or longer, whereas a like is just a button press. Yeah, so for things like time spent, longer videos will naturally accumulate more watch time than short videos, and you may want to control for that. Yeah? Data imbalance-- there might be a lot of engagement data but much less satisfaction data. Yeah, so you'll have data imbalance: much less survey data, maybe much less data about whether users click like, but very dense data about engagement. Yeah? A video a user engages with a lot isn't necessarily one they like-- the emotional reaction might be negative, and they might engage precisely because it's causing issues for them. Yeah, so people might watch something without actually liking what they're watching. Yeah? Self-reinforcing feedback loops-- if a user clicks on one type of video and you show them more of the same type, they keep clicking on it, but are they actually enjoying the content, or are you just restricting what they see? Yeah, so feedback loops: once you deploy the system and it starts collecting data, that affects what people click on, which can lead to a skewed data distribution that's difficult to generalize from. Yeah? This is a bit of an editorialization, but from a social perspective, you're optimizing for wasting people's time. Yeah, awesome. So the point was that optimizing for time spent on YouTube is maybe not good for society, and I really like this response-- last year when I asked people this question, everyone pointed out technical challenges and no one pointed out ethical or societal challenges. Thinking about this objective not just from a technical standpoint but in terms of what you should be optimizing for is really important. Yeah?
One thing the platform might also care about is making sure that along with exploitation there's also some exploration happening-- some videos might not be popular yet, but in the longer term the platform benefits from more diversity, and purely exploiting current preferences won't surface them. Yeah-- in general these metrics are very short-term metrics, and factoring in what's going to happen in the long term, and what that feedback process looks like, is really important both for their business model and, in general, for actually serving content people want to see. Cool. For the sake of time, let's get into how they actually do this. The basic option-- really the baseline-- is a multi-head architecture, where the input features are passed into a shared bottom layer and then into task-specific heads that predict the different engagement and satisfaction measures. They found that this can harm learning when the correlation between tasks is low, and they try to improve on this architecture by allowing the model to share a little less between the tasks. What they choose is an architecture they refer to as the Multi-gate Mixture-of-Experts model. There's still one shared bottom layer, but on top of it there are a number of different expert networks, and the model chooses how to use those experts-- those parts of the model-- based on what it finds works well. Essentially, this allows different parts of the network to specialize for different tasks, while also letting the model figure out when it's useful to reuse components. Specifically, after the shared bottom layer there is a set of expert neural networks-- different modules. For a given input x and task k, the model predicts which expert it wants to use for that input and that task. It does this by passing features through a softmax function; you can think of a softmax as predicting a one-hot vector in a soft way, so that we can differentiate through it. This gives a probability distribution over the experts; the experts' outputs are weighted by those probabilities, and the task's prediction is then computed from that weighted combination. In their experiments, they implemented this in TensorFlow with TPUs. They trained on videos in temporal order, because they have a huge amount of video data, so training runs continuously to consume newly arriving data, and they did online A/B testing against the production system. Model computational efficiency matters here as well. In the results, they found that this mixture-of-experts model with eight experts does 3% better on satisfaction metrics and 0.45% better on engagement metrics, which is actually pretty substantial given the scale of this kind of system, and this is in comparison to a shared-bottom network.
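As a sketch of the gating computation just described-- not the paper's actual code, and with illustrative dimensions-- the per-task softmax over experts might look like this in Python:

import torch
import torch.nn as nn

class MMoE(nn.Module):
    """One softmax gate per task, mixing a shared pool of expert networks."""
    def __init__(self, in_dim, expert_dim, num_experts, num_tasks):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, expert_dim), nn.ReLU())
             for _ in range(num_experts)]
        )
        self.gates = nn.ModuleList([nn.Linear(in_dim, num_experts) for _ in range(num_tasks)])
        self.heads = nn.ModuleList([nn.Linear(expert_dim, 1) for _ in range(num_tasks)])

    def forward(self, x, task_index):
        expert_outs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, E, expert_dim)
        probs = torch.softmax(self.gates[task_index](x), dim=-1)         # distribution over experts
        mixed = (probs.unsqueeze(-1) * expert_outs).sum(dim=1)           # probability-weighted mix
        return self.heads[task_index](mixed)                             # per-task prediction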
And we see that there is some specialization. For example, expert 7 is used a lot for satisfaction task 4. But there's also a considerable amount of sharing. For example, expert 5 is used for a lot of the tasks. Cool. So we're basically out of time. So to recap the lecture, we talked about multi-task learning and how it learns a neural network conditioned on zi. We talked about how the choice of the task weighting is going to affect the prioritization of the tasks, and we also talked about how conditioning on zi will affect how the parameters are shared. And if you observe negative transfer, it's helpful to share less. And if you observe positive transfer, it's helpful to share, but potentially try sharing more, or if you observe overfitting. So really these are the key design choices when it comes to multi-task learning systems. And next time, we'll start to cover transfer learning and get to some cool learning topics as well.
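A small sketch of the task-weighting point in the recap (the task names and weights below are illustrative, not from the lecture): the overall objective is a weighted sum of per-task losses, and those weights are what control how the tasks are prioritized.

```python
def multi_task_loss(per_task_losses, task_weights):
    """Weighted multi-task objective: sum over tasks k of w_k * L_k."""
    return sum(task_weights[k] * per_task_losses[k] for k in per_task_losses)

# Example: up-weight satisfaction-style tasks relative to engagement-style tasks.
total = multi_task_loss(
    per_task_losses={"click": 0.71, "watch_time": 1.23, "like": 0.42},
    task_weights={"click": 1.0, "watch_time": 0.5, "like": 2.0},
)
```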
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_7_Challenges_in_DL_theory_generalization_bounds_for_neural_nets.txt
OK, I guess let's get started. So in this lecture, what we're going to do is that at the beginning we're going to talk about deep learning, especially some of the challenges in deep learning theory. And then in the next probably 5 to 10 lectures, we are going to discuss different aspects about deep learning, I guess. You will see like we're going to talk optimization, [INAUDIBLE] so on, so forth. So basically, in deep learning theory, there are different aspects, for example, optimization, which we spend probably two lectures on later. And generalization is another question which we probably will talk about for probably more than three lectures. And at end of the course, we are going to talk about some other slightly different topics. So in some sense, you can view this as kind of like an outline for the next five weeks. So to talk about deep learning theory, I think it's probably useful to somewhat kind of summarize the classical machine learning theory, which I actually didn't really talk about that much from a bird's eye view that much in the beginning of the course because I felt that if you give too much information at the beginning, it's probably a little bit too much. So but now I'm going to have a higher level view about what classical machine learning theory do in terms of different kind of aspects or different topics. So I guess in the more classical machine learning theory, there are several things. So one thing is called approximation theory. So in some sense, this-- and another keyword is called expressivity or representational power. If you see these kind of things, you know that's representational power. So you know that they are all about the same thing. So what they are doing is really caring about, basically, you want to bound L theta star, which is the best model in your family. So so far, until this week, we always talk about excess risk. We compare it with the best model in the class, and we say that if you can get the best model in the class, then you are done. But actually it's not done, because maybe you are using the wrong hypothesis class. So your best hypothesis class in the family hypothesis-- the best hypothesis in the hypothesis class is probably not great, right? So approximation theory is basically trying to deal with this, right? You are trying to understand whether your hypothesis class is powerful enough to express the functions you care about. So for example, a kind of trivial case, for example, suppose you have some data like this and something like this-- some positive data, some negative data. Here, you know that if you use linear model, then the best linear model is not going to do great, right? Because if you probably find the best linear model, you probably would do something like this. I don't know. So in this case, you can say that L theta star wouldn't be great if you choose your capital theta to be linear family. And then you can study what hypothesis class can contain a good classifier even you have access to population data, so on, so forth, right? So in some sense, this is trying to understand how good can a hypothesis class H approximate the ground truth label function, right? So that's one type of question. And another type of question is what we discussed already, which is about the statistical aspect of sometimes people call it generalization theory. So this is about the excess risk, as we discussed in the last several weeks. 
So you are trying to bound from above the difference between your learned hypothesis from the best hypothesis, theta star, right? And what we have done was something like you bound this by L theta hat minus L hat theta hat plus L theta star minus L hat theta star. And people have called this the generalization error. The generalization error is the difference between the population loss and empirical loss on the learned type parameter, right? So this is the generalization loss basically the difference between training loss and test loss, right, on the learned parameters theta hat. If, say, the hat is ERM, then this is talking about ERM, but maybe in other cases you are using some other algorithm to find theta hat that you want the generalization error for that theta hat. And this term, as we argued, the second term is always small. Just no matter what hypothesis class you use, basically, as long as your loss function is bounded and this term is always something like 1 over square root of n. So basically, that's why we don't care about this term that much. OK, so what we have done was something like you prove this kind of generalization bound. So you prove something like L theta hat minus L hat theta hat. We bound it by something like some complexity over square root of n. I guess typically, probably you should write this. And the principle here is that if your hypothesis class is of low complexity, then you have better generalization error, right? So simple hypothesis can generalize better. So I think sometimes also people call this Occam's razor. This is, I think, is kind of like philosophical principle which dates back to something like 1100 or around that time, and the principle is something like simple or parsimonious explanation can generalize better to other situations. And you can see even from these two things, right, you can see that there is some kind of conflict or trade-off between the approximation theory and the generalization theory. Because if you use a very, very simple hypothesis class, then your L theta star may not be good enough. For example, for the beta I drew here, if you use linear model, then your L theta star is not great. But your generalization error could be very good because your model is linear and simple. So there is some trade-off between-- and I think people also sometimes called this bias and variance. So the variance mostly corresponds to the generalization theory. It corresponds to statistical error introduced from learning because you have finite data. That's why you have to pay something that depends on how many examples you have. That's the variance, and the bias mostly is a quantity that only depends-- bias, all the expressivity, is a quantity that depends on the fundamental power of the hypothesis class. It's not something that depends on how many examples you have, right? But the variance bias trade-off is essentially the same thing here, but the exact definition of bias and variance can only apply to basically square loss and linear model. That's why we don't use the explicit here. But the principles are somewhat related. And you can also kind of extend this generalization theory a little bit by saying that you can consider the regularized loss. In some sense, you can consider this as application, implication of the transition theory, which says that if you use regularized loss, right, something like L hat reg is something like L hat theta plus lambda R theta, where this is a regularizer that captures the complexity of the hypothesis. 
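Writing out the quantities just described in display form (notation as in the lecture; the middle term in the decomposition is the one that is nonpositive for ERM):

```latex
\underbrace{L(\hat\theta) - L(\theta^\star)}_{\text{excess risk}}
  \;=\; \underbrace{L(\hat\theta) - \hat L(\hat\theta)}_{\text{generalization error}}
  \;+\; \underbrace{\hat L(\hat\theta) - \hat L(\theta^\star)}_{\le\, 0 \text{ if } \hat\theta \text{ is the ERM}}
  \;+\; \underbrace{\hat L(\theta^\star) - L(\theta^\star)}_{O(1/\sqrt{n}) \text{ w.h.p.}} \\[6pt]
L(\hat\theta) - \hat L(\hat\theta) \;\lesssim\; \frac{\mathrm{Complexity}(\Theta)}{\sqrt{n}},
\qquad\qquad
\hat L_{\mathrm{reg}}(\theta) \;=\; \hat L(\theta) + \lambda R(\theta).
```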
So then, you can hope to have a claim like this so you can have a statistical claim of the following form. Of course, this depends on exactly which regularizer you use, what models, so on and so forth, but the form of the claim is something like if theta lambda hat is the global minimizer of L hat reg, then you have a generalization bound. You can bound excess risk, or you can bound, I guess, either the excess risk or the generalization error. I guess they are pretty much related, as we have discussed, right? So all the generalization error, so they are bounded by something. So this is the type of results you probably get from this kind of statistical generalization theory. Because you know that if you-- the reason is that if you optimize this regularized loss, and you, indeed, find a very small regularized loss, that means that your regularizer-- the R theta, the complexity-- is small, and also it means that your training area is small. And then if both of these are small, then you can show that your excess risk is small because this model will generalize to the population of the test case. And then there's a third aspect, which is called optimization. Any questions so far? Right, so there's a third aspect which is called optimization. So the question is about numerically how to find theta hat. Theta hat could be the arg min of the training loss, or maybe you can talk about theta hat lambda, the regularized loss from [INAUDIBLE] right? And this is a purely-- at least in a classical way of thinking about this, you can basically view this as a separate question about-- you can forget about where your data come from. You can forget about why you care about minimizing this training loss. You just say that I'm getting this training loss. That's my job, right? And typically, the approach is something like if the loss function is convex, you use convex optimization. And in all, maybe you can use gradient descent for non-convex functions, so on, so forth. Or maybe stochastic gradient descent. There are many different approaches. And when you measure the success, or you measure the interface is that you care about how well you can approximate the minimizer. Or you can never find exact minimizer using a numerical approach, right? So you always have some small error compared to the minimizer of the empirical loss, and you can measure the error in different ways, maybe match the error in terms of the sub-optimality in terms of how different your minimizer is in terms of the loss function compared to the best minimizer. Or you can compare other quantities. So in some sense, I think from this kind of summary here, you can think of the statistical part is kind of pretty much independent from the optimization part. Of course, there are also interesting interface. For example, you can also ask about what regularizer-- so when you add regularizer, right, so you can ask the question, what regularizer can simultaneously have good statistical performance, but also can be easy to optimize, right? And by easy to optimize, it means that you can optimize it fast, or maybe optimize it in a certain time, maybe d time or d squared time, so on so forth. So there are still interactions between different parts, but if you just need a kind of high level kind of understanding, you can think of them as separate parts, right? The interaction are more on the lower level details about how do you achieve the best statistical efficiency, or how do you achieve the best computational and statistical efficiency? 
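As a concrete picture of the optimization task viewed in isolation, here is a minimal sketch (purely illustrative, not from the lecture): gradient descent on an L2-regularized least-squares objective, treating the training loss as just a function to be minimized numerically.

```python
import numpy as np

def gradient_descent_regularized(X, y, lam=0.1, lr=0.05, steps=2000):
    """Gradient descent on L_reg(theta) = (1/n) * ||X theta - y||^2 + lam * ||theta||^2."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        grad = (2.0 / n) * X.T @ (X @ theta - y) + 2.0 * lam * theta
        theta -= lr * grad
    return theta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(50)
theta_hat = gradient_descent_regularized(X, y)
```

The statistical question (does theta_hat generalize?) and the numerical question (how close is theta_hat to the actual minimizer of the regularized loss?) can be asked separately, which is exactly the separation being described here.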
Then you have to talk about the interactions. But at a high level, you don't have to think about them simultaneously. You can think of them roughly separately. Is there a question? Yeah, [INAUDIBLE] Sorry-- no, no. This is another visual, sorry. My bad. This is just two things. My writing is bad. And these two qualities are basically similar, right? So like you care about the excess risk, which is the most important thing, but which is almost the same as the generalization error. And actually, you bound the generalization error. Then you bound excess risk. Sorry, my writing is not clear. So any questions so far? So these are the standard way of thinking about these questions, but what happens in deep learning? What happens in deep learning is that, as you'll see, things becomes more complicated, and for a fundamental reason. And I think the first thing is that for deep learning, there are probably two things that change, at least on the surface. So one thing that changes is that you have from linear model, it becomes nonlinear model, right? And this directly affects the optimization because when you have nonlinear model, it becomes non-convex loss. But this wouldn't change the structure view fundamentally because it makes the optimization question harder, right? So at least at the beginning, this is what I thought five years ago, maybe more than five years ago. When I started to do deep learning theory right after deep learning took off, right, at very, very first I thought that the only difference is that now the optimization question becomes harder. And then the question is just how do you optimize better? But then, I think in probably like about three or four years ago, people realized that there is also another fundamental difference from the statistical perspective, which is that empirically, you always use this so-called overparameterized model. Maybe it's not precisely to say that you always use overparameterized model, but generally, overparameterized models are better than-- more parameters are always generally better, or almost always better. So more parameters generally helps. And it can help even to the extent that when your parameters are more than the number of data points, right? So it even helps when d is larger than n, this still helps. And it even helps when you have already zero training error. So even after you already have zero training error. So this is a plot that I got from some paper. This is from a paper by Neyshabur, Tomioka, and Srebro in 2015. So this is what they've found. Of course, this is only a very small data set, but roughly speaking, the same phenomenon also holds for a larger data set. And you can see here that the black curve is the training error, and the x-axis is how many hidden units, or how large network is. Hidden units means the number of neurons in your network. Which if you have more hidden neurons, you have more parameters. And actually, the number of parameters is quadratic in the hidden neurons in this fully connected case. This is a very simple fully connected network, MNIST, and you can see that after you have more than 64 hidden neurons, you can fit MNIST perfectly. 0% error-- I think literally zero. Maybe not exactly literally, maybe 0.01% error or something like that. And if you look at a typical textbook, right, so what you would do is that you would predict that the test error will go up after a certain point because you are overfitting. You are using too complex of a model, and you are overfitting to the data. 
That's the purple thing which you would probably read from some of the classical textbooks. And actually, it does happen in some classical settings, but does not happen often in neural networks, or probably never happens in neural networks. And what really happens is the right one. The generalization error actually continue to improve as you have more and more neurons even though you already memorized everything, right? So if you compare 64 with this 4k, basically these are just two networks. Both of them fit the training data with 100% accuracy, but one of them has better test accuracy than the other. So this is kind of a big mystery from a theoretical point of view, especially if you believe in the classical trade-off between bias and variance or the trade-off between expressivity and generalization theory-- generalization power. So this is the big open question, right? And briefly, let me discuss again what's the impact on each of these concepts? And actually, you even have to really think about some of these concepts. Like some of these concepts become entangled or intertwined now in deep learning. So first of all, for approximation theory, I think things don't change that much at least compared to other parts. So for approximation theory, I think generally, I guess, you know that large models are expressive. And there's actually something called universal approximation theorem. I'm not sure if you heard of it or not. In some sense, this is saying that if you have a network that is wide enough, then you can approximate any functions. Of course, that's, in some sense, a misleading way to say it because what does it mean by large enough, right? So if you need exponential number of neurons, that's, indeed, very large. That's large enough, but that's not really implementable. So empirically, you don't even need that many neurons to be expressive. I think you just need polynomial number of neurons. But anyway, so the gist is we do believe, regardless whether this universal approximation theory is exactly answering the question, at least we believe that the neural networks are very powerful. So we generally believe that the best model in this family, especially if you use a wide enough network, this is generally small. This is what we generally believe. And at least what you can show is that you can say this is really small, the minimizer of the training loss. Because if you have a neural network with more than n neurons, so this is just because with more than n neurons, n is the number of examples, you can provably memorize all the training examples. At least you can find one network that memorize all the training examples. That network may not generalize, but this already means that your minimal training loss is very small. It's probably zero. OK, so basically, for approximation theory, I think we generally believe that the models are very expressive. And then that becomes the generalization part, which becomes quite complicated. So there's another information about what practical network does is that in practice, also people don't use very strong regularization. Only weak regularizations are used. And this is kind of like a somewhat important thing to say just because recall that even in a classical setting, right, it's not always that you can show-- sometimes you can have a-- so even in a classical setting, you can have the setting there where you have a lot of parameters, but you have a strong regularization to compensate. So that's allowed in a classical setting, right? 
For example, if you use sparse linear regression, where you have a lot of features and the dimensionality is very high, but you regularize the sparsity of your linear model, then you-- wait, speaking of sparse linear models, I think I forgot to do something that we left last time about the comparison between linear models. But anyway, my bad. I think I should have it. But anyway, let's continue with this. So what I was saying is that even in the classical case, you are allowed to have d bigger than n. The dimension can be bigger than n as long as you use regularization, right? Because if you use regularization, you implicitly restrict the complexity. For example, if you say that the sparsity of your model is s, and s is less than n, then that's OK. However, in deep learning, in practice we only use very weak regularization, right? Typically just some L2. And sometimes even without L2 it can work pretty well. And also, the regularization strength is relatively small--small enough that you can still fit your training data with basically 100% accuracy. And another way to see the weakness of the regularization is to consider the following fact. So this regularized loss, if you, for example, just regularize with something like L2 with some lambda, the regularized loss doesn't have a unique global minimizer. Or at least, it has very different approximate global minimizers, right? Maybe if you really care about numerical precision, if you care about very, very small precision, then maybe there's a unique global minimizer. But for practical purposes, there are many different global minimizers that are very similar in terms of the training accuracy and in terms of the regularized loss. They all have very small regularized loss: they have very small loss from the regularizer part, they also have very small loss from the training error part, and they are different global minimizers. And another thing is that it's also not true that all of these global minimizers perform the same on the test set. I guess probably it's easier to just have a figure here. I think I did prepare a figure. So let's see. I think this is an experiment I did a few years back. There are many different plots like this you can find online in different papers; this is just one of them. I actually tweaked it a little bit to exaggerate the differences, but the gist is always the same. So what is this? This is CIFAR-10, and you have two algorithms, the red one and the blue one. And I'm plotting the training and the test error. And these two algorithms only differ by the learning rate. They have the same training objective. They have the same regularization strengths. It's just that the optimizers are different. So at the end of the day, you see that both of these two algorithms found some global minimizer, or approximate global minimizer. You can see the training error is close to 0 in both of the two cases, right? So both of these are global minima in some sense, or at least approximate global minima up to a very good approximation. But you can see that their test errors are very different. So that means that these are two different global minima for sure, right, in the parameter space. And also, they perform very differently on the test set. So that's kind of the mystery, right, because this kind of refutes the possibility of having a theorem like in the classical case, right?
So recall that in the classical case, typically you have a theorem like this, saying something like: if you find a global minimizer--any global minimizer--of the regularized loss, then you can generalize; you can bound the generalization error. And this is no longer the case, because not all the global minimizers are the same. Some of them are better, some of them are worse, and you probably shouldn't have the same bound for all of them. And some of them probably just don't generalize at all, right? So this is saying that you cannot just say any global minimizer generalizes. You have to somehow distinguish the different global minimizers found by different algorithms. But what happens here, right? What happens is that the optimization starts to come into play. And this is the reason. So basically, as I alluded to, different optimizers find different global minima. And some of them are better, and some of them are worse. So that is saying that optimization is not only about finding any minimizer, any global min. If you just say you find a global min, that's not enough. You have to use optimization to find the right global min. So in some sense, the optimization has two jobs. One is that it has to find something that has small error, or small regularized loss, and the other job is that it also has to find something that generalizes. It has to find a global minimum that can generalize. So in some sense, the picture in my mind is like this. I'm using a one-dimensional thing, right? This dimension is the parameter. And basically, I'm envisioning this kind of toy case where the landscapes of the training loss and test loss look like this, right? So the training loss has two global minima. One of them is a good global minimum, and the other one is a bad one--bad in the sense that the corresponding test error is bad. And the optimization algorithm is not only responsible for finding an arbitrary global min; it actually has to find the right global min instead of the bad global min. So somehow, the optimization algorithm is doing something beyond what it's supposed to do, right? So I guess in some sense, this is a one-dimensional case. If you think about the high-dimensional case, this is something I often use in my slides: it's kind of like you are going to a ski resort. And the first time I came to America, I didn't realize that you can have multiple valleys, or multiple parking lots, in the same ski resort. So when I want to go back home, I do gradient descent, right? I just ski down to an arbitrary parking lot, and I find that my car is not there. And then it's actually a real problem, because the resort is closed and the lifts cannot take you back up. So it's actually pretty annoying. And then I realized that actually there are multiple global minima, and one of them is better than the others. And you have to find it, so it's not arbitrary. Gradient descent has to do something more than just arbitrary downhill skiing, right? [INAUDIBLE] is it the fact that [INAUDIBLE] Right. So why does the generalization-- so the question is exactly where, mathematically, the generalization theory breaks down. I think the bounds become vacuous--basically, the bounds you can prove become vacuous. The bounds you can prove under the existing language become vacuous.
So basically, if you want to prove a bound that works for all networks of size 10 million, or of size 100 million, with only 1 million examples--if that's the language you are using, then it wouldn't work anymore. So you have to have a more precise way to think about it. Does that answer the question to some extent? [INAUDIBLE] what if you incorporated the fact that [INAUDIBLE] Right. Roughly speaking, that's the approach we're going to take. But there's one problem with this. If you do exactly what you said, there's a problem, which is you're going to get the same bound for any algorithm, right? But empirically, different algorithms have different performance. And the way to fix it is to first say that different algorithms find models with different complexity, and then you can have different bounds for them. So the algorithm has to come into play in some way, right? So basically, that's kind of the conclusion here. The algorithm has to come into play in your statistical analysis, right? Because if you don't have the algorithm there, you are not going to distinguish these different algorithms. So in some sense, you entangle the statistics with the optimization to some extent. And so basically, the way to fix it--at least the current plan, the general agenda that I think most researchers seem to agree on--is that you analyze the optimization and analyze why the optimizer finds a good local minimum. So basically, you need to have a theory that says something like: the optimizer finds a theta hat such that, one, this theta hat is an approximate global min of the empirical risk, and also, two, theta hat has some special property that you didn't explicitly ask for. For example, the property could be low complexity. So maybe, just to give you an extreme case, you run an algorithm without any regularization, but then you can say that even though I didn't regularize, the theta hat I found actually has low L2 norm, or even the minimum L2 norm. Actually, you can prove these kinds of theorems in certain cases. And then, because of this special property, this implies that it can generalize, right? And people have proved theorems of this form in many different cases. So for example, you can talk about SGD, right? So SGD probably has some special preferences in terms of what models it wants to find, and maybe SGD with different kinds of specifications, right? So you can have large learning rates, small batches, and so forth. I'll talk about that in a moment. But generally, we want to say that the practical optimizers people are using can have some preference for certain types of global minimizers. And then after you have this--as I said, after you have the special property, then you can use the-- so this part, from the special property, the low complexity, to generalization, this could be more classical. This could be classical theory, or maybe an improvement of classical theory, depending on what complexity measure you are talking about, as you suggested. So that's the current agenda for extending statistical theory to deep learning. Of course, there are other kinds of approaches, but I think people have almost reached a consensus on this high-level approach. And what are the best results?
Let me have a brief summary of what are the best results people know, roughly speaking, in each of this aspect. So basically, first of all, for-- so let me just make this a bit more formal. So I guess a little more formally. So basically, you have probably three tasks in my language. So first, you prove that-- I guess I'm repeating myself a little bit in some sense. So you prove that the optimizer converges to approximate local or global min of L hat theta. And then in the second task, you also have to prove that in addition to one, the theta hat also has low complexity. For example, something R theta hat is less than C for some complex dimension R. And this R depends on the algorithm, depends on even the details in the algorithm like learning rate, batch size, so and so forth. And then task three, you say that for every theta such that R theta is less than C, and maybe L hat theta is close to 0-- so for every theta with low complexity and small training error, we have the test error L theta is also small. So that's kind of like the general idea. And what people have done in this kind of area, so regarding the task one, which is the optimization question, task one is optimization. So I think maybe if you want to associate some keyword to this, people would call the first question optimization, and the second question people often call it as implicit regularization, in fact. Yeah, probably I should explain this because this is implicit because you never told the algorithm to minimize this complexity. It's implicit in the optimization procedure. And it's a regularization effect because you get some low complexity solution. And the third one, this is probably more or less the classical optimization bound. And for task one, I think what happens is that if you don't have regularization, so I guess-- sorry, so for task one, I think for the optimization question, so one of research, consider the case where you don't have overparameterization. This is overparameterization. Without overparameterization in some special case, you can still prove this in some special case. For example, matrix factorization problem, maybe linearized network, or maybe something like task optimization, you can show that gradient descent or SGD can converge to global min. So here, linearized network means that you don't have any activations. Basically, optimization's linear, so you just stack a bunch of linear models, which doesn't really have any-- doesn't really do anything from a statistical point of view. It's just purely for-- you only analyze that as an exercise for your technique in substance. But you can still publish papers in it just because everything about optimization is very complicated. Even analyzing linearized network is difficult. So one of the thing that people have done. But you can see that this doesn't really address all the issues, right? Because you don't allow overparameterization, and it only works for linearized network or matrix factorization problem, which is completion, so and so forth. And recently, in the last three or four years, I think, you can also do this optimization question for neural networks-- for any neural networks-- for almost any neural networks, deep, shallow, so on, so forth, but with the caveat for special hyperparameters. So special hyperparameters means something like maybe-- so first of all, you need overparameterization. That's actually probably good because anyway, empirically people use overparameterization. 
But the limitation is that you also need a special learning rate, or special initializations and learning rates, so on, so forth. And that becomes a problem. By the way, this is typically called the NTK approach, neural tangent kernel, which I'm going to talk about more in future lectures and explain why it's called the neural tangent kernel. So this is the so-called NTK approach. And the problem with this approach is that the special initialization is a problem, and also the special learning rate, or special algorithm. You also need something about the batch size: for example, in most of the papers, the batch has to be very big. You can only analyze gradient descent; you cannot have stochastic gradient descent. So these are the restrictions on the hyperparameters. At the beginning we thought, OK, that's not a big problem. We handle these hyperparameters, and then the next day we probably extend to other hyperparameters. But it turns out that there is some serious limitation in the hyperparameters. Because as I motivated before--in the figure we saw, this is a real experiment--even if you change the learning rate schedule, you change the performance of your model. So if you analyze a special learning rate schedule and a special initialization, then maybe you are not actually analyzing anything impressive. So for example, in this NTK case, the algorithm you can analyze wouldn't give you the best performance that deep learning offers. You probably get something like 80% on CIFAR, but the best algorithm probably gets like 95%. Of course, there are improvements along this line, but generally the issue is that you make the hyperparameters so special that you lose the correct implicit regularization effect of the optimizers. And you are analyzing an optimizer that doesn't have the correct implicit regularization effect, so it doesn't generalize as well as the real deep learning algorithms. But still, I'm going to talk about this because it is a very nice idea and in certain cases is pretty useful. And then for the implicit regularization question, the question about why the optimizer prefers certain kinds of low complexity models, people have had a lot of results on special cases. So special models--and actually, maybe I should call them simplified models--I don't know why, somebody took my yoga mat, yoga brick for some reason, and I have to use the book. Anyway. So special or simplified models, and also special optimizers. But here, the "special" is special in the right way, so you're analyzing the effect of the optimizer; you focus on one aspect in each paper, in some sense. So what are the models that people have analyzed? For example, linear regression--here you can say that certain initializations prefer certain kinds of models. And you can also talk about logistic regression, and here we will see that you can prove something like: even though the model just tries to minimize the logistic loss, it actually tries to find the max-margin solution. And also matrix sensing or matrix factorization problems with a linear neural network. So you can talk about those, and there are also special aspects of the optimizers. And sometimes there has to be a combination of the problem and the optimizer, because certain optimizers wouldn't have implicit regularization for certain problems.
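To make the linear regression case concrete before going through the list of optimizer effects, here is a minimal numerical sketch of the standard fact alluded to above: on an overparameterized least-squares problem, gradient descent started from zero converges to the minimum-L2-norm interpolating solution, even though no norm penalty appears in the objective. The sizes and step counts below are just illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                      # overparameterized: d > n, so many interpolating solutions exist
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

theta = np.zeros(d)                 # zero initialization is what pins down the implicit bias here
lr = 0.01
for _ in range(5000):
    # plain gradient descent on the *unregularized* loss (1/n) * ||X theta - y||^2
    theta -= lr * (2.0 / n) * X.T @ (X @ theta - y)

theta_min_norm = np.linalg.pinv(X) @ y   # the minimum-norm interpolant, for comparison

print("max residual:", np.max(np.abs(X @ theta - y)))                        # ~0: zero training error
print("distance to min-norm solution:", np.linalg.norm(theta - theta_min_norm))  # ~0
```

So the "special property" here is low L2 norm, and it comes purely from the choice of algorithm and initialization, which is exactly the kind of statement Task 2 asks for.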
So you can talk about GD, you can talk about SGD. And for SGD, there is actually also the question of the noise covariance--what covariance will give you the right implicit regularization--and also the noise scale, which also matters. And you can also talk about dropout: this is something you do in your optimizer which will change the implicit bias. And you can also talk about the learning rate, which is actually important, and the batch size, so on so forth. And there are also unsolved open questions for things like momentum and normalization. All of these have some implicit regularization effect. So that's why this becomes complicated, right? Everything you do in your optimizer, everything you change, can have an implicit regularization effect. Sometimes it's positive, sometimes it's negative. Of course, most of the tricks that we have seen have a positive effect, because that's why they survive and get published, right? So that's the statistical part, I guess. And I'm also going to mention a more general result that me and some collaborators have. So you can also try to have a more general result which says something like: SGD on L hat of theta is roughly equivalent to doing gradient descent on L hat of theta plus lambda R of theta, for some regularizer R. This is a much simplified, high-level version of a result that we can show. But of course, there are limitations. These kinds of more general results have weaknesses in other aspects: for example, you may have additional assumptions, or you can only deal with certain kinds of stochasticity, so on, so forth. But I think from this result you can see the kind of thing we are trying to do. If you add stochasticity, then automatically, implicitly, you get a regularizer for free. Even though you are using stochastic gradient descent on the original training loss, somehow you get a regularizer for free somewhere. OK, so I think basically we are going to talk about many of these in the next few lectures, in future lectures. And for task 3, for the generalization bound, this is also an interesting open question for deep learning. Because you also want to have precise generalization bounds that are compatible with the regularizer you got from the previous part, right? So we have said that the optimizer has a preference, but does that preference lead to better generalization? That's another open question, right? So for example, one of the papers in 2017 proved that if you use this as the complexity measure, where A_i is the weight matrix of the i-th layer, then you can guarantee a generalization bound. That's one of the early results along this line. But the problem is that this is not precise enough, right? This is still too big to be meaningful, in some sense. So you sometimes need more precise bounds. For example, if you can guarantee that-- I will talk about the limitations when I really get to this, but this is still not precise enough. And you sometimes need a more fine-grained complexity measure that is more compatible. And also, ideally you want something that is a result of the optimizer, right? So you want this regularizer here to be the same regularizer as what you had in the implicit regularization effect part. So that's the third part.
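Pulling the three tasks stated earlier into one place (this is just a transcription of the verbal statements into display form):

```latex
\textbf{Task 1 (optimization):}\;
  \text{the optimizer finds } \hat\theta \text{ with } \hat L(\hat\theta) \approx \min_\theta \hat L(\theta). \\[4pt]
\textbf{Task 2 (implicit regularization):}\;
  \text{in addition, } R(\hat\theta) \le C \text{ for some complexity measure } R
  \text{ that depends on the algorithm and its hyperparameters.} \\[4pt]
\textbf{Task 3 (generalization):}\;
  \text{for every } \theta \text{ with } R(\theta) \le C \text{ and } \hat L(\theta) \approx 0,
  \text{ the population loss } L(\theta) \text{ is small.}
```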
Yeah, I think that's basically a high level overview of some of the lectures we're going to talk about-- some of the lectures in the next few weeks. And of course, there are other open questions in deep learning, as well. For example, what's the role of the parameterization? So in these tasks, I didn't mention any of those, so on, so forth. But for those kind of things, I don't think there's a systematic study yet, so that's why we don't talk about them much for now. And I think for the immediate plan, I'm going to talk about task three here first because we are in this mode of proving generalization bound. We have talked about Rademacher complexity, and all of this depends on the Rademacher complexity. And I'm going to talk about that first, and then I'm going to move on to the other parts. Any questions so far? [INAUDIBLE] Sorry, I didn't hear the question. [INAUDIBLE] Yeah, I got the question. So the question is whether any of these results or tasks depends on the data distribution? Yes, they all depend on data distribution, I think. So all of them assume some of data distribution underlying. So some of them require something stronger, some of them just require some regularized connection, but I don't think you can go away without any data distribution assumption. And some of them have very strong data distribution assumptions, to be fair. And that's actually, in some sense, in my opinion, that's one of the technical challenge here. It's kind of like a subtle balance. If you assume too much about data, then you lose the realisticness. But if you assume something too strong, then-- sorry, but if you assume too less about data, then you have some hardness results. So certainly without any data assumption, you probably shouldn't be able to prove almost any results here just because things become simply hard, especially if you talk about computational procedure, it's very easy to get into NP hard instance. So we need some data distribution assumption. And another even more complex question is that, how do you leverage data distribution assumption? Like we don't have a lot of tools. So for example, if you assume it's Gaussian, then what you know? You know something about what's the moment, so on, so forth, right? You can do some certain kind of derivations. But I don't feel like we used even the property of a Gaussian enough in some sense. And let alone other kind of data distribution assumption, we don't have a lot of good tools to use them. Cool. So if there's no any other questions, I'm going to move on to the generalization bond for neural networks. And you can see that this is still roughly in the kind of mindset of the classical setting. The only difference is that we are looking for proper complex measures, not only a dimension dependency, but something sometimes more complicated. And you will see that this part is really a direct extension of what we have done in the last three weeks, because the tools are shared and it's really just that you need better tools. All right, so now let's talk about the particular setup that we can do. So we're going to start with two layers-- two neural networks. And then in the next few lectures, we're going to move on to multiple layers. And for two layers let's use the following notation. So let's say your parameter theta contains of two parts. One part is w, and the other part is u. So w is the second layer, and u is the first layer. 
So basically, on the network of theta x will be something like w transpose phi of ux where u is a matrix that maps dimension d to dimension m where m is the number of neurons. So basically, ux will be m dimensional. So this will be m dimensional, and you apply an element wise ReLU function. So phi is element wise ReLU function. So phi of a vector z1 up to zn is equal to, basically you apply the elementwise. You get max z1 0 to max zm0. So after you apply phi, you get an additional vector. and you inner product with w, you get a single scalar. So we have a model that outputs a single scalar using these two layers, u and w. And again, we still call xi yi, the training data set, as usual. OK. So our goal is that first to show a Rademacher complexity bound, and then we also talk about how this RC bound is relevant to practice. And I think for the day we probably wouldn't even be able to finish number one because I'm going to have actually two bounds. One is better than the other. So here is a theorem for a Rademacher complexity bound. So the theorem is that-- so suppose you have a hypothesis class that consists of models look like this, parameterized by theta, where you require that number of w is less than Bw and norm of ui is bound by Bu. I guess I didn't define ui, so this is-- let me say this. So u is this matrix of m by d matrix, and let's say each of the rows is u1 transpose up to um transpose. So each ui is of dimension d, and so that's why u times x is really in the product of this ui with x. That's the notation I'm going to use. OK? So basically, ui's are rows of the weight matrix. So we restricted the w, and the norm of w, and norm of ui to something like Bw and Bu, and then we also assume something about-- the data has expected 2 norm square less than C. I guess actually this is probably C square. Have a typo here. And then under all of these assumptions, you can prove the Rademacher complexity bound Rn of H is less than 2 times Bw Bu times C times square root m over square root n. So I guess just a remark is that this is not ideal bound, not a good bound, because m shows up in the bound. And actually, it shows up in a wrong way because it says that if you have more neurons you have a worse bound. So the m shows up in the more classical kind of sense where you have more neurons, you have more complex models, then it's not great. So basically, you cannot use this theorem to explain the size of deep learning or the overparameterized model because this is saying overparameterized model we'll have bigger Rademacher complexity. But you want a bound that is better when m goes to infinity in some sense to explain the plot that I kind of showed here. So as m goes to infinity, you want a better and better bound, in some sense. But this one gives you a worse and worse bound. But it nevertheless lets you prove this because this is kind of like a warm up for what we'll show next. [INAUDIBLE] I see. So maybe let me rephrase the question first, make sure. So the question, if my understanding is correct, your question is that why you're expecting this right one is going to 0 is decreasing forever, right, instead of really going up after a certain point, right? It's just we don't have enough data points, right? Like we didn't run that very super large scale experiment. I think the answer is that we do think this is already large enough for us to kind of believe that it will never go up like this because 4k neurons for this task is really, really a lot. Like 64 already allowed you to memorize. 
Typically, you wouldn't even run so many. You probably just-- maybe it would be easier to convince you if I show you 4 up to 108. And you will see something like this, and then you ask me the question. I will show you 108 up to 4k. You probably would be more convinced. Yeah, but 4k is already pretty large, I think. But of course, you can never rule out the possibility that after maybe a million neurons it goes up. It just sounds unlikely. [INAUDIBLE] So I guess I think the intention of the question was that whether this bound really is growing as m goes to infinity, right? So because both Bw and Bu could depend on m, and maybe they depend on m in different ways. Maybe Bw increase as m goes to infinity, and Bu probably decrease as m goes to infinity. So that's definitely a possibility, right? So I think the thing here is I'm choosing the scaling so that it's at least arguably fine to think of Bw and Bu to be constant. So why? The reason is that this is probably a little vague. So the ui is the contribution of each component, right? But w is the contribution of all the components. So in some sense, you are saying that the top layer, you control the contribution from all the components. And you want to say that that's the constant. You don't want that to grow as m goes to infinity because-- so basically, maybe one way to think about this is the following. So if you think about the scale, the scale over here does make some sense. Because ui is on the order of, let's say, a constant, and ui transposed is at least a constant that doesn't depend on m, right? So ui doesn't depend on m, and ui transpose x doesn't depend on m. Here, I'm writing this a little bit-- so here, theta could probably have some dependency on B. We only have a dependency on m, let's say So ui's are out of constant, and then you have sum of wi phi of ui transpose x, right? So each of this term is on order of constant, and your wi, the total contribution is constant. So that's why the total thing you can somewhat believe that this is on the order of constant. Because it's not like each of the wi's on order of constant. It's that the sum of the squares of them is on order of constant. So in some sense, you can believe that this whole thing is on order of constant, especially if you have-- I guess it depends on how you think about this. So if you replace wi by 1 over square root m, this is actually-- I guess depending on how you approximate this, roughly speaking, if you use Cauchy-Schwarz you're going to approximate by something like sum of wi squared, or 1/2 times sum of ui transpose x squared. Uh-huh. And this is something-- let me see. Why this is on order of constant? Maybe we're actually even more generous than that. So the L2 norm of w is a constant, but I think you can still make this bigger if all of them are correlated, right? OK, so I think this is-- I'm pretty sure the answer-- I should have answered, but I'm not-- I don't see I have a convincing answer right now. So maybe we can discuss offline for a few minutes. Yeah. But I think the scaling is chosen to be at least reasonably correct. Of course, you can still argue certain-- there's always a-- for example, depending on how w correlates with the squares, there's always some kind of flexibility. But I think the scaling is relatively OK. Anyway, but it's is a very good question because you can have misleading results if you are not very careful about the scale. OK. So let's see. OK. So maybe-- I have 15 minutes. I think I can prove the theorem in 15 minutes. 
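Before the proof, here is the theorem stated above written out in display form; this is just a transcription of the verbal statement, with the notation used in the lecture.

```latex
\mathcal{H} \;=\; \Big\{\, f_\theta(x) = w^\top \phi(Ux) \;:\; \|w\|_2 \le B_w,\ \|u_i\|_2 \le B_u \text{ for all } i \,\Big\},
\qquad U \in \mathbb{R}^{m \times d} \text{ with rows } u_1^\top,\dots,u_m^\top,\ \ \phi = \text{elementwise ReLU}. \\[4pt]
\text{If } \mathbb{E}\big[\|x\|_2^2\big] \le C^2, \text{ then }\quad
R_n(\mathcal{H}) \;\le\; \frac{2\, B_w B_u C \sqrt{m}}{\sqrt{n}} .
```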
So what we do is that we use the definition of the Rademacher complexity and gradually peel off the sup, like we did before, right? We have a sup, and we have to somehow get rid of it. [INAUDIBLE] But it differs by a square root of m, so that's why I got-- I was thinking of using that argument, but I think it's not going to be right. Because for the sum of w_i phi of u_i transpose x, I think the most pessimistic bound would be something like: this is less than, if you replace each of the w_i by 1 over square root of m, and each of these terms is a constant, you're going to have the sum of these. And this will be square root of m. So in some sense, this is not helping me justify this scaling. Right. But if you believe that there's some correlation--suppose you believe there's a correlation here--then this would be something on the order of 1. So basically, if you believe there is a correlation, then it's a reasonable scaling. Or in other words, suppose you want to make the scaling even smaller, right? Suppose you want to say that B_w is even smaller than the scaling I give, or B_u is even smaller; then you have to assume there's a strong correlation in your model. Otherwise, your model wouldn't even [INAUDIBLE] with something on the order of 1. So whether you are willing to do that--for example, suppose I tell you that this is actually what happens in experiments, right? Then I would have to convince you that I can choose B_w to be on the order of maybe 1, and the u_i to be on the order of 1 over square root of m. And then the bound, indeed, would not grow as m goes to infinity, but you will find that the sum of w_i phi of u_i transpose x is very difficult to make big. You have to match up everything to make it big enough to fit the label. So would you be willing to do that? I think you can arguably say that's not really realistic. OK, cool. So I guess let's prove this. So for the proof, as I said, we're going to try to remove the sup in our definition of the Rademacher complexity step by step. So first of all, let's define v to be the post-activation intermediate layer, v equals phi of Ux, which is an m-dimensional vector, and correspondingly define v_i to be the corresponding activation for the i-th example. This is m-dimensional. And then using this notation, the empirical Rademacher complexity is an expectation, where the randomness is from sigma, and you take a sup; you have the sum of sigma_i times f theta of x_i. But f theta of x_i I'm going to rewrite as w transpose v_i, just because that's the notation. So let me just replace it here: w transpose v_i--this is f theta of x_i. And here we take the sup over two things, over both w and u, and the dependency on u is hidden in v_i. And let's clean this up to put the 1 over n in front, and you have sup over u and sup over w of w transpose times 1 over n times the sum of sigma_i v_i. I guess this probably looks familiar, because we did something like this in the linear case as well. And then you can get rid of the w, but you still have the u. So you sup over u, and you get rid of the w: w has an L2 norm bound, it's less than B_w, so the sup over w of this is equal to B_w times the L2 norm of the sum of sigma_i v_i. So now we've got rid of the w; we can put B_w in front. And now let's deal with the u, and the u is something like-- I think I shouldn't have the 1 over n here anymore. My bad.
So now this is a sum over i from 1 to n. And as we write this, let's plug back in the definition of v_i, which is phi of U x_i. And what I'm going to do--if you are familiar with this, you can see that it is a very loose way to do this--is replace the 2-norm by the infinity norm. So I'm going to say that this is less than square root of m times the infinity norm of this. This is just because a vector's 2-norm is less than square root of m times its infinity norm, if the vector is m-dimensional, OK? And the reason why I want to replace it by the infinity norm can be seen now: somehow with the infinity norm I can simplify the sup. So now I have a sup, and note what this vector is--maybe let's do something here. So this vector is the sum of a bunch of vectors, right? The infinity norm is a max over the coordinates of this vector. So basically, each of these coordinates is literally something like the sum over i of sigma_i phi of u_j transpose x_i. This is the j-th coordinate of this vector. So basically, I can take the sup over j, and the sup over u, of the absolute value of the sum of sigma_i phi of u_j transpose x_i. And I can write u_j here because if I take the sup over j, the j-th coordinate actually only depends on u_j. And that's actually kind of the main reason why we want to use this infinity norm: once you write this, you find that all the j's are equivalent, right? Anyway you are taking the sup, so it doesn't matter whether it's u_j, u_1, u_2; the sup is the same. So this is equal to-- [INAUDIBLE] Is this an equality? Oh sorry, this is an inequality. So sup over a single vector u. You replace u_j by u, and you say that this needs to be less than B_u, because each u_j used to have a bound B_u. Let's just drop the subscript for simplicity. And then you can write this as the sum over i of sigma_i phi of u transpose x_i. In some sense, you remove the m dependency, because for the infinity norm the number of coordinates m doesn't matter. And now there is one step where I'm going to remove the absolute value. Because if you don't have the absolute value, it's kind of like-- let's first remove it. By removing it, we will pay a factor of 2, and this requires something that is not exactly trivial, but I will not prove it in the interest of time. So you can remove this absolute value. The reason--it's in the lecture notes--is actually, fundamentally, pretty simple. It's basically because the sup is mostly positive, like almost always positive, because you can choose u to make this quantity positive. With or without the absolute value, it doesn't really matter, at least in this case, because you are taking the sup, right? So anyway, it's going to be positive, because you can choose u to make this quantity positive. So this is what I-- I will ask you to refer to the lecture notes for the formal proof. And then, after removing the absolute value, you can see that this is something like the Rademacher complexity of something simple. Because you can view this as your function now, and this is the Rademacher complexity of this kind of function. But still, you have phi and u, right? So that's why we are going to use the Lipschitz composition. So this will be less than 2--you copy all the constants--and this is by the Lipschitz composition, or the Talagrand lemma. So I think in some sense, you can define something like, I guess, H prime to be the family of u transpose x.
And then you can also look at phi composed with H prime. So the Rademacher complexity of phi composed with H prime is going to be equal to this quantity, right? And this is less than the Lipschitzness of phi, which is 1 for ReLU, times the Rademacher complexity of H prime, which is this quantity. So that's how we do it. And now it becomes linear: u transpose xi is a linear function class, and I think we have done this before. So for an L2-norm-constrained linear class, you can get something like: this is less than 2 square root of m times Bu over n times-- sorry, this is Bw times Bu times the square root of the sum of the xi 2-norms squared. This is just by what we had for the linear model. For the last inequality, you didn't put [INAUDIBLE]. Oh, sure. Yeah, sorry, my bad. Where does the 2 come from? So the 2 comes from here, this line. And this is something I didn't explain-- it's from when you remove the absolute value. So the question is how you could get them to be exactly the same without losing a 2? I suspect it's possible, but I'm not 100% sure. The proof in the lecture notes does lose the 2, but it sounds plausible that you can save that factor of 2, because the intuition I had doesn't really tell you why you should lose anything. My intuition is that this quantity is just always positive, so the absolute value doesn't matter, and that intuition didn't tell you why you should lose the 2. Maybe it can be improved-- at least the proof I know, which I read from a book, loses the 2, so maybe it's because it's not doing exactly the right thing. OK. And then the very last step: you can take the expectation of the empirical Rademacher complexity-- this is the expectation over S-- and then you just get what we did before. The expectation of this is less than C times square root of n, so you get that this is bounded by 2 square root of m times Bw Bu times C over square root of n. That's because you use Cauchy-Schwarz for this part. This is exactly the same as what we have done for linear models. OK, so I guess this is a natural stopping point, and next time we're going to have a bound that somewhat improves on this, so that you don't have the explicit dependency on m. Any questions? OK, I guess I'll see you on Wednesday. Sounds good.
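As a quick numerical aside (my own sketch, not from the lecture; the Gaussian data and the norm bounds are made up for the demo), plugging data into the bound just derived shows the explicit square root of m growth, and how that growth disappears if Bu is taken on the order of 1 over square root of m. The last two columns also illustrate the cancellation point from the earlier discussion, comparing an aligned choice of w against independent random signs.

import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.normal(size=(n, d)) / np.sqrt(d)      # n examples with ||x_i||_2 roughly 1

def two_layer_bound(X, B_w, B_u, m):
    # 2*sqrt(m)*B_w*B_u*sqrt(sum_i ||x_i||^2) / n, the bound derived above
    return 2 * np.sqrt(m) * B_w * B_u * np.sqrt((X ** 2).sum()) / X.shape[0]

x = X[0]
for m in [100, 1000, 10000]:
    U = rng.normal(size=(m, d))
    v = np.maximum(U @ x, 0.0)                           # phi(u_j^T x), each of order 1
    aligned = np.sum(np.abs(v)) / np.sqrt(m)             # w_j = sign(v_j)/sqrt(m): grows like sqrt(m)
    random_signs = rng.choice([-1.0, 1.0], size=m) @ v / np.sqrt(m)   # cancellation: order 1
    print(m,
          round(two_layer_bound(X, 1.0, 1.0, m), 3),                 # fixed B_u: grows with m
          round(two_layer_bound(X, 1.0, 1.0 / np.sqrt(m), m), 3),    # B_u ~ 1/sqrt(m): constant
          round(aligned, 2), round(random_signs, 2))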
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_11_Alllayer_margin.txt
So last time we talked about generalization bounds. And today we are going to talk about a better generalization bound for deep networks. Recall that last time, what we did was show that the Rademacher complexity is bounded by something like this, times some polynomial of the norms of the weights, right? And we said that this comes from a kind of worst-case bound on the Lipschitzness of the model-- and actually this is the worst-case Lipschitzness with respect to the input, worst case over the entire input space. This is because when we do the covering number, we have to use this Lipschitz composition lemma, and there you have to use the Lipschitzness over the entire set. Sorry, this is a little bit distracting with the light, just because I'm sharing the screen using my laptop so that I can charge my iPad. OK. So we have discussed a few motivations for improving upon this theorem-- I guess we discussed four of them, and I'll just briefly mention them. One of them is that this bound is exponential in depth, which is bad, because typically you have a lot of layers. Another is that this is worst-case Lipschitzness. And another is that typically you expect something like: SGD prefers Lipschitz models, which is good, but it prefers models that are Lipschitz on the empirical data. Because if you think about an algorithm, an algorithm can only do something with the empirical data, right? We'll show this more later in the course, but even at a high level, the algorithm can only prefer something about the empirical data, not about the entire space. And also, we said that for a tighter bound, we are going to want something data-dependent-- something that depends on the Lipschitzness on the empirical data. So concretely, what we're going to do today is show something like: the generalization error of parameter theta is a function of the Lipschitzness of f theta on the empirical data x1 up to xn, and also the norm of theta, and this function is a polynomial, so that there's no exponential dependency. There is no [INAUDIBLE]. So that's the goal of this lecture. And we have to introduce some new machinery to achieve this kind of thing. The reason is that this is a different type of bound than what we have done before, because you can see that on the right-hand side, you have a function of the training data. So typically, on the right-hand side-- for so-called classical uniform convergence; what uniform convergence really means is slightly debatable, because it depends on how you scope it, but at least for what we have discussed in this course-- all the bounds look something like this: for every f in some hypothesis class capital F, the population loss minus the empirical loss is less than something like a complexity measure of capital F over square root of n, something like this, maybe with high probability. Or, alternatively, we can also achieve this kind of thing-- I think we implicitly discussed this: for every f, L of f minus the empirical loss is less than a complexity measure of little f over square root of n. So the first one uses capital F, and the first type is exactly what we got from Rademacher complexity.
Because you just apply Rademacher complexity on it. And this is in some sense the Rademacher complexity. And the second type, you can also get it by doing a little bit things from the first type. So you can get the second type by something like-- I guess this is a remark-- by considering F to be something like all the functions where the complexity of little f is less than capital C. Think of the complexity as focused on the norm of the width. You first define a hypothesis class where the norm of the weights is less than capital C. And then you apply 1 on all, on capital F on this hypothesis class. And then you do a union bound, and then take a union bound on all C, right? So for every capital C, it defines a hypothesis class. And you probably can write it as capital F sub C. So and for this capital F sub C, you can do the standard Rademacher complexity. And then you can say, I'm going to enumerate over all possible capital C and then do another layer of union bound on top of it. We never do this formally. But this is just one parameter you can just discretize whatever you want. So in some sense, this is how you get the type I type II bond. But the thing is that either of this bound, on the right-hand side the bound depends on the empirical data. It's always a property either of the model or of the function class. So the question is, how do you-- if you want to get something like this, like our goal today, you have to do something more-- you have to introduce some new techniques, right? So our goal is to get something like-- I think maybe let's call this-- I think we call this data-dependent bound, generalization bound. This term might be a little bit overused in certain cases. But what I mean here is that you want to have a bound that with high probability for every f, your population loss is less than some maybe complexity of f and the empirical data. So the right-hand side is also a random variable that depends on the empirical data. Of course, you're asking this for high probability anyway, right? So you're asking that for all-- with high probability over the choice of the empirical data, this inequality is true. And this is still useful in the sense that you can regularize the right-hand side. You can add the RHS as a regularizer. So not only this is an explanation in some sense, but also it can be used actively as a regularizer. Because the right-hand side is something you can optimize, right? So this is the goal that we are trying to achieve. So and in some sense, I think I used to have a little argument about why this is actually the right thing to do. It's kind of tricky, because these days, still there is no consensus on what exactly kind of generalization bound you are looking for. I believe that this is one thing that is good to have. But there could be other forms of generalization bounds. In some sense, you can argue that this is the best you can achieve in the sense that you cannot have a stronger one on the right-hand side. Because, for example, you cannot replace this empirical data by population distribution, right? If you replace that, then you can just choose-- suppose you allow the complex measure to depend on a population distribution. Suppose you allow that I can have complexity of f and the population distribution p. Then why not just define this to be lp of f? Sorry. Why not define this to be the population risk? What if you allow this, why not just define it to be something like x from p fx? So the population risk would be a good complex measure. 
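To make that remark precise, here is one standard way the discretization over C is usually carried out (my own sketch of the usual argument; constants are unimportant):

$$
\text{For } j=1,2,\dots \text{ let } C_j = 2^j,\quad \mathcal{F}_{C_j} = \{f : \mathrm{comp}(f) \le C_j\},\quad \delta_j = \delta/2^j .
$$

Apply the type-I bound to each $\mathcal{F}_{C_j}$ with failure probability $\delta_j$ and union bound over $j$. Then with probability at least $1-\delta$, simultaneously for all $j$ and all $f \in \mathcal{F}_{C_j}$,

$$
L(f) \;\le\; \hat L(f) + \frac{\mathrm{comp}(\mathcal{F}_{C_j})}{\sqrt n} + O\!\Big(\sqrt{\tfrac{\log(1/\delta_j)}{n}}\Big).
$$

For a given $f$, choosing the smallest $j$ with $\mathrm{comp}(f)\le 2^j$ (so $C_j \le 2\,\mathrm{comp}(f)$) turns this into a type-II bound that depends only on $\mathrm{comp}(f)$, up to constants and a logarithmic term in $\mathrm{comp}(f)$.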
Then in some sense, you lose the gist here in some sense. It becomes too trivial. And in some sense, that suggests that you are cheating in some sense by allowing the complex measure to depend on p. So in some sense, the fundamental question we are facing about this in the generalization bound is that you don't have access to the population distribution. You want to have an empirical measure for complexity. So that you can use that for regularization. Anyway, but this argument is kind of anyway debatable. So for now, we're just saying that this is one of the reasonable goals, right? So and why doing this is challenging? I think the first thing is that this is challenging because you cannot do the simple reduction as we have done before. So the reduction between I and II, type I and type II bounds doesn't work anymore. And, why? This is because let's give a try. For example, let's define capital F to be all the little f such that the complexity of f x. Suppose you say this is less than c. Suppose you define this, right? This is your hypothesis class. And let's say, suppose we attempt to use-- attempt is that you use f with Rademacher complexity for capital F. What's the issue? Why we cannot do this? The reason is that if your complex nature depends on the data, then your random hypothesis class also depends on data. Before your complex might depend on data, your hypothesis class is just a fixed hypothesis class. So now, it's a hypothesis class that depends on data. So f is also a random variable depending on data. Data means empirical data. And then you can use the Rademacher complexity. The theorem for Rademacher complexity, for why the Rademacher complexity bonds the generalization error, that theorem requires the capital F to be a fixed hypothesis class that is fixed before you draw the random data. So that's the challenge. OK? And how do we address this? So in some sense, the way to-- the high level way to address it is to redefine, or you have to have a refined way to think about uniform convergence. So some refined uniform convergence. This is not going to be exactly what we do eventually. Because what we do eventually will be something very clean and doesn't have any kind of a subtlety. But this is the rough thinking how do you think about it. So maybe let's make assumption. This assumption, suppose the complexity measure is separable in the sense that this complexity of f on the empirical example is of some form like g of fxi, right? It's really some function of f and xi and you take the sum of them. So suppose in this special case, then you can think of-- essentially what we are doing is that we are considering-- then we can consider the augmented loss. So you can define something like l tilde f is equal to something like lf times the indicator that this complexity is less than c. So in some sense, what you are doing here is that you are changing the loss function in some way so that it's easier for you to use the existing bound. So before, for example, let's say, the mental picture I have in mind is something like you have a loss function, which is something like this. Maybe let's say this is the empirical loss. And have some region. And this is the region where you have low complexity. So but this region is a random region. Because the low complexity-- the definition of low complexity depends on data. So this is random. So that's why you cannot use the uniform convergence only on this low complexity region, right? 
So you cannot say that I'm only going to apply my uniform convergence for this region. Even though that's your goal, but you cannot apply the Rademacher complexity theory. So kind of this augmented loss, what is fundamental is doing something like it change the geometry outside the low complex region. So you, for example, you just defined a new loss function to be 0 here. And then, the same thing as it was in the low complex region. So now we have a globally defined loss function. And so, basically, the region that you are taking union bond over, right, the hypothesis class is still the same. But you change the loss function. So if you do this, then you can hope to-- so can hope to apply existing tools on l tilde of f. And l tilde of f is sometimes kind of like a filtering thing that filters the low complexity. But you don't do it in a technical way. Technically, you are just changing the loss function. That's the only thing we do. But the effect of it is the same as you change the hypothesis class. So I think this is the first thing-- this is the first attempt that we have done in all of our paper and we try to address this. And this is actually the fundamental idea in some sense. So you change your loss function so that you can deal with different type of quantities or different regions of the hypothesis. And then later, so this is one of the paper we had in, I think, 2019. And we got some results. And if you exactly do this indicator thing where you change the loss like this, you can already get something. But the results are messy. So then we kind of in some sense think a bit more broadly, right? So in some sense, all this is doing is change the loss function, right? So you are trying to have a surrogate loss. And the surrogate loss, we are not actually unfamiliar with it. Where we have used the surrogate loss in the margin case. But it's just the surrogate loss there is the simplest way. The simplest is surrogate loss. So basically, what I'm going to talk about today in the main part is this so-called order margin, which is a different way of-- it's kind of like a surrogate margin. And once you have this kind of a fake margin, this is kind of, in some sense defining a new loss function for you. And once you have this new loss function, you can do everything in your super clean way. And then you can apply the existing tools in some sense. So this is a sketchy, a vague introduction. I'm not sure whether there are any questions so far? What do you mean by all layering? Oh, sorry. This is a-- yeah. This is the name of the thing we are going to introduce. But we are going to introduce a new margin, which we call it all-layer margin. Yeah. I probably should define that formally. So we [INAUDIBLE]? So basically, the midpoint I'm doing-- I'm saying here is that we are going to define a surrogate loss. And using the surrogate loss-- the point of the surrogate loss is to change our original loss so that you can focus on an important part of the space. And the surrogate loss will be basically boring for this high complexity part. But they are just-- they are not doing anything. They are basically zero one loss in some sense. And so, that's the general intuition. OK. So now let's see how do we do that exactly. So we're going to start with a generalization of margin. So let f-- so this is a classification model. So typically, you just threshold f and get 0, 1. And your margin is just f itself, right? 
So the typical margin, the classic-- the standard margin is just defined-- the standard margin is just equals to y times fx. y is between plus or minus 1. That's what we used before. And now, I'm going to define a so-called generalized margin. We say gf, xy is a generalized margin if satisfies the following two property. So the first property is that gf, xy is 0 if you classify correctly. I think I have a typo here. Let me think. Sorry. I think if you classify wrongly, and this will be larger than 0 if fxy is classified correctly. So let me mark this. This important typo. So and you can see that this is trying to imitate the standard margin. For the standard margin is bigger than 0 if you classify correctly. And otherwise, you say you zero it out. So that's, in external margin, also this is only defined for correct classification. So in some sense, you can extend to incorrect classification just by extending it to 0. And so, and we say that-- and there's another small thing. Which is that we have to define the so-called infinite covering number. So this is defined to be l infinity epsilon f is the-- this is a small technical extension of the l2 covering number. It's not that important in most of the cases. It just makes in some sense-- in some cases, it makes the definition cleaner. And in some cases, it makes the proof a little bit easier. So l infinity covering number is the minimum covering size with respect to the matrix rho, where rho is defined to be this l infinity norm. So basically, you say that-- you look at the entire space of the input of f. And you look at a difference between fx and f prime x, and you take the sup. So basically, this is the f minus f prime infinity node. So given these two, what we will say is that our lemma will be that-- with the, you can have a analogous theory, analogous to the modern theory, where you use this generalized margin and also the infinite covering number. Actually, you can even do it with l2, like the standard covering number. It's just easier to state with the infinite covering number. And also, maybe before doing that, let me also have another remark, which is that this infinite covering number is larger than the standard l2 covering. This is just because this is the more demanding notion. Because you are demanding that f and f prime are closed at every possible input. And before, you are demanding that f and f prime are closed on the empirical data. So this is because the matrix that we used before was the matrix that is smaller then the matrix used in the infinite case. So with this small extension, what we're going to do is that we're going to say, actually, you can have analogous margins here with the generalized margin. So the lemma is that, so suppose gf is a generalized margin. And let this capital G to be the family of gf, where f is ranging over the capital F. And suppose, recall that this is like what we are-- this is in some sense just a slightly more complex version of your model hypothesis, right? If you just use yfx, then this will just be y times fx as the hypothesis class, as the class G. And this is a little more general than that. And suppose for some R, the covering number, the infinite covering number of G is less than R square over epsilon square for epsilon and 0, for any epsilon larger than 0. Suppose you have this 1 over epsilon square decay in the low covering number. Recall that this is one of the regime that is good. So this is actually the worst regime we can tolerate when we do the Rademacher complex theory. 
So and suppose you have this. Then with probability larger than 1 minus delta, delta is the failure probability, which will be hidden in the logarithmic over the randomness training data for every f in capital F that correctly predicts our training example, right? So for margin, we always-- in the margin theory, we always consider functions that can correct it. But if other examples, then you have the 0, 1 error is less than of tilde of 1 over square root of n times 1 over the minimum generalized margin, plus O tilde of 1 over square root. So recall that basically before, what we had was-- oh, there's an R here, sorry. So before what we had was that here you have the standard margin, the minimum margin over the entire data set. And here, R is the complexity of the model hypothesis class, right? And all the other things are the same. Now, the change is that now here you replace it by generalized margin. And R becomes the hypothesis-- the complexity of the hypothesis class of this generalized margin Gf, right? And the complexity is measured slightly differently. We are using the covering number. But actually, you can also use Rademacher complexity here. It's the same. I'm just stating it so that it's easier for the future part. And this bond is actually not very tight. You can actually improve this bond in some ways. But this is the simplest version. And the proof of this is basically, it's just we just basically reuse all what we have done with margin theory. It's just everything seems to just transfer exactly. So just to replace-- in some sense, the proof is just replace the F by G in the margin theory. I will do this step by step. But this is a short version. So technically what you do is still use the ramp loss. Recall that the ramp loss was the loss function that looks like this. This is a gamma, this part is gamma, this part is 1, something like this. And recall that before, after we have this ramp loss, we define this surrogate loss. So we define a surrogate loss l hat gamma theta to be-- before we just applied the model. But now we use the generalized margin. Before here this was just f theta. But now it becomes like G of f theta-- G sub f theta. And we can also define the surrogate population loss, which is just the expectation of the empirical loss. So and before what we did is that we use the Rademacher complexity to control the differences of this true loss function. We said that you take l gamma theta is minus l hat gamma theta, is less than the empirical Rademacher complexity of f. That's what we did before. But now it's the-- sorry. Before we did the empirical Rademacher complexity of l gamma composed with f. And now it's l gamma composed with g because the function class-- the function is different. Plus O to the 1 over square root of 1. [INAUDIBLE] Sorry? [INAUDIBLE] Oh, thank you so much. Yeah, that's a-- thanks. So I would just switch to this. I only have one charger. But yeah. You need another charger? I have one-- No, I think it's the problem is that when you use this, I cannot charge. Oh. Right? Oh, but I can-- it doesn't matter how it-- yeah. It's not a charger, it's the plug, the hole. Yeah. OK, so now it works? OK, good. Thanks. OK, cool. So now we have to use the Rademacher complexity. And then, the Rademacher more complexity is less than the covering number, right? So I guess, maybe let's still do the covering number. So covering number, let's do some preparation. So we assume the infinite covering number. 
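A minimal sketch of the ramp loss and the surrogate empirical loss being used here, assuming a generic generalized-margin function g(f, x, y); the function names and the toy linear example are mine, not from the lecture.

import numpy as np

def ramp_loss(margin, gamma):
    # Ramp loss: 1 if margin <= 0, 0 if margin >= gamma, linear in between.
    return np.clip(1.0 - margin / gamma, 0.0, 1.0)

def surrogate_empirical_loss(g, f, X, Y, gamma):
    # Average ramp loss of the generalized margin g(f, x, y) over the sample.
    margins = np.array([g(f, x, y) for x, y in zip(X, Y)])
    return ramp_loss(margins, gamma).mean()

# Example: the standard margin y*f(x), clipped at zero, as a generalized margin
# of a simple linear model (zero when misclassified, positive when correct).
w = np.array([1.0, -2.0])
f = lambda x: float(w @ x)
g_std = lambda f, x, y: max(0.0, y * f(x))
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = np.array([1, -1, 1])
print(surrogate_empirical_loss(g_std, f, X, Y, gamma=0.5))   # last example is misclassified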
But actually, it's, I guess, let's say, so the covering number, the standard covering number l gamma composed with G, the l2 pn this is less than the standard covering number where you use the-- by removing the l gamma. So l gamma-- so l2 pn. Because this step is using the Lipschitz-ness of l gamma. So it's actually 1 over gamma Lipschitz-ness. So this is using the Lipschitz-ness of the covering number. And now, next you say it is also bounded by the infinitive version. And then, the infinitive version we have an assumption. The assumption was that for every epsilon, this is less than R squared over epsilon square, gamma square. The last type is that assumption. So you can see that actually, even suppose you assume something about this, then is also fine. If you assume something about-- so you don't have to literally use the infinite node. So and then, because this low covering number is less than this, and we have this kind of translation, right, so that if you translate a log covering number to Rademacher complexity, you got Rs l gamma composed with G. It's less than O tilde of R over gamma square root over gamma square root n, right? This is by chaining the [INAUDIBLE] theorem and its consequences. Because we have discussed what kind of covering numbers to implies, what kind of Rademacher complexity, right? So and then, the same thing, I guess, take gamma to be gamma min, which is the min over i, G i f, and yi, right? So and then, this step is not formed. There are some caveat here. Because gamma is a random variable, you have to do union bound eventually. But let me not get into it. I guess we had this issue before as well. But it's only one number you can discretize and do union bound over gamma. But suppose, let's say we just take gamma to be gamma min. And then, L hat gamma min is 0. So then you got L01 theta, then 0 plus O tilde of R over square root n times gamma min, plus O tilde of 1 over square root. OK. So this proof is not 100% formal just because the technical-- I am not allowed to take gamma to be anything that depends on the data. So I have to really show it for every gamma. And that requires another union bound over gamma. OK. Any questions? So maybe, let's see what we have achieved with this lemma. What we achieve with this lemma is that now if you define your-- basically you can put everything in this generalized margin. This generalized margin in some sense is a way to twist your model output. So you can stretch the model output for certain f. And you can squeeze it for certain other f. So in some sense, this is what we actually will do, right? So we, in some sense, stretched the function for those places where-- you see how do it. You stretch the function according to where you are at. And so, basically everything is folded into this generalized margin. And the question is, so the question now is that question. So for what gf you can bound the generalized error-- you can bound the covering number of g, right? And also, you want this gf to be somewhat meaningful, so on and so forth. So and suppose if you just take gf to be the standard one, yfx, then the covering number of this gf will be the same as covering number of f. And it will be-- then the Rademacher complexity will be something like then the covering number depends on the product. So but we are trying to do better than this. OK. So how do we do this? So now we define this so-called all-layer margin. This is a special instance of this gf. 
This is a concrete definition of gf for which we can bond the Rademacher or the covering number complexity. So to define this all-layer margin, this generalized margin, so we have to actually introduce some notations. So we are considering some perturbed model. So I guess, I think-- sorry, one moment. Maybe I think actually it's useful to have some motivations before I defined. I though I'd try this. So our motivation is the following. So if you think about the linear model, and the margin is defined to be-- the margin, the standard margin, so the normalized margin, so normalized margin is defined to be something like y times fx over the norm of maybe setting your model is double transpose x. So your margin is defined to be y times the model output over the 2 norm of w, right? So this is the normalized margin, which is something that governs the generalization performance. And the question is, how do you normalize, right? So if you have deep model, then you can try to normalize by something, right? So if you have a deep model, so one attempt is that you can try to normalize by some quantity. Maybe this could be the product of the Lipschitz-ness or maybe something else. So that's the natural attempt. And in some sense, all the previous work is in some sense doing this. You are normalizing the margin based on the worst case Lipschitz-ness. So and what we are doing is that we don't know-- we don't want to normalize by only a constant that depends only on the function class. So we take a different approach. What we do is we say, we interpret the standard margin by something else. So we have another interpretation. So our interpretation is that you can view this as something like minimum delta such that w plus delta-- sorry w times x plus delta y is less than 0. So you are trying to find the minimum perturbation of your data point such that after perturbed it, you can cross the boundary. So intuitively, this is also kind of right. Because the margin is the distance to the boundary. So it's also the same thing as how much you can perturb it so that you can cross the boundary. So this is the kind of perspective we take to generalize the margin for all-- for deep models. So if you take this, there is some kind of a small-- if you do the exact math maybe something doesn't match exactly. But this is kind of like the rough intuition about it. And how do we do this exactly? So for deep models, we are still trying to take this perturbation-based perspective. But we have to perturb-- it turns out we have to perturb all the layers, not only the input. So the first attempt we tried is that you just perturb the input. You try to see what is the smallest perturbation of the input so that you can change the decision of your model, right? But that just technically doesn't work. It doesn't seem to capture the fundamental complexity. So we have to consider this perturbed model that perturbs all the layers. So what we do is we have a perturbation delta which is a consequence of perturbation delta 1 after delta R. And each delta i is a vector. And the way you perturb is the following. You also have to work out the normalization in the right way. So you first perturb the first layer. So the first layer used to be w1 transpose x, w1 times x in a deep net. And you perturb that by adding delta 1 which is a vector, times norm of x, or 2 norm of x. And then you perturb the second layer. So how do you perturb the second layer? You first apply w2 on the first layer, on the perturbed version of the first layer. 
And then you perturb it furthermore with delta 2. And how much do you perturb-- what's the scaling in front of delta 2? Delta 2 is a vector, and the scaling is the norm of the first layer. Exactly how you design this perturbation is a little bit tricky, right? We tried various versions in our research, and it turns out this one makes everything fit nicely. So you can do this for multiple layers, and eventually you have this hR: the R-th perturbed layer equals-- you first apply the matrix multiplication and nonlinearity to your previous perturbed layer, and then you perturb it by the vector delta R scaled by the norm of the previous layer. And after you set up this perturbation, you can ask: what's the smallest perturbation that changes my decision? That's the definition of the all-layer margin, which we call mF of x, y. This is the all-layer margin. It's defined to be the size of the minimum perturbation. And how do you measure the size of the perturbation? You measure it by the square root of the sum of the squared 2-norms of the perturbations at every layer. And your constraint is that after the perturbation-- I guess you call this f of x, delta; this is the output of the perturbed model-- f of x, delta, times y becomes non-positive. So an incorrect prediction. You can also do this for multi-class labels, but it's essentially the same, so I'm doing binary labels. So this is the definition of the all-layer margin. You can see that the definition becomes much more complicated, but then the proof will be easy. And you can also intuitively interpret this. So mF of x, y-- in some sense this is big if it's hard to perturb. And if it's hard to perturb, it's hard to change the decision of the network. And how could it be hard? I think there are two ways to make it hard to perturb. One is that the model f is very Lipschitz-- meaning it has a small Lipschitz constant-- so you have to perturb a lot to make a big change in your model output, right? Another possibility is that your fx is just large-- your standard margin is large. If the standard margin is large, you have to change a lot, right? Because before, you're outputting something positive and fx is very big, and now you have to move it to the other side of the boundary, so you have to perturb a lot. Or maybe I should say y times fx is large-- typically I always talk about y equals 1. So positive and large means that you are very confident about your prediction, and if you are very confident, then you have to perturb a lot so that the model can change its mind. And here, Lipschitz, technically, means Lipschitz in the intermediate variables-- the intermediate layers-- because you are measuring how robust it is to perturbations, and the perturbations are done on the intermediate layers. But Lipschitzness in the intermediate layers, it turns out, is actually close to Lipschitzness with respect to the parameters. I'll discuss that in a moment. So once you have all of this, you can have the following theorem. It says that with high probability, the 0-1 error of f, L 0-1 of f, is less than O tilde of the following: you have 1 over square root of n first, and then you have the sum of-- this is the so-called 1,1 norm of W, which I'm going to define in a moment.
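For intuition only, here is a rough numerical sketch of this definition (my own code, not from the lecture or the paper). Computing mF exactly requires minimizing over all perturbations; the random-direction plus bisection search below only gives an upper bound, the nonlinearity placement follows the usual feed-forward convention, and the final layer is kept linear, so treat it purely as an illustration of the perturbed forward pass and the constraint y * f(x, delta) <= 0.

import numpy as np

def perturbed_forward(Ws, x, deltas):
    # Each layer's output is shifted by deltas[i] times the 2-norm of the previous
    # (perturbed) layer, as in the lecture's construction. ReLU after every layer
    # except the last (my simplification of the board's indexing).
    h = x
    for i, (W, d) in enumerate(zip(Ws, deltas)):
        prev_norm = np.linalg.norm(h)
        z = W @ h
        if i < len(Ws) - 1:
            z = np.maximum(z, 0.0)
        h = z + d * prev_norm
    return float(h) if h.size == 1 else h

def all_layer_margin_upper_bound(Ws, x, y, n_trials=200, tol=1e-4):
    # Crude upper bound on m_F(x, y): scale random perturbation directions until the
    # prediction flips, bisect for the smallest flipping scale, take the min over trials.
    rng = np.random.default_rng(0)
    dims = [W.shape[0] for W in Ws]
    best = np.inf
    for _ in range(n_trials):
        dirs = [rng.normal(size=d) for d in dims]
        total = np.sqrt(sum(np.linalg.norm(v) ** 2 for v in dirs))
        dirs = [v / total for v in dirs]            # unit norm in the stacked L2 sense
        lo, hi = 0.0, 1.0
        while y * perturbed_forward(Ws, x, [hi * v for v in dirs]) > 0:
            hi *= 2.0                               # grow until the decision flips
            if hi > 1e6:
                break
        if hi > 1e6:
            continue
        while hi - lo > tol:                        # bisection for the flipping scale
            mid = 0.5 * (lo + hi)
            if y * perturbed_forward(Ws, x, [mid * v for v in dirs]) > 0:
                lo = mid
            else:
                hi = mid
        best = min(best, hi)
    return best

rng = np.random.default_rng(1)
Ws = [rng.normal(size=(8, 5)), rng.normal(size=(6, 8)), rng.normal(size=(1, 6))]
x = rng.normal(size=5)
f0 = perturbed_forward(Ws, x, [np.zeros(8), np.zeros(6), np.zeros(1)])
y = 1 if f0 > 0 else -1                             # label the point correctly
print("standard margin |f(x)|:", abs(f0))
print("all-layer margin (upper bound):", all_layer_margin_upper_bound(Ws, x, y))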
And also minimum i, mf, xi, yi, plus O tilde of R over square root, where just sum of absolute values matches w. I guess, in some sense we are in the mindset that anything polynomial in the norm doesn't really matter, so doesn't matter that much. So this is in some sense you just consider as polynomial. But of course, you can also talk about whether it's 1 to 1, 1 norm is the right choice of the norm. In some sense, this is not the best norm we can hope for. So there is still some room for improvement here. But I guess, suppose you ignore the anything polynomial norm. So what's important here is this all-layer margin here. So basically, this is saying that if the all-layer margin is always big. The utilization is good. If the all-layer margin is small, then your generalization is bad. And what's all-layer margin? All-layer margin is about the perturbation robustness so the intermediate layers. So this is saying that if you are robust to perturbations in intermediate layers, and that implies that you have good generalization. And you can also compare this with the bond that we got before. You can pretty much argue that this is strictly better than before. So basically, so compare-- is this the right place for us to discuss this? I guess, let me discuss this comparing with the previous bonds later when I'm doing all the kind of remarks about this theorem. But you can show that this is better than the previous one, mostly just because of this mf-- in some sense with mf, xy it's kind of roughly speaking, you can think of this as-- so in the worst case, I think this is small. This may be the smaller than over fx, something like this. So because this is a Lipschitz-ness and this is how much you have to change your output. You have to change your outlook from fx to 0, right? So and this is a Lipschitz-ness. So that's why you have to change-- you have to make a big movement to change it, to change fx from something like positive or negative to 0. And wait, my bad. I think I-- sorry about that. I think I'm doing a-- should be this. And I said, that's why this is better than the previous bond, because the previous bond didn't consider the different Lipschitz-ness at different data point. But here, you are really talking about if your Lipschitz-ness at the data point you have seen, then you can generalize well. But maybe let me discuss this more. I think let me have a more thorough discussion about this later. I just don't want to-- I want to have a-- show a little bit of this just so that you don't feel like this is a useless bound. But maybe bear with me and just assume this is useful. And then we can discuss all the interpretations. Any question so far? How well I'm doing on time? OK I guess I only have 30 minutes. So let's just dive into the proof. So I guess the proof requires a few steps. But a few small steps. So first of all, it suffices to bound this N infinite epsilon g by O of this. But I think I have some-- sorry. I have some typos here. Don't-- I think this should be this. Yeah. This should be this. I'll double-check later. Because it's always a polynomial, so I didn't really pay too much attention. But I think this is a typo. So I think you only have to show this. Sorry, I don't know. I don't really what this-- I will send a square note taker clarification about this. I think, I don't know exactly why that square is applied inside or outside. But either way, you have to show some bound like this. 
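Before the proof, it may help to write the theorem from above symbolically. This is just my transcription of the spoken statement, with constants and logarithmic factors suppressed; the exact form of the norm term should be checked against the lecture notes.

$$
L_{0\text{-}1}(f) \;\le\; \tilde O\!\left(\frac{1}{\sqrt{n}}\cdot\frac{\sum_{i=1}^{r}\|W_i\|_{1,1}}{\min_{j}\, m_{F}(x_j,y_j)}\right) \;+\; \tilde O\!\left(\frac{1}{\sqrt{n}}\right),
$$

where $\|W\|_{1,1}$ is the sum of the absolute values of the entries of $W$ and $m_F$ is the all-layer margin defined above.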
So let's assume this is the correct bond, and then you basically have to show something like this. Because if you have this, then you can use the lemma before. And then on the generalized margin, you get this Rademacher-- this generalized bound, right? So essentially, we just have to bound the covering number of g. And it turns out that the covering number of g, you have this very nice decomposition lemma. So let's say that fi define each layer, the hypothesis class for every layer. And we also constrain that wi1, 1 norm is less than beta i, OK? So then, your f is really fr composed with fr minus 1 up to f1. This is the notation we have used. And we recall that we had a kind of a decomposition lemma before, which was kind of complicated, right? So you have all of these dependencies and how the arrow propagates. But now the lemma is pretty simple. So let m composed with f denote it's family of the all-layer margin. And then consider then you have that, the log of the infinity covering number, where the radius is just simply the sum of-- the average, in some sense a quadratic average of the radius on each layer. And you care about the generalized margin. This is less than the sum of the log infinity norm covering number of epsilon i f to fi. So in some sense, this is saying that you only have to deal with the covering number for every layer. And then you've got a covering number for the composed function class. But you don't get the covering number for the compose function class exactly. You get the covering number of the all-layer margin of the composed function class. So and here, is n infinity epsilon i fi is defined with respect to the input. So there is an input domain to define this covering number, right? So the input domain, which is the one-- the 2 norm bond is less than 1. And so, I guess the most important thing is that this is not a-- this is n, this is the earlier margin, OK? So and the corollary is that if each of the layer you can bound the covering number by something like ci square over epsilon i square, suppose you can bound this. Let's use little c here. So then, take epsilon to be epsilon times ci over sum of square root, ci square. So i is equal to this. Then we have log epsilon-- you have the carbon number of the compose model is less than sum of ci square over epsilon squared. So which means that suppose you believe ci is a complex measure for each of the layer. Then you can get the complexity for the composed model, the all-layer margin of the composed model, the complexity will be just the sum of ci squared. Yeah. I think. I know-- I didn't have error here. I think this is indeed something like this. Yeah, this was correct. Sorry. OK. Cool. So and the ci will be something-- like ci will be something like the wi1 comma 1 norm. And that's how you go through all these things, right? So basically, we can show-- I think I will improve this. But this is not-- this is because this is for one layer, right? So we assume you can basically you can believe that you can basically invoke a theorem for a linear model to get this so indeed it's true. So for linear models you get something like this. And beta i was the bound, on the, sorry, call it beta i was the bound on the 1, 1 norm of wi. And this will imply that min theorem. OK. So I think I hope I convinced you that basically as long as you prove this decomposition lemma, then you are done. Because you, for the right-hand side you invoke something about linear model. 
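In symbols, the decomposition lemma being described is roughly the following (my reading of the board, with $\varepsilon_i$ the per-layer radius):

$$
\log N_\infty\!\Big(\sqrt{\textstyle\sum_{i=1}^{r}\varepsilon_i^2}\,,\; \{\,m_F : F=\mathcal{F}_r\circ\cdots\circ\mathcal{F}_1\,\}\Big)\;\le\;\sum_{i=1}^{r}\log N_\infty(\varepsilon_i,\mathcal{F}_i),
$$

that is, a cover for each layer's class at radius $\varepsilon_i$ composes into a cover of the all-layer margin at the quadratic-average radius.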
And then you plug in this lemma, and you get the covering number bonds for the all-layer margin. And then you get the original theorem using the lemma I have shown before for the generalized margin. OK? So any questions so far? OK. So now let's prove the lemma, the decomposition lemma. So we always-- so I guess I only stated lemma for the concrete fi, right? Which is the z map to sigma yiz. You can also have-- you can state the lemma in a more general form. And also you can prove it in a more general form. But I'm only going to prove it for this particular family of fi. And so, the first step is that-- so we'll show-- so there are two steps. Step one, we show that mf, mx is 1 Lipschitz in f. And what 1 Lipschitz in f means is the following. So for every f and f prime, mf xy minus f, f prime, xy it's less than in some sense the Lipschitz-ness of every layer. So you have all-layers. So this is sum from 1 to R. And you take the max of fix minus fi prime x to node. X [INAUDIBLE]. So here, f is equals to fr, composed with fr minus 1 up to f1. And f2 prime is fr prime composed with fr minus 1, f1 prime. So basically, the Lipschitz-ness of this all-layer margin is something that doesn't have actual scale in some sense because you are looking at this ball with no scale. And also, it only depends-- basically on sum of the Lipschitz-ness, or sum of the differences between f1 and fy prime. So there's no multiplier here. You are not multiplying on the Lipschitz-ness of f. So it's really literally a sum. It's very clean. And let's prove this step one in a moment. But suppose you have step one. Then what you can get is that you can use step two. You can use the step one just to get the theorem relatively easily. What you do is you say, now you construct a cover. Construct a cover. And how do you do that? The cover is-- the construction is also kind of trivial. So what you do is you let U1 up to UR are be epsilon 1, epsilon R covered of f1 up to fr respectively. And recall that if you still remember what we did last time, the covering was very complicated. So you iteratively construct covers and make it very complicated. But now we just individually construct covers for every fi. And then, and we say ui such that ui is equal to this the infinite norm covering number, epsilon f, fi. And then, so this means that by definition, so we got that for every fi and capital Fi, there exists some function ui in capital Ui, such that fi minus ui it's smal, right? So and I guess we are using this matrix, this infinity norm matrix, f ix minus uix 2 norm is more than epsilon i. This is we know by definition. And now, we're going to turn this into a cover for the compose the family. So and the cover is just, so we just take the U to be the family of just the composition of all of this, the composition of UR composed with UR minus 1, composed with U1, which is-- and this will be our cover. And we'll show this-- we'll be showing will be our cover for m composed with f. And why that's the case? This is because suppose we are given f is equal to fr composed up to F1 in capital F. Then let u1 up to ur up to u1 be the nearest neighbor of fr up to f1. So then, as you can see that using the Lipschitz-ness, this minus n, let's say u is equals to ur composed with u ms1 up to u1. so you get this. So suppose you do this. This is less than using our step one. Sum of the difference between fi and ui, the worst case difference between them over the norm bar 1. And because f and u are close, that's how we constructed the cover. 
So we get square root sum of epsilon i square from 1 to r. OK. So basically, once you have such a nice Lipschitz-ness property, then you can just cover everything individually. And you don't have to think too much about the composition. The composition is trivial, because this deal with this, is dealt with by this. So now, why the Lipschitz-ness holds? So let's prove step one. So and so we only proved the upper bounds. So we only prove this by symmetry. But because f and f prime have the same row, so you can-- you only have to prove one side. And you can flip them to the other side. And it's sometimes the way to prove it is just really-- each of them is defined by some optimization program, right? And they are the solutions. The optimal value of the optimization programs, right? Basically, you are trying to show that two optimization programs are doing similar stuff. And how do you do that? Typically, you construct optimal solution of one optimization program into a feasible solution to another optimization program. That's how you relate to optimization programs. So let delta 1 star up to delta R star be the optimal choice of delta in defining mf, x of f. So and we want to turn-- so our goal is to turn this into delta 1 hat, that R gamma of feasible solution of mf prime xy. So if it's a feasible solution, you get mf prime xy is less than sum of delta i hat square. And then you can relate this to some of the other i using your construction, all right? OK. So that's the rough idea. And how do we construct this the other one hat up to get an r hat? So we wanted it to be feasible so that we can have this in an inequality. This part is going to be the feasibility part. So basically, the way we do it is that we want to construct-- so we want to make f prime with delta 1 hat up to delta r hat doing the same thing as f and with delta 1 star up to delta r star. So basically you want the perturbation on f prime to do the same thing as perturbation-- the perturbation of delta i star on f. So that, then you know that this will be a feasible solution. Because what's the feasibility? The feasibility is about whether you perturbed the prediction to the other side. So if this one can perturb the prediction, the change the prediction to the other side, then the other one can also change the prediction because they're doing the same thing. That's the principal and how do you do that is just pretty much just algebra. So f has parameter w1 of wr. And f prime has parameter w1 prime up to wr prime. And let's consider the computation. So I guess the computation is this. So h1 1 is equal to w1x plus delta 1 star x. h2 is equal to sigma w2h1 plus delta 2 star h1. So on and so forth. hr is equal to sigma wr hr minus 1 plus delta R star hr minus 1. So this is the computation you did for mf, xy, right? So and I want to imitate this computation by perturbing f prime in some way. And how do we imitate that? So the imitation is kind of trivial. So imitate this. So what you do is you say, you take-- so for f prime, what happens? h1 is equal to w1 prime x plus something. Plus and how do you-- so and you suppose you predict the delta 1 star x, they wouldn't have. Because this w1 prime is different from w1, right? So you have to predict something in addition to make this computation the same as before. And how do you do that is you perturb in addition w1 minus w1 prime x. And then, these two are literally exactly the same. So basically, you declare this as a new perturbation. 
You declare this to be delta 1 hat times the 2-norm of x. So basically, you compensate for the difference between w1 and w1 prime by adding this additional perturbation. That means delta 1 hat is equal to delta 1 star plus (w1 minus w1 prime) x over the 2-norm of x. And you do the same thing basically for every layer. So now h2: you want h2 to be equal to the same h2 above. But at this step you're only perturbing based on w2 prime, not based on w2. So what you do is you first perturb as before, with delta 2 star times the norm of h1, and then you compensate for the difference by perturbing even more, something like this. And you declare this entire thing to be delta 2 hat times the 2-norm of h1. That means delta 2 hat will be delta 2 star plus (sigma of w2 h1 minus sigma of w2 prime h1), with the denominator being the 2-norm of h1. You do the same thing for every layer. In general, you take delta i hat to be delta i star plus (sigma of wi h i minus 1, minus sigma of wi prime h i minus 1), over the 2-norm of h i minus 1. And now we've got our goal: delta 1 hat up to delta R hat on f prime is doing the same thing as delta 1 star up to delta R star on f. I'm using this shorthand just to save some time in writing. So I'm saying that if you perturb f prime with delta 1 hat up to delta R hat, it has exactly the same functionality, the same prediction, as the other one. That means this is a feasible solution-- a feasible solution for mf prime. So that's why mf prime of x, y is less than the square root of the sum of the squared 2-norms of the delta i hats. And now, I'm going to bound this by the square root of the sum of the squared 2-norms of the delta i stars, plus the square root of the sum of the squared differences between them. This is using the so-called Minkowski-- I always think of it as Cauchy-Schwarz, but I think the technical name is the Minkowski inequality. What the Minkowski inequality says is the following: if you look at the square root of the sum of ai plus bi 2-norm squared, this is less than the square root of the sum of ai 2-norm squared plus the square root of the sum of bi 2-norm squared. That's the Minkowski inequality. And actually, you can prove this inequality with Cauchy-Schwarz by just taking the square on both sides and canceling a bunch of terms-- it becomes Cauchy-Schwarz. All right. We apply this where ai is delta i star and bi is this thing, the difference between them. I think 5% is enough for me. Yeah. Cool. One minute per percentage. OK. So now, let's see. The first term is mf of x, y. And the other one, you can bound by the square root of the sum over i from 1 to r of the max over x with 2-norm at most 1 of the squared norm of sigma of wi x minus sigma of wi prime x. This is just because the whole thing is homogeneous, so dividing by the 2-norm is the same as restricting the 2-norm to be 1, right? And then this is equal to mf of x, y plus the square root of the sum over the r layers of the max over x with 2-norm less than 1 of fi x minus fi prime x squared. And this is what we wanted for step one. Any questions? [INAUDIBLE] w and w prime? Yeah. So w and w prime-- w is the parameter for f, and w prime is the parameter for f prime. And they don't have-- at least in this context-- any relationship. I'm just trying to show step one, so I'm taking two arbitrary f and f prime.
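A quick numerical sanity check of the Minkowski inequality used in this step (illustrative only; any vectors work here).

import numpy as np

rng = np.random.default_rng(0)
a = [rng.normal(size=5) for _ in range(8)]   # plays the role of the delta_i^*
b = [rng.normal(size=5) for _ in range(8)]   # plays the role of the per-layer compensation terms

lhs = np.sqrt(sum(np.linalg.norm(ai + bi) ** 2 for ai, bi in zip(a, b)))
rhs = (np.sqrt(sum(np.linalg.norm(ai) ** 2 for ai in a))
       + np.sqrt(sum(np.linalg.norm(bi) ** 2 for bi in b)))
print(lhs <= rhs + 1e-12)   # Minkowski: always True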
And I want to say that the all-layer margin difference, difference in all-layer margin is bonded by the difference in each of the layers. So it doesn't matter what they are. [INAUDIBLE] Yes. So f prime involves all the wi primes, and f involves all the wi's. Cool. Any other questions? It feels like this proposition that we just proved [INAUDIBLE] Can you say again? [INAUDIBLE] Yeah. So this-- if I'm guessing the question, I think all of this depends on the definition of mf, yes, of course. And it's actually, when we do the research, I think it's that we are trying to meet in the middle. You have to change the definition in a way so that the analysis is OK. But in some sense, this is-- in some sense because the proof is simple and clean, so somehow I feel good about the definition to some extent. Yeah. So I guess I'll use the next few minutes and the 4% of battery to talk about some of the comparisons, interpretations, and our next possible extensions. So I think, I guess interpretation, I kind of in some sense I've discussed this a little bit. So the most important thing is the all-layer margin part, at least that's our side. And we don't even care about the norm. So then you can compare with Bartlett at all '17 is the paper that we discussed last time. So you can formally do this, or you can formally say that the perturbed model, if you look at the difference between the perturbed model and the original model, the difference is something like if you do some kind of telescoping thing. This is supposed to be not super hard. So you can basically imagine that for every layer you perturb something, so you pay something like that. And then you also have to pay the blowing up factor because of the other things. So you can prove this. So if you ignoring some minor details, which allows me to have a cleaner exposition. So for example, you ignore the dependency on r, then you can basically say that if-- maybe let's also suppose y is bigger than 0, just for simplicity. Say y is 1. Then basically, if you want fx to be bigger than 0, and fx plus delta to be less than 0, that's kind of what the situation would be. You perturb the model to predict the wrong thing. And then, this means that your delta needs to be basically something larger than product of the spectral norm. Because at least, because that's how you can make enough difference. So if your delta is too small, then the right-hand side will be too small so that you don't really make a big enough difference. So times fx. So that's just saying that basically in some sense this is saying that mf xy over y times fx, the new margin versus the old margin, the ratio is something like this. I'm writing this in somewhat informal way. So I'm ignoring constant or even ignoring some small minor details. I think this product probably shouldn't probably range from 1 to r, it should miss some terms in the middle. But those are not super important. And this is basically saying that if you look at the inverse margin, this is kind of like f, fx times the product of the spectral norm. So this is indeed a better bound than before. Because our new bound depends on this and old bond depends on this, on the right-hand side times the spectral norm. So this is a better bound. At least in this aspect. And another thing is that later-- but why this is, how much better it is right, compared to the previous one? That's a question mark. So is it true that your all-layer marginal becomes polynomial instead of exponential? 
There are some indicators that this is a much better bound empirically, or conceptually. Empirically, we did verify it seems to be much better. The number, it becomes smaller. Just because empirically your Lipschitz is better than the worst case bond. And another reason why you can somewhat hope that the empirical-- this is better is because later we will show-- I think I've said this once before. But let me write it down again. So SGD prefers Lipschitz solutions, and in some sense-- on the data points, and Lipschitz on the data points. In some sense this is saying that your algorithm in some sense is minimizing the Lipschitz-ness on the data point. So that's why your Lipschitz-ness on the data point is probably better than the worst case Lipschitz-ness over the entire domain. And that's probably why the gap between these two bonds are significant. So in some sense, this is saying you are implicitly minimizing-- maximizing the all-layer margin. So but of course, this is approximately. Because all of this what SGD prefers-- in terms of the form, we will see it's similar. But they won't be exactly matching the same form. So we haven't got a kind of fully coherent theory yet. But conceptually, it all seems to roughly match. And another thing is that there is something which actually people actually use in practice, which is called SAM. This is called sharpness aware regularization. This is something that can let you two get better performance empirically on many data sets. And what they are doing is that they are doing perturbation. So we are doing a perturbation. But they are also doing a perturbation, but they are perturbing the parameter theta. So they are trying to make this model more Lipschitz in the parameter theta instead of more Lipschitz in the hidden variable, the intermediate variable h i's. But actually, these two are very related. So here is a fact. If you look at the loss, the gradient of the loss respect to the parameter wi, this is equals to the gradient of loss respect to-- I'm always-- this is on a single example, always on a single example. This is the equals to the gradient of the loss respects to the hidden variables in the layer above and times the hidden variable transposed. So this is just by derivation. Actually, this is used to called-- wait, I'm blanking on the name. In neuroscience, there's actually a term for this thing. But this is just literally, just you compute a gradient of wi, how you do it, you use change 1, you get this. So here, this is the gradient respect r1, this is the size of the environment. So if you look-- so that's why if you look at the norm of the gradient in respect to the perimeter, then it's quite related to the norm of the gradient with respect to hidden variable. This is a vector, this is a vector, and this is a matrix. So that's why this is true. So Lipschitz-ness in parameter is similar to Lipschitz-ness in hidden variable. It's somewhat related. So the last thing-- I guess I'm running out of time. Sorry. So this is a more general version, where you don't have to care about the minimum margin over the entire data set. You can prove something like test error is less than 1 over square root 1 times-- instead of the average, this is average margin instead of the worst case margin-- the minimum margin over the data set. So you look at average inverse margin of this form square. And then times the sum of complexes of each layer. So and plus low order term. Oh, really? Oh, of course. It can really-- OK, 5% is not enough. 
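A tiny check of the outer-product structure of the weight gradient mentioned here (my own toy example, not from the lecture). I state it with respect to the pre-activation z_1 = W_1 x, which is the cleanest form of the chain-rule identity, and verify it against finite differences; the Frobenius norm of the gradient then factors into the product of the two vector norms, which is the link between parameter Lipschitzness and hidden-variable Lipschitzness.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(1, 3))

def loss(W1):
    h1 = np.maximum(W1 @ x, 0.0)         # hidden layer
    return float((W2 @ h1) ** 2 / 2)     # simple squared loss on the scalar output

# Backprop form: dL/dW1 = (dL/dz1) x^T, where z1 = W1 x is the pre-activation.
h1 = np.maximum(W1 @ x, 0.0)
out = float(W2 @ h1)
dL_dh1 = out * W2.ravel()
dL_dz1 = dL_dh1 * (W1 @ x > 0)           # ReLU mask
grad_backprop = np.outer(dL_dz1, x)

# Finite-difference check of the same gradient.
eps = 1e-6
grad_fd = np.zeros_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        E = np.zeros_like(W1); E[i, j] = eps
        grad_fd[i, j] = (loss(W1 + E) - loss(W1 - E)) / (2 * eps)

print(np.allclose(grad_backprop, grad_fd, atol=1e-5))
# Outer-product structure: the Frobenius norm factorizes into the two vector norms.
print(np.isclose(np.linalg.norm(grad_backprop),
                 np.linalg.norm(dL_dz1) * np.linalg.norm(x)))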
But this is literally the last thing I want to say. Yeah. But maybe let me-- are there any questions? So basically, the last thing I want to say is that instead of having the minimum, like all-layer margin there, you can have the average all-layer margin. Cool. OK. Any questions? OK. I guess then see you next Monday, oh, Wednesday, in two days. Wait, today's Monday. Right. Yeah, OK. See you. Bye. Thanks.
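To make the gradient identity above concrete, here is a minimal numpy sketch (the two-layer ReLU network, the squared loss, and all the shapes and names are illustrative assumptions, not something from the lecture) checking that the gradient of the loss with respect to a weight matrix is exactly the outer product of the gradient with respect to the hidden variable in the layer above and the hidden variable below:

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny two-layer network on a single example (all shapes are illustrative).
    x = rng.normal(size=3)          # input
    y = 1.0                         # target
    W1 = rng.normal(size=(4, 3))    # first-layer weights
    W2 = rng.normal(size=(1, 4))    # second-layer weights

    def forward(W1, W2, x):
        h1 = np.maximum(W1 @ x, 0.0)   # hidden variable after ReLU
        out = W2 @ h1                  # scalar output, shape (1,)
        return h1, out

    def loss(W1, W2, x, y):
        _, out = forward(W1, W2, x)
        return 0.5 * (out[0] - y) ** 2

    # Analytic gradient w.r.t. W2: (dL/d out) times h1 transposed -- an outer product.
    h1, out = forward(W1, W2, x)
    dL_dout = out - y                          # gradient w.r.t. the hidden variable above W2
    grad_W2_analytic = np.outer(dL_dout, h1)   # shape (1, 4)

    # Finite-difference gradient w.r.t. W2 for comparison.
    eps = 1e-6
    grad_W2_numeric = np.zeros_like(W2)
    for i in range(W2.shape[0]):
        for j in range(W2.shape[1]):
            Wp = W2.copy(); Wp[i, j] += eps
            Wm = W2.copy(); Wm[i, j] -= eps
            grad_W2_numeric[i, j] = (loss(W1, Wp, x, y) - loss(W1, Wm, x, y)) / (2 * eps)

    print(np.max(np.abs(grad_W2_analytic - grad_W2_numeric)))  # should be ~1e-8 or smaller

Since the parameter gradient is this outer product, its norm is controlled by the norm of the hidden-variable gradient times the norm of the activation, which is the sense in which Lipschitzness in the parameters and Lipschitzness in the hidden variables are related.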
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_6_Margin_theory_and_Rademacher_complexity_for_linear_models.txt
Well, hello, everyone. So I guess-- in this lecture, what we're going to do is bound Rademacher complexity by some concrete formula for concrete models. And by concrete models, I really just mean linear models for this lecture. In a few lectures, we're going to talk about neural networks. And just as a review, to connect to past lectures: we have proved that the excess risk, or the generalization error, is upper bounded by Rademacher complexity. That's what we have done in the last few lectures. And in the next few lectures, we're going to talk about how to upper bound the Rademacher complexity itself for concrete models, like linear models or neural networks. And also, we are going to deal with the classification loss. There is something to do for the classification loss because it's a binary loss-- it's not continuous, so we have to deal with it using some special technique. So that's the overview for this lecture. So let's first set up the basic things-- it's classification. So we're going to deal with binary classification. As you would probably expect, you have y, which is in {minus 1, 1}, and some classifier h, which maps the input space X to the real numbers R. So here we think of h as the function that maps the input to a real number-- the logit, for example. And when you make the prediction, you take the sign of the output of the classifier. So you take the sign of h(x), and this gives you the classification: if h outputs a positive number, you output 1, and otherwise you output negative 1. And H is the family of h's. That's our notation. And the loss function on each example (x, y) equals the indicator that y, the true label, is not equal to the sign of h(x). That's our setup. Right. So I guess the first thing is that I'm going to very briefly mention the finite hypothesis class case. That's just a very quick note-- we have already done finite hypothesis classes, right? So it's probably useful to know that you can recover the same bound for a finite hypothesis class using this machinery of Rademacher complexity. That's probably a reasonable requirement if you think Rademacher complexity is a powerful tool. So there is indeed such a theorem, which I'm not going to prove today, because the way to prove it is more related to something more advanced later. So I'm just going to state the theorem. It says that if F satisfies that for every f in F, the average of f(z_i) squared-- 1 over n times the sum of f(z_i) squared-- is at most M squared. This condition is a little non-intuitive, but it is implied by-- it's a weaker version of-- just assuming f(z) is bounded by M: if f(z) is bounded by M, then of course the average of the squares is bounded by M squared. So I'm stating the weaker assumption, the weakest version of the theorem, for generality. And if you have this, then the empirical Rademacher complexity on the examples z_1 up to z_n-- so S is equal to z_1 up to z_n-- is bounded by something involving the size of this hypothesis class: it's bounded by the square root of the logarithm of the size of the hypothesis class, times something like M squared, the range of this function class F, divided by n. So this is essentially the square root of log |F| over n.
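As a quick sanity check of this finite-class statement, here is a small Monte Carlo sketch (the function class below is just a random table of values, and the constant sqrt(2 M^2 log|F| / n) is the usual Massart-style form, which may differ slightly from the exact constant in the lecture notes):

    import numpy as np

    rng = np.random.default_rng(0)
    n, num_fns = 50, 8

    # Represent each f in F by its value vector (f(z_1), ..., f(z_n)) in [-1, 1].
    F = rng.uniform(-1.0, 1.0, size=(num_fns, n))
    M2 = np.max(np.mean(F ** 2, axis=1))           # (1/n) sum_i f(z_i)^2 <= M^2 for every f

    # Monte Carlo estimate of the empirical Rademacher complexity:
    #   R_S(F) = E_sigma [ max_f (1/n) sum_i sigma_i f(z_i) ].
    num_draws = 20000
    sigma = rng.choice([-1.0, 1.0], size=(num_draws, n))
    rad_hat = np.mean(np.max(sigma @ F.T / n, axis=1))

    bound = np.sqrt(2 * M2 * np.log(num_fns) / n)  # finite-class (Massart-style) bound
    print(rad_hat, bound)                          # the estimate should sit below the bound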
And if you apply this to the finite hypothesis class that we have talked about like, for example, if you apply this to a loss function, a binary loss function, you get what we had before. It's kind of-- it's almost exactly the same bounds eventually. And we are not going to prove this. But we are going to prove it in the future lectures. Today, we are not going to prove it because the techniques is more related to something what we are going to use later. But this is not-- this is just saying that we can achieve what we had, but it's more interesting when you apply this Rademacher complexity for continuous function class, right? And we have also talked about what's the limitation of having finite hypothesis class. For example, the limitation is that even if you do this with some kind of discretization with continuous models you're going to lose a parameter p in your bound, right? So if you do this plus some discretization, then likely what you're going to get is something like p over n, where p is the dimensionality of the model, it's number of parameters in the model if you do some discretization. And they wouldn't be super impressive given that we already have done those brute-force discretizations that we had done before. Right? So today what we're going to do is that we are going to have a different way to upper bound Rademacher complexity not using this kind of tools. And the way that we do it is actually more algebraic and analytical as we'll see. So before doing that, we are going to-- so first deal with the loss function, right? So you can look at the loss, right? This is L01 x, y, h. All right, it's this. So the thing is-- the tricky thing is that there is a sign here, right? So if you don't have the sign and h of x is outputting something like binary, where we call that we have done this in one of the previous lecture, so in that case, we assume h(x) is-- so in previous lecture, we have shown that if h(x) is outputting something like binary, right, it outputs plus 1 or minus 1, then you can show that the Rademacher complexity of f, the Rademacher complexity of the loss functions, the losses, is basically on the same order as the Rademacher complexity of the hypothesis class, h right? That's what we did last time. But now, we are doing a slightly different definition of the h, right? So the h is the function that outputs the real number, right? It's the one that before the sign function. So then this kind of like-- this kind of like reduction doesn't work anymore, right? So, of course, you can still apply the same thing to the sign of h, but then you're going to get a Rademacher complexity of the sign of h, which again is also-- it's kind of like you didn't solve the problem. You had the problem in a different way. So we're going to express a deal with this issue first. And that's called-- sometimes I think people call it margin theory. So we're going to introduce a bunch of tools to deal with this sign issue. In some sense, you have to convert the real number to the binary number in some effective way. And then we're going to bound the Rademacher complexity of linear models using analytical tools. So that's the plan. OK. So I guess the kind of the intuition is that the scales, in some sense, like matter, when you do classification implicitly. But even though at the end of the day, your scale doesn't matter. So kind of the-- the kind of the motivating example is the following. For example, suppose you have a classification class, right? 
I'm using red for positive data, and I use circles for the negative data. And if you think about different classifiers-- for example, this classifier and this one-- so these two classifiers, just intuitively, I'm not claiming anything rigorously: the pink one probably should generalize worse than the blue one, because you only see these eight examples. Maybe your new test example is here; then the pink one would make a mistake on this test example, but the blue one seems less likely to make mistakes on test examples. So intuitively, the blue one seems to have somewhat better generalization, just because it separates the two clusters more clearly and more confidently. So in some sense, you can think of this h(x) itself as the confidence, because this is a real number: the bigger it is, the more confident you are about this example. And this does matter to some extent. And this is what we are going to ask: how do you reason about this and make it matter, in some sense, in your analysis? So here is the more formal approach. Let's first make an assumption that holds throughout this lecture: we assume that we classify all the training examples correctly. So assume the training error is 0-- perfect classification for the training data. And you can see that this is, in some sense, reasonable, especially given the success of large networks: typically, you can make the training error very small. And this was actually a reasonable assumption even before deep learning came into play. Before deep learning, what people did was add more and more features, so the dimensionality of the features becomes higher and higher; at some point, the dimensionality of the features becomes bigger than the number of examples, and then you can always fit the data with zero error. So formally, what that means is that for every training example, y_i is equal to the sign of h(x_i). That's what I mean by training error zero. And under this training-error-zero assumption, you can define the so-called margin. This margin, technically, is only defined-- at least if you don't do any modification of it, you probably should only define it for a zero-error classifier. And this is the so-called unnormalized margin. So the margin on an example x is really just y times h_theta(x). You multiply h_theta(x) with y just because you want to make it a positive number: if y is positive-- the positive class-- you want h(x) to be big, and if y is negative, you want h(x) to be small. So in some sense, the margin is a very informal version of confidence. It's not a probability, of course; it's between 0 and infinity. So this is always non-negative if you classify the training data exactly: it is non-negative whenever you are correct on a data point, when y equals the sign of h_theta(x). OK? So this is the definition of the margin on a single example. And then you can define the margin of the classifier on the data set.
So this is defined to be the minimum margin over all examples: you take the minimum over i of y_i times h_theta(x_i). Of course, this margin is a function of the classifier; if you change the classifier, you have different margins. In some sense, the blue one, as I drew there, has a bigger margin than the pink one, because the pink one has some example with a very small margin, whereas the blue one has a big margin for all the examples, so even when you take the minimum over all the examples, you still have a relatively big margin. But here I'm defining the unnormalized margin. The unnormalized margin is not exactly the distance from the example to the hyperplane; you have to normalize it so that it becomes the distance to the hyperplane. But I think, in this course, we don't actually need to define the normalized margin per se. For a linear model-- I guess you have probably learned this from CS229-- if you normalize this margin by the norm of theta, then it's the distance from the example to the separating hyperplane, and the minimum margin would be the minimum distance of all the examples to the hyperplane. And our goal would be something like this: we bound the generalization error by the Rademacher complexity-- that's what we did in the past-- and then bound the Rademacher complexity by some function of the parameter norm, the norm of theta, and some function of the margin. That's what we'll eventually get out of this lecture. And why we need to define these margins-- the reason is that this partly comes from a technique to deal with the loss function. We're going to introduce a surrogate loss function that takes the margin into account. Intuitively, the reason we want to do this is that we somehow believe the margin matters for generalization, so you probably want a bound that depends on the margin, and you also want a loss function that depends on the margin. So far, if you look at the 0-1 loss, it doesn't depend on the margin: how large h(x) is doesn't change your 0-1 loss on an example-- as long as the sign doesn't change, you don't really care. So we want something that depends on the margin. So the loss function is called the ramp loss; sometimes it's also just called the margin loss. This loss function has a parameter, gamma, and gamma is kind of like the target margin, or a reference margin-- you can think of it like that. So this is a loss function that takes in a single number t, and it outputs-- maybe let me draw it first. This function looks like this. Here this is gamma, and here this is 1. So when t is larger than gamma, the loss is 0; when t is less than 0, the loss is 1-- that corresponds to the flat area to the left of the origin. And when t is between 0 and gamma, you linearly interpolate: the loss is 1 minus t over gamma. So this is the linear region.
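To make the margin and the ramp loss concrete, here is a small numpy sketch (the linear model, the synthetic data, and the choice of gamma are all illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data and a linear classifier h_theta(x) = theta^T x.
    n, d = 20, 3
    theta = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = np.sign(X @ theta)            # labels in {-1, +1}, so training error is 0 by construction

    margins = y * (X @ theta)         # unnormalized margins y_i * h_theta(x_i), all positive here
    gamma_min = margins.min()         # minimum margin over the data set

    def ramp_loss(t, gamma):
        """Ramp / margin loss: 1 for t <= 0, 0 for t >= gamma, linear in between."""
        return np.clip(1.0 - t / gamma, 0.0, 1.0)

    print(gamma_min)
    print(ramp_loss(margins, gamma_min).max())   # 0: every training point has margin >= gamma_min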
And why are you interested in this? The reason is that this is, in some sense, an extension of the 0-1 loss. So maybe let me first define notation-- with a bit of abuse of notation, you can also write l_gamma(x, y; h), the margin loss applied to the classifier h, defined to be l_gamma(y times h(x)). This is the definition. But these two l_gamma's have different meanings on the left-hand side and the right-hand side, as you can see; the one on the right is the one we just defined. So, first of all, before, when we talked about loss functions, a loss took in two arguments, y and y hat. But for classification, the only thing that matters is the product of them-- that's why you only care about y times h(x). So in this notation, the ideal loss, the 0-1 loss function of x and y, equals the indicator that y is not equal to the sign of h(x), which is the same as the indicator that y times h(x) is less than 0-- wait, go back, I wrote it the other way; it should be less than 0, sorry, I have a typo here, let me mark it so I can fix it for the future. So this is a different way to write the 0-1 loss function. OK. So this is the binary classification loss, and the other one is the so-called ramp loss. And you can see the difference: the indicator function would just look like this-- this is the indicator that t is less than 0-- and what we do is we make this indicator function continuous. That's basically what we are doing. And you can note that, from this, l_gamma(y h(x)) is always bigger than the indicator that y h(x) is less than 0, just because the function above is pointwise bigger than the function below. Which means that if you look at the 0-1 loss on an example, it's always less than the ramp loss on that example. Which means that if you look at the test loss-- if you take the expectation over (x, y) drawn from P, which is the fundamental thing you really care about, the test error-- you can at least upper bound it by the population loss under the ramp loss. So by doing this, you make the loss bigger, and then we're going to bound this. So basically, eventually, what we're going to do is bound the test loss under the ramp loss, which is an upper bound on the binary loss. That is our goal: upper bound this. OK. And how do we upper bound this? I think it's probably-- at least when I read this for the first time from a book, it was unclear why you want to do this continuous relaxation; it will come in a moment. One of the reasons is that you want to make the loss Lipschitz so that you can somehow get rid of the loss. But before doing that, let's first lay out the high-level plan and then look at the low-level details of how to use the loss. So the high-level plan is that you let L hat gamma be the empirical loss corresponding to the ramp loss-- this is a function of h-- and you can also define the population loss, L gamma(h), which is the expectation of l_gamma(x, y; h).
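And a quick numerical check of the pointwise inequality just used-- the 0-1 loss, written as a function of the margin t = y h(x), is dominated by the ramp loss for every t (a sketch; the grid of t values and the choice of gamma are arbitrary illustrative choices):

    import numpy as np

    def ramp_loss(t, gamma):
        return np.clip(1.0 - t / gamma, 0.0, 1.0)

    t = np.linspace(-3.0, 3.0, 1001)            # margins y * h(x), a grid of illustrative values
    zero_one = (t < 0).astype(float)            # the 0-1 loss written in terms of the margin
    gamma = 0.5                                 # an arbitrary reference margin
    print(np.all(zero_one <= ramp_loss(t, gamma)))   # True: the ramp loss dominates pointwise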
And then if you use the Rademacher complexity machinery we have developed, you get that the population loss minus the empirical loss is bounded by 2 times the empirical Rademacher complexity plus 3 times the square root of log(2/delta) over n. This is what we did in the previous lecture: the generalization error can be bounded by the empirical Rademacher complexity, where F is the family of losses defined by the ramp loss. So this is saying that eventually this will be the goal: if you have a bound on this Rademacher complexity-- we are going to do this more carefully after we have the Rademacher complexity, but roughly speaking-- once you have the Rademacher complexity, you have an upper bound on the population ramp loss, and the population ramp loss upper bounds the population binary loss. OK. [INAUDIBLE] Where is the sup? [INAUDIBLE] Sure. So yes. Yeah, but without the sup, it's also true, I guess. So I guess for the average, this is true-- with high probability, for the average, this is true. Technically. OK. So now let's talk about the Rademacher complexity, and this reveals why we care about the ramp loss. The Rademacher complexity of F relates to the Rademacher complexity of H in a pretty nice way, and here is the lemma that relates them. It's called Talagrand's lemma. It says the following: suppose you have a function phi-- a one-dimensional function-- and it's a kappa-Lipschitz function. I guess we have defined Lipschitz functions: this really means that for any two numbers, the absolute value of phi(x) minus phi(y) is at most kappa times the absolute value of x minus y. And here it is just absolute values, because everything is one-dimensional, OK? And once you have this, you can look at the composition of this one-dimensional function with any hypothesis class. So this is defined to be: you compose phi with h, so basically you map z to phi(h(z)). Phi will be the loss function, basically, but here it is abstract-- you can compose any function phi with the hypothesis class to get phi composed with H. And-- why am I changing the color? Anyway, what you can get is that the Rademacher complexity of the composed hypothesis class is bounded by kappa times R_S(H). So basically, it's saying that if you compose anything on top of the existing hypothesis class, and what you compose with-- the phi function-- is Lipschitz, then you only blow up the Rademacher complexity by a factor of kappa, the Lipschitz constant. And with this, you can probably see why we care about relaxing the binary loss: the indicator function is not Lipschitz, but if you use the ramp function, it will be Lipschitz. And that's what we do next. By the way, this lemma doesn't have a very simple proof; we're not going to prove it in the lecture. It does require something that is, in my own opinion, pretty novel and deep. I proved it once myself, but all the existing proofs I know are somewhat mysterious to me. But the high-level intuition is probably reasonable: you have a hypothesis class, and you compose it with something that doesn't really introduce much additional fluctuation.
So that's why you don't make the hypothesis class much more complicated. But if you look at exactly what this formula is saying-- the left-hand side is the expectation over sigma of the sup over h in H of 1 over n times the sum of sigma_i phi(h(z_i)), and we want to show that this is bounded by kappa times the expectation of the sup of 1 over n times the sum of sigma_i h(z_i). That's the goal; that's what this thing is saying. And you can imagine why this is difficult to prove: you cannot just swap the order of the expectation with the sup-- if you do that, you make the inequality loose-- and somehow there's a phi in the middle of this expression that's very, very hard to pull out. Anyway, this is just my personal comment about this lemma; it seems pretty deep to me. OK. Anyway, we're going to use it, and I think it's probably somewhat obvious how: we are going to take the phi function to be the ramp loss, l_gamma(t). The ramp loss is a Lipschitz function, and its Lipschitz constant depends on gamma-- I guess let's go back to the picture. On the flat parts the slope is 0, so the Lipschitz constant depends on the slope of the linear region in the middle, and the slope there is 1 over gamma, because the height is 1 and the width is gamma. So the Lipschitz constant of the ramp loss-- of phi-- is 1 over gamma. Right. And if you take H prime to be the class that maps (x, y) to y times h(x), where h is in H-- H prime is still not exactly the same as H, because there's a y multiplied with h-- and then you take F to be phi composed with H prime, then by Talagrand's lemma, what you have is that the Rademacher complexity of F, which is what we care about, is less than 1 over gamma-- the Lipschitz constant-- times R_S(H prime). OK. So we got rid of the effect of the loss function by using Talagrand's lemma. And then you can relate H prime to H much more easily, because the only difference is the y factor, and the Rademacher complexity is not very sensitive to that. This is just because R_S(H prime)-- I guess we have done this kind of argument before, at least inside some other proof-- is the expectation over sigma of the sup of 1 over n times the sum of sigma_i y_i h(x_i); that's using the definition. And now you look at this: sigma_i y_i has the same distribution as sigma_i, because y_i is plus or minus 1, so the flip doesn't change anything. That's why you can basically get rid of the y_i, and what remains is the Rademacher complexity of H. OK. So with all of this, what we got-- basically combining these two things-- is that R_S(F) is less than 1 over gamma times R_S(H). And the interesting thing is that, first of all, the loss is gone; and second, the y is also gone-- you don't have any y's on the right-hand side anymore. At the end, the only thing that matters is h(x). OK. And with this, we can put all of these things together to get a bound on the binary test error. So recall that we assume perfect classification: y_i times h(x_i) is bigger than 0 for every i-- we assume a perfect fit. Perfect. Good. And then you can take gamma_min to be-- right.
So this gamma_min is the empirical minimum margin for this data set. So now-- let me see-- I actually have a typo here, sorry. Let's just call this gamma; we use this gamma to define the ramp loss. So then if you look at L hat gamma of h, I think it's going to be 0, because you have l_gamma(y_i h(x_i)), and y_i h(x_i) is always at least gamma, and recall that for the ramp loss, if you are bigger than gamma, the loss is 0. So basically, every training example has zero loss under the ramp loss. So on the training examples, the binary loss and the ramp loss are not different, because they are both 0. And therefore, you have the following sequence of inequalities. You first bound the 0-1 loss of h by the ramp loss-- this is because the ramp loss is always larger than the 0-1 loss. And then you say that this is smaller than L hat gamma of h plus the Rademacher complexity plus something like-- let's do it a little bit slowly-- the Rademacher complexity of F, plus square root of log(2/delta) over n. And then you use the inequality between F and H, so you get L hat gamma of h plus R_S(H) over gamma, plus square root of log(2/delta) over n. And then the empirical ramp loss term is 0, as we claimed, because this is the empirical ramp loss on the training data. So you get just big O of R_S(H) over gamma. Right. So gamma. So there's a caveat with this inequality. I'm not sure whether any of you have noticed it, but if you have, maybe hold on for a second; let's first interpret it. OK, maybe let's just expand this notation. So what's the caveat here? There's actually a mistake, in some sense-- not a serious one, but there is an issue with this derivation. The reason is: what is the definition of gamma? Here, the definition of gamma depends on the data, and that messes up all the independence. When we did all of these things before, gamma was a constant: you fix gamma first, and then you draw your data points, and you have your Rademacher complexity, and so on. But here we take gamma to be something that depends on the data, which breaks the Rademacher complexity machinery, because in that machinery, you cannot let your loss function or your function class depend on the data. In theory, the function class F cannot depend on the data. When we deal with uniform convergence, the classifier h hat-- the final classifier you bound-- can depend on the data; that's the benefit of uniform convergence. But the function class F cannot depend on the data. So that's the small caveat. But this is not a very big deal. If you choose gamma to be something that depends on the data, then your function class depends on the data, and you break things in this way. I'm not going to deal with this very formally-- for mathematical rigor, of course, you can do it, and it's relatively easy to fix. The way to fix it is to do another union bound over the choice of gamma. So now you want to choose gamma to be the minimum margin, which depends on the data; but what you should do is prove this for every gamma. If you can prove this chain of inequalities for every gamma, you're OK.
You can prove it up to here for every gamma, and then in the last step, you can choose gamma to be the one you wanted, because you're already done with the Rademacher complexity-- you just plug in whatever gamma you want. And the way to do it is actually relatively easy. Roughly speaking, gamma is a single number, and doing uniform convergence over a single scalar parameter is always relatively easy; and here it's even easier, because you don't have to care about multiplicative bounds. So suppose you have a bound B on the largest possible gamma. Then what you can do is discretize the range into multiple buckets, something like this: one bucket is B over 2 to B, another bucket is B over 4 to B over 2, and so on, and you prove the bound for every endpoint in this discretization. Within every bucket, things don't really change much-- the only difference between two numbers in the same bucket is a factor of 2, so at most you lose a factor of 2. So this is saying that you only have to show a bound for the boundary points of the buckets. And how many points are there? There are only about log B points, so you can do a union bound over all of them. Actually, if you get even more technical, you can even get a log-log B dependency. But anyway, that's the rough idea of this last step of uniform convergence. Because it's relatively easy, if you look at the papers, most of them don't actually do this step, just for simplicity-- of course, they state the theorem in a different way so that the theorem is still correct; they just skip this very last step to keep it simpler. So that's also what I'm going to do: I'm not going to prove a super rigorous theorem with you. But if you really wanted to prove it, the theorem statement would look like this. It would say that with probability larger than 1 minus delta, for every gamma between 0 and gamma_max, and for every h, L gamma of h is less than L hat gamma of h, plus some big O of this, plus square root of log(1/delta) over n, plus square root of log(gamma_max) over n, something like this-- and I guess gamma_max should be larger than 1 so that the log term behaves, yeah. And as a corollary, you can have that L_{0-1} of h hat-- the hypothesis you care about-- is less than something like O of R_S(H) over gamma, plus square root of log(1/delta) over n. And here this gamma is the empirical one; maybe I should call it gamma_min. I think I have somewhat inconsistent notation here, sorry. So this is the minimum over i of y_i h(x_i). OK? Can you [INAUDIBLE] for gammas [INAUDIBLE] so you can make [INAUDIBLE] but then so like, you just have a max equal 1 so that last one [INAUDIBLE] OK. I think the question is why we don't take gamma_max to be really small, right? So first of all, it's not clear whether you can always prove that the final gamma you get on the empirical data can be really small. Actually, you want the gamma to be big-- you want the empirical data to have a bigger margin, so that your generalization bound is smaller, right?
So you do want to make the gamma somewhat large-- at least, that's the interesting regime. The very, very small gamma regime is probably not the most interesting one, because your bound would be-- [INAUDIBLE] Your right-hand side would be very big. So actually, if the gamma is really, really small, you probably don't even need the third-- oh, I'm sorry. I think I know, sorry, my bad. There is a third term in this as well; let me fix that first. But suppose your gamma is really, really small-- then you probably don't even need the third term, because the first term is already very big and already governs your generalization bound. So you do care about a somewhat large gamma. But there's still the question of what happens if all the scales are very, very small, right? So I think it's really just-- let me see. Does that answer the question? It did. [INAUDIBLE] Yeah. I think there are some small things-- for example, what if everything is super small, what if all the numbers are extremely small? I think you can make this bound a little bit tighter in some ways. Yeah. I think there's another question. [INAUDIBLE] This one? Oh, this is a log gamma_max. And the same thing here. Yep. OK, so generally, I don't recommend spending too much time thinking about this small subtlety here. The most important thing is the first term. So maybe the interpretation is more important. The first term is R_S(H) over gamma_min, where gamma_min is the empirical margin on the entire data set. And this is saying that if you are very confident about all the training examples, then you're going to have a better generalization bound-- your bound will be smaller. On the other hand-- this is the minimum over i of y_i h(x_i), so actually, only one of them matters in this definition: as long as even one of your training examples has an h value very close to 0, your generalization bound becomes quite a bit worse. So you want all the examples to be far away from 0, very confident, in some sense. And, on the other hand, you want the numerator to be as small as possible: you want your classifier to be less complex. And there's another thing to check here, which is that the scaling matches-- the scaling is the right thing. For example, we have talked about how Rademacher complexity depends on the scale of your functions: if you multiply all your functions by a half, your Rademacher complexity is reduced by half. And you can see that this bound makes sense because you cannot cheat by doing that. Suppose you have an h prime which is h divided by, let's say, 100. Then the Rademacher complexity of H prime is indeed divided by 100, but gamma_min is also divided by 100. So you cannot cheat this bound by a trivial rescaling of your hypothesis class. And that also shows that something like this has to show up here: if you don't have it-- if you only had R_S(H)-- then your bound couldn't be right. Right?
Because your bound wouldn't be invariant to scaling. So basically, I'm saying that this bound is invariant to scaling. OK? So this basically concludes our treatment of the loss. The take-home is that you have two quantities: one is the margin, and the other is the Rademacher complexity. And now let's bound the Rademacher complexity for linear models. So what I'm going to do is the linear model today, and next lecture we'll talk about deep learning generally. From next lecture on, we're going to talk more about nonlinear models in general: I'm going to first have an overview of deep learning and then come back to the Rademacher complexity of nonlinear models. That's the high-level plan. So for linear models, here is a theorem. Suppose you have a hypothesis class H which maps x to w transpose x, where w is your parameter, and the parameter w has 2-norm at most B. And also, let's assume the data distribution has a bounded L2 norm: the expectation of the L2 norm squared is bounded by C squared. If you assume these two things, then you can bound the Rademacher complexity: the empirical Rademacher complexity is bounded by B over n times the square root of the sum of the x_i norms squared. I guess this is not immediately interpretable, although the scaling has the right dimensions; I think the average version is easier to interpret. If you look at the average Rademacher complexity, you can bound it by B times C over the square root of n. So first of all, you get the 1 over square root n dependency, which is very typical for Rademacher complexity bounds. And second, in the numerator you get B, the bound on the L2 norm of the parameter, and you also get a C, which is basically about how large your data is-- the norm of the data points. And you should have both of these come into play because, again, Rademacher complexity is sensitive to scale: you should have all the scaling factors there, because otherwise you could cheat. For example, if you didn't have C here, the bound couldn't be true, because you could scale your x arbitrarily to make the Rademacher complexity arbitrarily big. So you have to have all the scalings right. So that's the first theorem we're going to show about linear models. We're going to have some other theorems about linear models under other constraints, and then we're going to compare them, and also compare with the previous bounds. But let's first prove the theorem. This also demonstrates how you generally bound Rademacher complexity using this somewhat analytical approach. So we start with the empirical Rademacher complexity. By definition, you draw some sigmas, and then you look at the sup of this sum-- here you write w transpose x_i because this is the model output-- and you take the sup over w, where the constraint on w is that its L2 norm is at most B. And now, let's do some derivations. First, we basically want to solve the sup. To solve this sup, I want to understand what this thing is, and we realize it is actually a linear function of w: you can write it as the inner product of w with the sum of sigma_i x_i, just because you can pull the linear term out. And now, what's the sup of this?
This is easy, because-- what is the sup of the inner product of w with some vector, say v, where you take the sup over all w with 2-norm at most B? Basically, we want to find the w that has maximum correlation with the vector v, under a constraint on the norm of w. There are multiple ways to do this. For example, you can use Cauchy-Schwarz: the inner product of w and v is at most the norm of w times the norm of v, which is at most B times the norm of v. And this can actually be attained by some w, so the answer is that the sup equals B times the 2-norm of v. How do you attain the equality? You just choose w to be in the same direction as v, so that the Cauchy-Schwarz inequality is tight, and you get exactly this number. I think this is one of the homework zero questions that you guys had. OK. And you can apply this here: you get B times the norm of the vector v, where v corresponds to this sum. So we got rid of the sup-- that's a big deal for us, because the sup is very hard to deal with. And now we have the norm of a random variable, and this random variable is a sum of random variables. Note that here we are talking about the empirical Rademacher complexity, so the only randomness comes from sigma, not from x. But still, this is a random variable: it's a random mixing of these x_i's. And how do you deal with this? We are going to use Cauchy-Schwarz again-- or really Jensen. Maybe as preparation, let's first move out the B and the n, so we are left with the expectation of the 2-norm of the sum of sigma_i x_i. And you say that this is at most the square root of the expectation of the square: for any random variable X, the expectation of X is at most the square root of the expectation of X squared. The nice thing about the square-- and I think we have seen this kind of manipulation more than once, in some other cases as well-- is that you can expand what's inside the expectation. It equals the sum of sigma_i squared times the norm of x_i squared, plus the sum over i not equal to j of sigma_i sigma_j times the inner product of x_i and x_j. This is just an expansion. And because i is not equal to j, the expectation of sigma_i sigma_j is 0: they are independent random variables, so it equals the expectation of sigma_i times the expectation of sigma_j, which is 0. So the cross terms are gone. So what we have is B over n times the square root of the expectation of the sum of sigma_i squared times the norm of x_i squared. And sigma_i squared is just 1, because it's a Rademacher variable. And the x_i norms are not functions of sigma-- the expectation is always over sigma-- so the expectation does nothing, and we are left with B over n times the square root of the sum of the x_i norms squared. And that is exactly our bound. So here it appears to decay like 1 over n, but the sum also grows as n grows, so when you balance them, you actually get the 1 over square root n dependency, which will be [INAUDIBLE] the average. Right? If you average over x again-- recall that the average Rademacher complexity is the average of the empirical Rademacher complexity over the randomness of the data set.
Then you get B over n times the expectation over the randomness of S-- S is the set of x_i's-- of this square root. So now you are in exactly the situation where you have a square root inside an expectation, which is not very convenient, so you raise it to a higher power using Jensen again: you get B over n times the square root of the expectation of the square of this. I'm sorry, I should use x superscript i. Right. And now it's B over n times a square root; each of the x_i's has the same distribution, and we assumed that the expectation of the norm squared is C squared. So each term contributes C squared, and after taking the square root and dividing, you get B times C over the square root of n. OK, sounds good. Any questions? OK. So next, I'm going to show another-- go ahead. [INAUDIBLE] like Cauchy-Schwarz, you could just bound it by [INAUDIBLE] Yes. So this is a great question. The question is, what if you don't use Cauchy-Schwarz-- you use the triangle inequality instead? That's actually a very good question. Let's try it a little bit. Where do you want to use it-- in the second application, once you've already-- for example, from here to here, basically, right? Yeah. So, if you don't do this and instead use the triangle inequality, you're going to bound it by B over n times the expectation of the sum of the x_i 2-norms. Let's say you also take the expectation over the x_i's-- so this is B over n times the sum of the expectations of the x_i 2-norms. And let's see what happens. You have n terms, each of these terms is on a constant scale, so the sum will be on the order of n, and then you cancel with the n in front and you just get something like B. So basically, at the end, you don't have any dependency on n anymore. And that's strictly worse, because we do want a dependency like 1 over square root n-- something that goes to 0 as n goes to infinity. And the reason this is a loose inequality is that this is a sum of things that can cancel each other, because the sigma_i's flip signs. If you do the triangle inequality, you are basically assuming all of these vectors point in the same direction. Even if all the x_i's are in exactly the same direction-- let's say that's the case-- with the random flips, they cancel each other: one of them goes in this direction, the other in the opposite direction. So you have cancellation, and that's why the Cauchy-Schwarz-- or this Jensen inequality-- is tighter. And this is exactly the gist here: you have to use the cancellation between the sigma_i's; if you don't use it strongly enough, in some sense, you won't get a good bound eventually. [INAUDIBLE] Right. Exactly, exactly. Yeah. OK, cool. And the next thing will be another theorem. This theorem still deals with linear models, but with a different norm measurement on the parameter, and you will see a different bound.
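Before moving to that, here is a quick Monte Carlo sanity check of the bound we just proved (a sketch; the data, B, and the number of draws are arbitrary illustrative choices, and the closed-form sup (B/n)·||sum_i sigma_i x_i||_2 used below is the one derived above):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, B = 100, 5, 2.0

    X = rng.normal(size=(n, d))                   # a fixed sample x_1, ..., x_n

    # For each draw of sigma, the sup over {||w||_2 <= B} has the closed form
    # derived above: (B/n) * || sum_i sigma_i x_i ||_2.
    num_draws = 20000
    sigma = rng.choice([-1.0, 1.0], size=(num_draws, n))
    sups = B / n * np.linalg.norm(sigma @ X, axis=1)
    rad_hat = sups.mean()                         # Monte Carlo estimate of the empirical R_S(H)

    bound = B / n * np.sqrt(np.sum(np.linalg.norm(X, axis=1) ** 2))
    print(rad_hat, bound)                         # the estimate should sit below the bound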
And this is one of the things I used to motivate Rademacher complexity: I was saying that you can get more precise dependencies on whichever norms you want to use. So suppose we have a different H. It's still a linear model, but the constraint on the parameter now is the L1 norm: the L1 norm of w is at most B. And we assume that the infinity norm of x_i is at most C, for all i. And also, let's specify a dimension: x_i is in R^d. Actually, this is an interesting point-- before, we didn't even specify the dimension of x in the previous theorem, because it doesn't show up in the bound. It has to be a vector in some dimension, of course, but it doesn't matter what the dimensionality is; you can even apply that theorem to infinite-dimensional vectors, as long as the norm of x is bounded by C. But the next bound will depend on the dimension, and the dimension is d. And then the empirical Rademacher complexity is at most B times C times the square root of 2 log d, over the square root of n. And you can see that now the 1-norm starts to matter. If you ignore the log d, it's still basically B times C, but with different measurements: now B bounds the 1-norm of w, and C bounds the infinity norm of x. We'll compare the two theorems after we prove this one. Let me see how much time I have-- I think I do have time to prove it and then compare. The proof won't be complete, in the sense that I have to invoke a lemma which will actually be proved by you in the homework, but let's do most of it. So the definition is the same thing. And again, you can view this as the inner product of w with some vector v, where v is 1 over n times the sum of sigma_i x_i-- what I'm writing here is the sup of the sum of sigma_i w transpose x_i. So we are doing the same decomposition, but now you are taking a sup over the 1-norm ball. And the sup, over w with 1-norm at most B, of w transpose v-- this is also relatively easy to prove-- is equal to B times the infinity norm of v. So that's how we eliminate w: we just get B times the infinity norm of v. However, now we have a problem: we have this infinity norm, so how do we proceed? You could, for example, use the triangle inequality, but then, again, we don't use the cancellations-- you swap the sum with the infinity norm, but you lose the cancellation. So how do I deal with this? The infinity norm is something different, so you cannot use the same analytical tool. What I'm going to do is a somewhat different approach. So from here-- this step is an equality, in some sense. What you do is you look at what this sup really means. You can say that this is equal to B over n times the sup over w with 1-norm at most-- sorry, I think I probably got the wrong version of the notes; that's why I was a little bit surprised by the notes, because the newer version shouldn't be like this.
But anyway, the first thing you can do is normalize w so that its 1-norm is 1-- call it w bar-- so you get the sup of w bar transpose v; that's easy. And then what you can say is: if you think about what maximizes this inner product among all the L1-norm-bounded vectors, the sup is actually attained when w bar is one of plus or minus e_1, plus or minus e_2, and so on, up to plus or minus e_d. This is my claim. And the reason is that if you look at the sum of w bar_i times v_i-- suppose i indexes the coordinates-- you know that the sup of w bar transpose v equals the infinity norm of v, and what you care about is: at which extreme point do you achieve this equality? And it turns out the way to achieve it is to take w bar_i to be 1 for the coordinate i where the absolute value of v_i is the largest, the max over all coordinates. I'm not sure whether this is immediately obvious; it probably requires a little bit of thinking offline. But at least you can verify that in this case, if you choose that w bar_i to be 1 and all the other coordinates of w bar to be 0, then the sum of w bar_i v_i is just v_i, and that v_i equals the infinity norm of v, because it is the largest one-- supposing we take absolute values, this is exactly right. And if you don't have the absolute value on the left-hand side, you can also flip w bar_i to be either 1 or minus 1. Does that make sense? It's relatively easy, but it probably requires a little bit of offline thinking as well. The basic claim is that when you do this kind of maximization over the L1 ball, the maximum is attained at a vertex-- the extreme points are always the vertices, that's another way to think about it, and the vertices are exactly the signed natural basis vectors. And then we are basically back to a finite hypothesis class. What does that mean? You can think of a hypothesis class H bar of functions mapping x to w bar transpose x, where w bar is only inside the family of plus or minus e_1 up to plus or minus e_d. So you don't have all the linear classifiers anymore; you just have 2d linear classifiers. So basically, if you put the B outside, this whole thing equals B times the Rademacher complexity of this hypothesis class H bar. OK? And we had a claim-- a lemma-- in a very early part of the lecture, saying that for a finite hypothesis class you can bound the Rademacher complexity by the square root of the log of the hypothesis class size, times something like M squared, divided by n, where M squared bounds the average squared value the class can output. So let's compute what M squared is. The size of the class is pretty clear: it's 2d. But what is the corresponding M?
So we can say that for every w bar in the set of plus or minus e_i's, the absolute value of w bar transpose x_i is bounded by the L1 norm of w bar times the L-infinity norm of x_i, which is bounded by 1 times C, where C is the infinity-norm bound for the x_i's. And that means the condition we have to verify in the lemma-- that the average of the squares of the outputs is at most M squared-- holds: each term is at most C squared, so 1 over n times n times C squared is C squared. So the corresponding M squared will be C squared, and that's why R_S of H bar is at most the square root of 2 C squared times the log of the size of H bar, over n-- which is the square root of 2 C squared log(2d) over n, essentially the square root of 2 C squared log d over n up to the constant inside the log. And now recall that we have a B in front, so R_S(H) is at most B times R_S(H bar), which is at most B times C times the square root of 2 log d over the square root of n. OK, any questions? So yeah, I think we're about time. So I guess at the beginning of the next lecture, I will discuss how you compare, or how you interpret, these two theorems. These two theorems have their strengths in different cases, depending on what kind of data you have and what kind of w's you can fit from the data. I will do that in the next lecture. OK, sounds good. I guess that's all for today. See you next Monday.
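As a companion to the L1 theorem just proved, the same kind of Monte Carlo check works here, using the vertex argument from the proof (a sketch; the data and constants are illustrative, and the bound below uses log(2d), coming from the 2d-vertex finite class, rather than the log d written in the theorem statement):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, B = 100, 20, 2.0

    X = rng.choice([-1.0, 1.0], size=(n, d))      # entries in {-1, +1}, so ||x_i||_inf = C = 1
    C = np.max(np.abs(X))

    # For each sigma, the sup over {||w||_1 <= B} of (1/n) sum_i sigma_i w^T x_i
    # is attained at a vertex +-B e_j, i.e. it equals (B/n) * || sum_i sigma_i x_i ||_inf.
    num_draws = 20000
    sigma = rng.choice([-1.0, 1.0], size=(num_draws, n))
    sups = B / n * np.max(np.abs(sigma @ X), axis=1)
    rad_hat = sups.mean()

    bound = B * C * np.sqrt(2 * np.log(2 * d) / n)   # finite-class route: 2d vertices
    print(rad_hat, bound)                            # the estimate should sit below the bound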
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_1_Overview_supervised_learning_empirical_risk_minimization.txt
OK, so let's get started. So the formulation-- so most of this course will be about supervised learning. So in some part, we're going to talk about unsupervised learning. But I think maybe like 80 of the lectures will be about supervised learning. So this is about supervised learning. OK. So let me just-- so we have some definitions. So input space-- so this is the data that you want to classify or kind of like you want to do regression on. So under the label space, that's called y. And there's a joint probability distribution [PAUSES] p over the space of x times y. And there is a-- let me see how do I-- I guess probably, I would still try to do-- maybe I should do this. And this is better, right? And we're going to have some trending data points. So these are x1, y1 up to xn, yn. Each data point is a pair of input and output. And we will use n for the number of examples forever. So n is reserved for the number of examples for this course. And each of these data points xi, yi is assumed to be joint IID from this distribution p. So p is the distribution we are interested in. And then we have some examples from it. And we have some loss function. And this loss function takes in two labels. And it also is a number that characterizes how different these two labels are. And I think typical convention is that the first one is the predicted label. And this is the true label. Oh, this is the observed label. And you assume that if you have-- the loss is always non-negative. I think in some cases the loss can be negative. But in most of the cases, the loss is non-negative. And now suppose you have a-- you can also have a predictor. Because this is what you are interested in. You want to have-- sometimes it's called model. Sometimes its the hypothesis. We're going to use all of this interchangeably. All of these are used in some different sets of contexts, but they all mean the same thing. It means the function you want to look for to predict your label. So this is a function that's called h. It's a mapping from y to-- x to y. And you can define the loss of the predictor. Example, x, y and a loss will be you first plug in h of x, which is your prediction. And you have y. This is the loss. And then after we divide all of this, you can define the so-called expected or population, risk or loss. This is kind of the interesting thing about machine learning, like everything has like two names at least. I think two is the lower bound. Sometimes you need three. And also, my brain is kind of like-- for different kind of situations, I use different name for this. You prepare for that. Just because when I learn this part of things and those literature, I use that name, and then if you learn something else, then you'll-- those kind of papers use a different name. But so my brain is just like-- these names spreading into different parts of my brain. So I might use inconsistent terminologies a little bit. But all of these are the same. They are the same. Expected just means population. And risk just means loss. But of course, I will try to-- I will try to be consistent as much as possible. And this expected risk or population risk is defined to be the expectation of the loss. And here, the random variables are x and y. And they are drawn from this population distribution p. And that's why it's called population risk. And this is your final goal. Your final goal is basically to minimize. So find H that minimize the population risk. At least, this is the goal for the first at least 15 lectures. Right? 
So this is the goal for supervised learning: you just want to predict y as well as possible. OK? OK, so to achieve this goal, we also have to introduce more concepts. One concept is the so-called hypothesis class-- sometimes hypothesis family. You can also call it predictor class, predictor family, model class, model family. So let's call it capital H. This is a set of functions from X to Y. All right. And you can define the so-called excess risk. Because at the end of the day, you're going to search over a set of functions, and maybe that set of functions is very bad-- for example, it only contains one function. So that's why people define the excess risk, which measures your error relative to the power of this hypothesis class, this set of functions. The excess risk is with respect to capital H, and it's defined to be your population risk minus the best you can achieve in this family. So this second term is the risk of the best model in H. [Student: Which h, sorry? The inf?] The inf, yes. So this is a good question-- for this course, let's say the inf and the min are exactly the same. Of course, they are not exactly the same, just because sometimes you don't have a unique minimizer; maybe I'll have a post to explain the subtle differences between the two. But for this entire class, you can just assume the inf is the same as the min. Yup. Cool. And the excess risk is at least 0, because the second term is the minimum-- there's no way you can do better than the minimum. So think of the excess risk as a way of measuring error only within the family H: if you get 0 excess risk, that means you cannot do anything better within this family. Of course, if you change your family, maybe you can get something better, but at least within this family there's no way you can do better. OK. So this is the basic language we'll be working with for this entire course. Any questions so far? In any case, feel free to interrupt me at any point-- you don't have to wait until I pause, either in the Zoom meeting or here. Some quick examples to make this less abstract, since I assume it's still relatively abstract. One type of problem is the regression problem, where the label set Y is the real numbers, so the labels are continuous. And oftentimes for regression problems you use the so-called square loss: l(y-hat, y) equals, say, one half times |y-hat minus y| squared. For example, if you want to predict the temperature, it makes sense to use the square loss. Of course there are other possible losses. Another possibility is the classification problem. In this case, Y is a discrete set-- you have a set of k labels. It could be two labels-- cat versus dog-- or multiple labels. And then the loss you eventually care about is often the so-called 0-1 loss: if you didn't get the right label, the loss is 1; otherwise, the loss is 0. Here the 1 is the indicator: the indicator of an event E is 1 if E happens, and 0 otherwise.
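(Here is a small sketch of the two example losses and of the excess risk over a tiny hypothesis class; the class H and the data distribution below are toy choices made up for illustration.)

```python
# Square loss, 0-1 loss, and excess risk on a toy finite hypothesis class.
import numpy as np

def sq_loss(y_hat, y):          # regression: l(y_hat, y) = 0.5 * |y_hat - y|^2
    return 0.5 * np.abs(y_hat - y) ** 2

def zero_one_loss(y_hat, y):    # classification: l(y_hat, y) = 1{y_hat != y}
    return (y_hat != y).astype(float)

rng = np.random.default_rng(1)
x = rng.normal(size=(200_000, 1))
y = np.sign(x[:, 0])                       # hypothetical ground-truth labels

H = [lambda x: np.sign(x[:, 0]),           # a small hypothesis class of 3 predictors
     lambda x: np.sign(-x[:, 0]),
     lambda x: np.ones(len(x))]

risks = [zero_one_loss(h(x), y).mean() for h in H]   # (estimated) population risks
best = min(risks)
print("excess risk of each h:", [r - best for r in risks])   # always >= 0 by definition
```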
So you'll see that when you actually do practical machine learning, you're not going to train with this 0-1 loss because of optimization issues, but this is the loss you care about eventually-- at least, one of the losses you could care about eventually. This is the so-called accuracy, or error, right? When you train, you maybe use the cross entropy-- that's a slightly different question. So OK. So that's the setup and the goals. And now let's talk about one important algorithm, which is called empirical risk minimization. This is the algorithm, or type of algorithm, that we'll analyze for quite some time. The algorithm is very simple-- I guess this is what you do in practice every day. You have a training loss; sometimes it's called the empirical loss, and sometimes the empirical risk. We write it as L-hat. The hat pretty much always means "empirical" in this course. It's the average of the loss over all the examples: L-hat(h) equals 1 over n times the sum over i from 1 to n of l(h(xi), yi). And then you do the so-called empirical risk minimization, ERM: h-hat is the argmin over the family of the empirical risk-- you find the best model within the family that minimizes your empirical risk. I'm using argmin here; for this course argmin and min are used in exactly the same spirit. And you can break ties arbitrarily-- we don't care about tie-breaking in most cases. So this is the algorithm. To run this algorithm you may need to use some other optimizer to find the minimum, right? But this is the abstract way of thinking about the algorithm: you find a minimizer. And the key question is, why is this a good algorithm? Why is it doing something sensible? One key property-- one reason why this is somewhat meaningful, as I guess you know already from previous classes-- is that because the (xi, yi) are i.i.d. from p, if you look at the expectation of the empirical loss over the randomness of the examples, then for any single example the expected loss equals the population loss: it is exactly the expectation over (x, y) drawn from p of l(h(x), y). To verify this is just a change of notation, because the empirical loss is an average: the expectation of L-hat(h) is an average of n terms, each equal to the expectation of l(h(x), y), so it equals L(h), where the randomness comes from all the xi's and yi's. So this is the typical justification we have for this kind of algorithm: the empirical loss is a good estimate of the population loss, and that's why minimizing the empirical loss probably would lead you to minimize the population loss. So in some sense, a good part of this course is to justify more formally why this is the right thing for us to do. Intuitively, it sounds right. But we want to prove that this is actually the right thing, and it's actually not that easy, because it does depend on some other things-- for example, how many examples you have and how large your hypothesis class H is. It's not that simple; this is just an intuition. So all right, any questions so far? And also-- I assume that most of you know this.
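(A minimal ERM sketch over a small finite hypothesis class, plus a numerical check of the property just mentioned-- that the empirical risk of a fixed h is an unbiased estimate of its population risk. The threshold class and the distribution are illustrative choices, not from the lecture.)

```python
# ERM over a finite class H = {h_t(x) = 1{x > t}}, and a check that E[L_hat(h)] = L(h).
import numpy as np

rng = np.random.default_rng(2)

def sample_p(n):
    x = rng.normal(size=n)
    y = (x > 0).astype(float)
    return x, y

thresholds = np.linspace(-1, 1, 21)

def emp_risk(t, x, y):                     # L_hat(h_t) = average 0-1 loss on the sample
    return np.mean(((x > t).astype(float)) != y)

x_tr, y_tr = sample_p(50)
t_hat = thresholds[np.argmin([emp_risk(t, x_tr, y_tr) for t in thresholds])]   # ERM

# E[L_hat(h)] = L(h): average the empirical risk of a *fixed* h over many
# independent training sets and compare with that h's population risk.
t_fixed = 0.3
avg_emp = np.mean([emp_risk(t_fixed, *sample_p(50)) for _ in range(2000)])
x_big, y_big = sample_p(1_000_000)
print("avg empirical risk vs population risk:", avg_emp, emp_risk(t_fixed, x_big, y_big))
print("ERM threshold:", t_hat)
```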
This is just a formal definition. So when you really do this, you have a hypothesis class. When you really do it in computer, you have to have a parameterized family so that you can optimize the parameters. So you can also have a parameterized family. So you call this H, for example, something like h is sub theta, and where theta is in some space of parameters. And maybe let's say theta is in RB or some kind of like-- is the parameter. And then, for example, theta that could be-- so capital theta is the family of parameters. This is the-- sometimes you want to say that you only do it for sparse parameters or only do it for certain kind of parameters. And one of the example of this is that you can take h to be, for example, h theta x, which is equal to theta transpose x. Then this is all the linear models. OK, so this is easy. And then you can also do ERM for parameterized family. So I guess here, this is actually probably the most important cases because in particular you do parameterized family. And now your training loss, let's define these training loss still as l hat theta, as l hat. But with a little abuse of notation, you say that theta is your input of the training loss. Theta is the parameter. Before we said, the training loss is a function of the model, and now it's a function of the parameter because the model and the parameter are just a one to one correspondence, in some sense. Maybe not one to one, but they have a correspondence. So your representation for the model is really through the parameters. So each parameter corresponds to model. And this is just the-- I'm just writing what you're expecting probably. So this is the empirical loss. And here I'm overloading the notation a little bit, and we are going to overload this notation in this course many times. And sometimes you write this thing-- sometimes you write this as alternatively, again with a little abuse of notation, you sometimes write this as this. And just because theta is what-- and x, and y, are what you care about, after you know these three things, you can compute a loss. These are just some notations. Because we are sometimes going to use these notations. Sometimes we use these notations a little bit exchangeably, so it's good to be aware of that. And you can define the so-called ERM solution, which is the argmin of the empirical loss. And where theta, is in this parameter, capital theta. And sometimes you just write theta hat as a shorthand for ERM, sometimes you can write this. But you don't have to remember some of these kind of cases, just we're going to remind you later. So in the goal, as you can expect it, it's really, again, just to show the excess risk of theta hat ERM is small. Because that's the success kind of criterion, right? You want to show that, you find some theta hat, and this theta hat is working and working in the sense that the excess risk is small. And this is basically the goal of the first probably few weeks. And the core in some sense is really to, I guess kind of like a trailer. In some sense, the core idea is to show that l theta is close to l hat theta, right? Because you are minimizing the l hat theta, but you care about l theta. So you have to show these two are similar in some sense, but it's not that easy. Next question [INAUDIBLE]. Sorry this is-- I guess that's me. Sorry, this is me. My bad. Actually I have a typo here, you might notice as well. Thanks. OK. [INAUDIBLE] goal again? So the goal is to show that your algorithm works, right? 
So this theta hat ERM is doing something, right, it's good. And what does it mean by a model is good? A model is good in it means that, at least in our definition, it really only means that the excess risk is small, right? But if you can make sure that you are kind of close to getting the best model in this family then that means you are doing well. So that's why the goal is to show that excess risk is small for this model. [INAUDIBLE] Eventually you care about the learning algorithm. But to show this, it does depend on what the family of the hypothesis is. But the final, final goal is that you show a learning algorithm using these family of models can work. Do you ever actually evaluate l hat? I assume it requires a sort of distribution of l theta hat. So can you evaluate l? Yeah. Empirically, I guess, yeah. So yes, you can evaluate l pretty well in the sense that you can have a hold out of data. So that's why the validation data is used. Of course, there are some subtleties about-- OK, so how do you evaluate l if you want. So the ideal scenario is that you collect some new data, and they are fresh data. And then you use the empirical estimator for it. The subtlety would be that whether you have seen this data before, right? If you haven't seen this data before, then you are all great. But if you have seen this data, then it becomes tricky. So that's actually exactly what we are doing here because I hat and l, right, so this is intuitively very much correct, but the question is that-- we will talk more about this. The subtlety is that whether we have seen the data before or not. Any other questions? OK, cool. OK, sounds good. And this is the kind of main topic in this course. Although there are going to be more and more subtleties about this, for example, like in the first few weeks, we're going to talk about this. And then other things in this course-- so we're going to talk about how to for example, one thing is how to minimize l hat theta, right? So suppose you know that all of this is great, but you still want to know how do you do this in a computationally efficient way, right? That's something we're going to touch on for a few lectures. And also we're going to talk about additional complications in some sense in deep learning. In some sense, this framework becomes questionable when you do deep learning. Of course, some part of it still survives, actually most of the part survives, but some of the-- if you really go into the low level technical stuff, then some of the technical stuff stops kind of making sense, and there are a lot of additional complications, right? So far everything is still kind of OK, but then once you go one level lower then some of the classical techniques don't apply to deep learning. And also we're going to talk a little bit about enterprise learning, which is somewhat different, but still some of these losses are involved, of course. And the transition, all of this-- like the notation still mostly applies, but with a little bit of differences. OK, so that's the formulation. And now let's move on to asymptotics. Before that, any questions? OK, cool. So what does asymptotical analysis mean, right? So this is a type of analysis where you assume that n goes to infinity. So like n, the number of examples, goes to infinity. And you show a bound like this of the form. Something like excess risk, this is our goal, which is l theta hat minus argmin-- sorry minus mean. This is less than c over n, plus little [INAUDIBLE].. 
And here this constant c is a constant, but it's not a universal constant-- it's a constant that does not depend on n but could depend on the problem, for example on the dimension, right? And the little-o(1/n), as you learned in calculus, is a lower-order term compared to 1/n. So this is kind of the general approach. And after we talk about this, we're going to move on to the so-called non-asymptotic approach, which I will discuss after we talk about this. OK, so-- There's a question. [Student: [INAUDIBLE]] [Student: Shall we close the door?] I don't know. Could you be a little quieter? Yeah, I think that one is probably fine. Anyway, yeah, that's a good question. So why do we care about a bound of this form? We want a bound that goes to 0 as n goes to infinity, because you want to say that, if you have more and more examples, you can do better and better. But whether it's 1 over n, or 1 over square root of n, or 1 over n squared, depends on what the truth is, right? It just turns out that the right rate is 1 over n, as we'll see. You cannot get better; you shouldn't get worse. Of course, it still depends on the setting a little bit, but for the setting we're going to talk about, 1 over n is indeed the right rate. Yep, so cool. OK, so now let's get into a slightly more concrete setup. So we write theta in capital Theta, a subset of R^p-- this is our family of parameters. And theta-hat is-- I'm writing this again just to restate it-- the ERM solution. And just for notational convenience, let's define theta-star to be the best model in this family, but with respect to the population risk, not the empirical risk. Theta-star is the best in terms of the population, and our goal is to bound the excess risk, which is L(theta-hat) minus L(theta-star). OK, excess risk. So our goal is to show that L(theta-hat) minus L(theta-star) is small. OK. And a trivial consequence of the definition is that L(theta-star) is the minimum of L(theta) over the family. OK, so here's the theorem that we are going to prove. Typically in this course I'm going to take the approach that we state the theorem first, and then talk about why we have to prove it, or how we prove it. So first, we assume consistency. By the way-- as with what I said in the beginning, this part of the lecture is a little bit informal, just because I don't want to get into too much technical trouble, too many regularity conditions. So what does consistency of theta-hat mean? It means that theta-hat eventually converges to theta-star in probability, as n goes to infinity. If you are not familiar with what convergence in probability means, it doesn't really matter that much. The reason you need something slightly different from ordinary convergence is that theta-hat is a random variable. If it were just some deterministic quantity as a function of n, you could define convergence in the usual way; but here theta-hat is a random variable, so technically this means convergence in probability-- just in case you are interested, but it's not that important. Convergence in probability means that, for any epsilon larger than 0, if you take the limit as n goes to infinity of the probability that the norm of theta-hat minus theta-star is larger than epsilon, that probability goes to 0. But it's not very important for this course; for this course, it's perfectly fine to just understand this intuitively.
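(A tiny numerical illustration of convergence in probability-- not from the lecture-- assuming the simplest ERM imaginable: theta-hat is a sample mean and theta-star is the true mean; the distribution and the epsilon are arbitrary choices.)

```python
# Consistency: P(|theta_hat - theta*| > eps) shrinks toward 0 as n grows.
import numpy as np

rng = np.random.default_rng(3)
theta_star, eps, trials = 1.5, 0.1, 5000

for n in [10, 100, 1000]:
    theta_hat = rng.normal(theta_star, 1.0, size=(trials, n)).mean(axis=1)
    frac = np.mean(np.abs(theta_hat - theta_star) > eps)
    print(n, frac)    # fraction of trials with |theta_hat - theta*| > eps
```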
[Student: Theta-hat is a random variable because [INAUDIBLE] depends on the probability distribution--] Yeah, exactly. [Student: Is it-- except for the fact that the map from the samples to theta-hat is measurable, correct?] Sorry, can you say that again? [Student: Something like, the map from the samples to theta-hat is measurable.] Yeah, we have all of that. [Student: When you were writing in landscape, the text was a bit bigger on the board. Would it be possible to--] Yes, but I think the issue is that it would be smaller vertically. I felt this way is better because more fits on the board. [Student: Could you maybe write a bit bigger?] Bigger? Yes, sure, that's fine. Maybe I should also repeat the questions for the Zoom meeting-- yeah, next time. OK, cool. So then, let's see. We also assume that the Hessian of the loss at theta-star is full rank. And what is the Hessian? Probably most of you have seen the Hessian if you have taken CS229: the Hessian is just the second derivative, but you organize it into a matrix. The Hessian of a function f is a matrix whose entries are the second-order partial derivatives, and it's a matrix of dimension p by p if f is a function that maps R^p to R. OK, and there are also some other regularity conditions which I'm not even going to state, because they're probably not super important for this course-- for example, something like the gradient being finite. And under these assumptions, you know quite a lot about theta-hat. The first thing you know is that square root of n times (theta-hat minus theta-star) is bounded-- it's O_P(1). I'll define O_P(1) in a moment, but roughly speaking this is just saying that theta-hat minus theta-star is roughly on the order of 1 over square root of n: if you multiply theta-hat minus theta-star by square root of n, it becomes on the order of a constant. So what is this O_P(1)? Again, this is not super important for the course. You can just think of it as O(1), as in most standard CS courses. But the precise meaning is "bounded in probability": a sequence of random variables X_n indexed by n is O_P(1) if, for every epsilon greater than 0, there exists a constant M such that the sup over n of the probability that |X_n| is bigger than M is at most epsilon. For sup, you can think of it as max, if you are not familiar with the sup. In other words, with high probability X_n stays bounded as n grows. But if you are not familiar with all this jargon, just think of it as O(1). [Student: Doesn't this assume the minimizer is unique?] Yes. Actually, uniqueness of the minimizer is already implicitly assumed when I defined theta-star, in some sense. So again, I'm being pretty informal here, but I'm already assuming that the minimizer is unique. And indeed, if the minimizer is unique, I think you need the Hessian to be full rank; but the Hessian being full rank doesn't by itself mean the minimizer is unique. OK, sounds good. Any other questions? But the most important thing here is that you know how close theta-hat is to theta-star-- and it's something like 1 over square root of n as n goes to infinity.
And then you also know how different L(theta-hat), the population risk of the minimizer theta-hat, is from L(theta-star), the population risk of the best model. And how different are they? They differ in the sense that if you multiply the difference by n, you get a constant, which pretty much says that L(theta-hat) minus L(theta-star) is something like 1 over n. OK. And actually, there's more. You also know the distribution of theta-hat minus theta-star. Theta-hat minus theta-star is a vector, right? And if you multiply it by square root of n, it's on the order of a constant-- but you also know what the distribution of this scaled random variable is: as n goes to infinity, it converges in distribution to a Gaussian with mean 0 and some covariance. And this covariance is complicated; let me write it down-- something like this. By the way, all of this is in the lecture notes, so you don't necessarily have to take notes if you don't want to. Anyway, how to interpret this covariance-- it's not really interpretable for the moment. But the point is that it's a Gaussian distribution after scaling by square root of n. If you don't scale by square root of n, the difference gets smaller and smaller; but if you scale by square root of n, you get a Gaussian with a fixed covariance. And the mean is 0, so theta-hat is centered around theta-star-- that's very good news. And the last thing: you also know something about the distribution of the excess risk. We have already said the excess risk, as a random variable, is on the order of 1 over n, right? But you also know exactly what its distribution is. The statement is actually a bit complicated, but let me do it. First you define a random variable-- call it S-- a Gaussian random variable with a certain covariance. The exact details here don't matter that much; they come out of the derivation: you derive it, and you find that this is exactly the right thing. The point is that if you define this random variable, then the excess risk, L(theta-hat) minus L(theta-star), multiplied by n, converges in distribution to a quantity involving the norm of this Gaussian random variable S. And you also know its expectation, if you really want it-- for the excess risk itself, something on the order of 1 over 2n-- and you even know the constant. OK, so all of these formulas don't necessarily matter that much, because you do the derivation and you get them. But the point is that you almost know everything: you know the distribution of theta-hat, you know L(theta-hat), you know the distribution of L(theta-hat). It's very powerful. And you can make all of this formal if you want. Any questions so far? [Student: Is the first assumption a property of [INAUDIBLE]?] Is that a property of what? [Student: [INAUDIBLE]] Yeah, so my understanding is the question is: the consistency assumption-- is that a property of something? Is it a property of the problem? Yes, that's correct. It's a property of the problem, meaning a property of the model parameterization. Yeah. So this might answer the question. [Student: I have no idea how we would derive this equation from a Gaussian.]
[Student: I'm not following [INAUDIBLE].] Sorry, you are not following why this is true? [Student: So what are some other materials that could--] I guess maybe we can talk about this offline, it's OK-- yeah, just come to me after the lecture. But one thing, for everybody: you are not expected to see why these are true, right? These are just statements saying that, OK, this can be done mathematically. I will show you how to derive this, at least somewhat informally. And the actual proof techniques are pretty simple. The calculation is a little bit tricky-- it's a little bit complicated, you have to work through it-- but the fundamental idea is very simple. Yeah, so far I'm only stating that these are all correct; you can prove all of this. That's the only thing I'm saying so far. [Student: Are these assumptions very strong, or are they easily verified for any problem? For example, the consistency assumption?] Yes, so that's a very good question, right? So far we've seen this very strong statement-- everything about theta-hat-- so something probably should go wrong, because otherwise we would have solved all the problems. There's no linearity assumption; it works for nonlinear models, right? So I think the catch is that the consistency assumption is a little bit tricky if you don't have n going to infinity. You really have to have n be really, really big, and then you can somewhat have consistency. And I think, basically, the limitation of this theorem is that you need to let n go to infinity, and you may need a very, very big n to actually see this effect. So we're going to discuss this a little bit after we move on to the non-asymptotics. But yeah, that's a trailer. Yeah. Right, so when n goes to infinity, you have super powerful tools, in some sense. And still, these are actually reasonable characterizations in many cases, so it's not like they are completely off from reality. I guess they are not necessarily that applicable to modern practice, just because these days we don't have n going to infinity, right? You have a million data points in your ImageNet, but your parameters are like 10 million. So n is not going to infinity with the number of parameters fixed. That's going to be the next half of the lecture, to some extent. [Student: Do one and two follow from [INAUDIBLE]?] Yeah. One and two are consequences of three and four, yeah. And actually, when we really prove it-- if we do a very formal proof-- you're going to prove three and four first, and then do one and two, yeah. OK, I think I have 15 minutes, right? Yeah, 15 minutes. OK, so what I'm going to do in the next 15 minutes is to show kind of an informal proof of one and two. And next time I'm going to do a little more formal proof of three and four, and then we're going to be done with the asymptotics and move on to the more non-asymptotic stuff. So this is actually the proof, right? The key of the proof is two things. One is that you're going to do a Taylor expansion around theta-star. And the second thing is that you want to somehow use the fact that L-hat is close to L, and nabla L-hat is close to nabla L-- nabla L-hat is the empirical gradient, and nabla L is the population gradient-- and this is by the law of large numbers. OK, I'll elaborate on this. But the most important thing is really the Taylor expansion, right? Once you can work in that neighborhood, everything becomes somewhat easy, OK?
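(Before working through the expansion, here is a quick simulation-- not from the lecture-- that checks statements 1 through 4 numerically in the simplest possible instance: estimating the mean of y under the square loss, so h_theta(x) = theta, theta-star = E[y], and the ERM solution theta-hat is just the sample mean. All constants below are arbitrary.)

```python
# Sanity check: sqrt(n)*(theta_hat - theta*) is order-one and roughly Gaussian,
# and n * excess risk is order-one with mean about sigma^2 / 2 for this toy model.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n, trials = 2.0, 3.0, 500, 5000

scaled_err, scaled_excess = [], []
for _ in range(trials):
    y = rng.normal(mu, sigma, size=n)
    theta_hat = y.mean()                                 # ERM for 0.5*(theta - y)^2
    scaled_err.append(np.sqrt(n) * (theta_hat - mu))     # statements 1 and 3
    scaled_excess.append(n * 0.5 * (theta_hat - mu)**2)  # statements 2 and 4

print("std of sqrt(n)*(theta_hat - theta*):", np.std(scaled_err), "vs sigma =", sigma)
print("mean of n * excess risk:", np.mean(scaled_excess), "vs sigma^2/2 =", sigma**2 / 2)
```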
So now let's talk about how to really do it. When you do the Taylor expansion, the starting point is the following. You care about theta-hat, and what you know about theta-hat is that the gradient of the empirical loss at theta-hat is equal to 0: 0 = nabla L-hat(theta-hat). This is because theta-hat is the minimizer, right? And at a minimizer, the stationarity condition tells you the gradient is 0. But you want to relate this to L, because everything is easier when you do it with L, since L is the population quantity-- and first you relate this to theta-star. Basically, the whole idea is that you want to relate theta-hat to theta-star and L-hat to L. So the first thing is that we try to relate this to theta-star. You can expand around theta-star-- theta-star is the reference point. The zeroth-order term is nabla L-hat(theta-star), and the first-order term is the Hessian of the empirical loss at theta-star times (theta-hat minus theta-star), plus higher-order terms. This is the Taylor expansion for a multi-dimensional function, but it's exactly the same as the one-dimensional case-- it's just that you have to deal with some matrices. So maybe just a small remark here. What I'm doing is expanding something like the gradient of g at z plus epsilon, abstractly speaking-- I'm going to do a lot of these abstractions. So suppose you care about this, epsilon is small, and z is your reference point; then the Taylor expansion is nabla g(z) plus nabla-squared g(z) times epsilon, where nabla-squared g(z) is a matrix and epsilon is a vector. And how do you verify this? You can do it for each coordinate individually, and you get this equation. It's intuitive as well, because the Hessian is the gradient of the gradient. So this is the first-order Taylor expansion, OK? Any questions? OK, so now, after I do the Taylor expansion, you know that the left-hand side is 0, and then you can rearrange: nabla-squared L-hat(theta-star) times (theta-hat minus theta-star) is equal to minus nabla L-hat(theta-star), plus higher-order terms. And then, since you have theta-hat minus theta-star, you can take the inverse of the Hessian: theta-hat minus theta-star is equal to minus the inverse of the empirical Hessian at theta-star times the empirical gradient at theta-star, plus higher-order terms. OK? [Student: Shouldn't that be L-hat?] Sorry, that's my bad-- it's still L-hat so far. Thanks. OK, cool. So that's exactly the right point: now I need to change all the L-hats to L. And what do I know? I know a few things. I know that the expectation of L-hat(theta-star) is equal to L(theta-star). I know the expectation of nabla L-hat(theta-star) is equal to nabla L(theta-star)-- you assume enough regularity conditions so that you can swap the gradient with the expectation-- and nabla L(theta-star) is equal to 0, because theta-star is the minimizer of L. That's why this is 0. And the expectation of the empirical Hessian at theta-star is the population Hessian, a p-by-p matrix which is full rank, as we assumed. And basically-- also, the empirical Hessian is an average of n i.i.d. terms, right?
What is this? Well, the empirical Hessian is 1 over n times the sum of nabla-squared l((xi, yi), theta)-- a sum of i.i.d. terms. So you can use the law of large numbers to say that it converges to the population Hessian. And similarly-- sorry, my bad-- the empirical gradient, nabla L-hat, converges to the population gradient, nabla L. OK. Moreover, by the central limit theorem, you can also get something more accurate about this convergence. Here you are only saying that it converges, but you can also quantify how different they are. You know that if you look at the difference between the empirical gradient and the population gradient, scaled by square root of n, it is on the order of a constant-- or, more accurately, it converges to a Gaussian distribution with mean 0 and covariance equal to the covariance of the gradient nabla l((x, y), theta-star). I'm using the central limit theorem here, so maybe I should first review the central limit theorem. Suppose x-bar is equal to 1 over n times the sum of the xi, where the xi are i.i.d. from some distribution D-- let's say d-dimensional-- and let Sigma be the covariance of xi. Then you know that, as n goes to infinity, x-bar converges in probability to the expectation of x. That's the law of large numbers. And the more accurate statement is that if you look at the difference between x-bar and the expectation of x, scaled by square root of n, it converges in distribution to a Gaussian: first of all, it's on the order of a constant, and secondly, the distribution has mean 0 and covariance Sigma. And in some sense this is saying, informally, that x-bar minus E[x] is on the order of 1 over square root of n. OK, so that's the central limit theorem. And what we are doing in this equation is basically applying the central limit theorem, where xi corresponds to the gradient of l at example i-- the gradient of the loss at example i. OK, so basically we have done these preparations: we know how different nabla L-hat is from nabla L, and also we know that the Hessian converges. And now we can come back to this important equation here, and we are ready to get something real, so let me rewrite it: theta-hat minus theta-star is minus the inverse of nabla-squared L-hat(theta-star), times nabla L-hat(theta-star), plus higher-order terms. So the first factor is close to the inverse of nabla-squared L(theta-star)-- that's the first thing we know. And also we know that the second factor is, roughly speaking, nabla L(theta-star) plus something of order 1 over square root of n, right? And nabla L(theta-star) is roughly equal to 0. So putting this together, you get something like 1 over square root of n. Maybe I'll take the question first, because this takes a little bit of time. [Student: [INAUDIBLE]] Can you say that again? [Student: Is there a difference between x-bar and x when you're using the central limit theorem?] x-bar and x? Oh, sorry, my bad. Yes-- I'm thinking of x as also drawn from D. So maybe I should either use xi, or let's say x is a generic variable that is drawn from the same distribution D. But the expectation of x is the same as the expectation of xi, that's right. [Student: Are we using a bias term here?] Here? Right. Yes, I'm using that. OK, so maybe let me just do this a little bit more carefully.
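(Before continuing, here is a quick numerical check-- not from the lecture-- of the central limit theorem statement just reviewed: the average of n i.i.d. random vectors, recentered and scaled by square root of n, has covariance close to the covariance of a single term. The distribution below is an arbitrary choice.)

```python
# CLT check: sqrt(n) * (x_bar - E[x]) has covariance approximately Sigma.
import numpy as np

rng = np.random.default_rng(5)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
L = np.linalg.cholesky(Sigma)

n, trials = 1000, 5000
samples = np.stack([
    np.sqrt(n) * (rng.normal(size=(n, 2)) @ L.T).mean(axis=0)
    for _ in range(trials)
])
print("empirical covariance of sqrt(n)*(x_bar - mean):")
print(np.cov(samples.T))        # should be close to Sigma
```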
So I'm basically trying to replace L-hat with L, right? The first thing is the gradient: using the central limit theorem, nabla L-hat(theta-star) is roughly nabla L(theta-star) plus something of order 1 over square root of n, and nabla L(theta-star) is 0, so it's roughly of order 1 over square root of n. If you don't worry too much about it being a vector rather than a real number, you can just think of it as 1 over square root of n. And the other factor, the inverse of the empirical Hessian, converges to the inverse of the population Hessian, which is a constant. So these two things together give you that theta-hat minus theta-star is roughly minus nabla-squared L(theta-star) inverse times something of order 1 over square root of n, which is on the order of 1 over square root of n. So that's how you get that theta-hat minus theta-star is on the order of 1 over square root of n. Of course, just to clarify, this is not exactly formal, because I'm ignoring a lot of things-- for example, this "1 over square root of n" thing is really a vector, not a scalar, but it's on that order. And also, heuristically, if you really care about L(theta-hat) minus L(theta-star)-- this is the excess risk-- you can again do a Taylor expansion. You expand around theta-star: the first-order term is nabla L(theta-star) transpose times (theta-hat minus theta-star)-- sorry, my bad, this should be theta-hat here. Here the interesting thing is that the first-order Taylor term is 0, so you have to do a second-order Taylor expansion: you get a term like (theta-hat minus theta-star) transpose times the Hessian times (theta-hat minus theta-star), plus higher-order terms. OK, so the reason I need to do the second-order Taylor expansion is that the first-order term is 0, because nabla L(theta-star) is 0-- theta-star is the minimizer of L, right? That's why we have to look at the second-order term. And if you want to roughly see how large the second-order term is: each of the (theta-hat minus theta-star) factors is of order 1 over square root of n, so basically the second-order term is something like 1 over n, plus higher-order terms. OK, so this is a somewhat heuristic proof of why theta-hat minus theta-star is on the order of 1 over square root of n, and, in terms of the loss, the excess risk is on the order of 1 over n. Any questions so far? [Student: So the consistency is needed [INAUDIBLE]?] Consistency is needed at almost every step. [Student: [INAUDIBLE]] You're using the central limit theorem only on the random variable, not on a function of it? Because I'm not sure whether that's-- oh, by the way, I forgot to repeat the question, but anyway, I'll remember that next time. So the question was whether the central limit theorem is applied to the random variable itself. I think so, because the xi here corresponds to the gradient: the gradient of l at example i is my random variable. That's how I got it-- the sum of the xi corresponds to the empirical gradient, right? And the expectation corresponds to the population gradient. [Student: [INAUDIBLE] wouldn't you need some [INAUDIBLE]?] Yeah. You need a lot of different regularity conditions to make all of this work, because, for example, there's also an implicit step that I didn't go through, which is the inverse, right?
I only show that the Hessian converges to for example-- where I did that, so I only showed that the empirical Hessian converges to a population Hessian. You also need to show that the inverse of the empirical Hessian converges to the inverse of the population Hessian. So that's another thing you want to formally deal with. So yeah, every time I give this-- I've taught this two or three times. And every time there are a lot of questions about this first lecture. I still haven't figured out a better way to teach it, but I think maybe the thing is just that I really want to convey this-- convey this idea. The idea is that you can do tail expansion. And you can pretty much do a lot of heuristical stuff and all of them can be made formal. And how to exactly make it formal, it's a little bit tricky as-- these are all great questions, right? All the questions are welcome, but just to set up expectations, this is not meaning to have a very formal derivation here. OK, so I think that's all for today. So next time we are going to make this a little bit formal maybe for 15 minutes, and then we can move on to other things.
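(As a closing illustration of the key identity in the heuristic proof above, here is a small numerical check-- not from the lecture. For a quadratic empirical loss such as least squares, the first-order Taylor expansion of the gradient has no higher-order terms, so theta-hat minus theta-star equals minus the inverse empirical Hessian at theta-star times the empirical gradient at theta-star, exactly. The data-generating choices below are arbitrary.)

```python
# Check: theta_hat - theta* = -[Hessian L_hat(theta*)]^{-1} grad L_hat(theta*),
# exact for least squares because L_hat is quadratic in theta.
import numpy as np

rng = np.random.default_rng(6)
n, p = 300, 4
X = rng.normal(size=(n, p))
theta_star = rng.normal(size=p)                  # population minimizer by construction
y = X @ theta_star + rng.normal(size=n)

theta_hat = np.linalg.solve(X.T @ X, X.T @ y)    # ERM / least-squares solution
grad_at_star = X.T @ (X @ theta_star - y) / n    # nabla L_hat(theta*)
hess = X.T @ X / n                               # nabla^2 L_hat(theta*)

lhs = theta_hat - theta_star
rhs = -np.linalg.solve(hess, grad_at_star)
print(np.max(np.abs(lhs - rhs)))                 # ~1e-15, i.e. equal up to round-off
```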
Stanford_CS229M_Lecture_8_Refined_generalization_bounds_for_neural_nets_Kernel_methods.txt
OK, so let's get started. I think where we left off last time was that we covered the weaker generalization bound, and today we are going to prove a stronger generalization bound for the neural network. Let me just double-check where I left off-- sorry, I got briefly confused. OK, cool. Yeah. So last time we had a generalization bound with a square root of m factor in it, and today we are going to remove that square root of m-- not exactly by just improving the bound; we also have to somewhat change the hypothesis class. That's the first part of the lecture. So today: first we have this stronger bound, and then we talk about some connections to kernel methods, and then we will talk about an even stronger bound for multi-layer networks. That requires some preparation with some techniques, and we'll talk about those techniques if we have time today; otherwise, we'll talk about them next week. OK, so just to briefly review the setup. We have some theta which consists of two layers: the second layer is a vector w, and the first layer is a matrix U that maps dimension d to dimension m. And our model is f_theta(x) = w transpose phi(Ux), where phi is the element-wise ReLU. And last time what we had was a generalization bound of the form: the Rademacher complexity of H is bounded by something like 2 times Bw times Bu times C times square root of m over square root of n, where H is defined to be the set of models where you restrict the 2-norm of w to be at most Bw and you restrict the max over j of the 2-norm of u_j to be at most Bu. That's the hypothesis class. And in some sense-- I guess we discussed this a little bit in class, and I think somebody asked this question-- there is a scaling invariance: alpha times w together with U over alpha is the same model as w with U, right? Just because you can scale the second layer by alpha and downscale the first layer by 1 over alpha, if alpha is bigger than 0. So that means you can also rewrite this bound a little bit and say, roughly speaking, the generalization error is bounded by something like square root of m over square root of n, times the norm of w, times the max over j of the 2-norm of u_j. This is kind of the intuitive way to think about it. So today we're going to have a stronger bound that doesn't have the square root of m here, but with slightly different terms in how you measure the complexity of w and the complexity of U. OK? So here is the refined bound. Let me state the theorem first. We define a complexity measure, call it C(theta), defined to be the sum over j of the absolute value of w_j times the 2-norm of u_j. And correspondingly, given this complexity measure, you can define the corresponding hypothesis class, which is the family of functions with bounded complexity-- C(theta) at most B. And also, we assume that the norm of x_i is at most C for every i. Note that here we actually have a stronger assumption on the data than before.
Because before, we assumed the average of the squared norms is at most C squared; now we assume each data point has norm at most C. This is just a technicality, in some sense. And with all of this, we can prove that the Rademacher complexity of H is bounded by 2 times B times C over square root of n. OK. So maybe let me first give some interpretation of this theorem and why it's an interesting one to prove, and then I'll write the proof. A few remarks. The first one: why is this better than before? I'm claiming this is strictly better than before, at least in the following sense. The way I compare them is the following. Before, the bound was something like square root of m over square root of n, times the 2-norm of w, times the max over j of the 2-norm of u_j-- as I already said, that's the intuitive way of thinking about it, if you treat C as a constant. C is just something about the data; it won't change as you change the hypothesis class, so it's really like a constant. And now, you can basically think of this new bound as big-O of 1 over square root of n times B. What is the capital B? The capital B bounds C(theta), the sum over j of |w_j| times the 2-norm of u_j. So basically, the way I'm comparing them is that I'm comparing these two quantities, and the claim is that the second quantity is never larger than the first. The reason is just a simple inequality: first, by Cauchy-Schwarz, the sum over j of |w_j| times the 2-norm of u_j is at most (the sum of w_j squared) to the power 1/2, times (the sum of the squared 2-norms of the u_j) to the power 1/2. The first factor is the 2-norm of w, and the second factor you can bound by the max: it's at most (m times the max over j of the squared 2-norm of u_j) to the power 1/2. So what you get is square root of m times the 2-norm of w times the max over j of the 2-norm of u_j. So in this sense, this is a strictly better bound. The two can be the same if your w_j and u_j make all of these inequalities exactly tight, but in other cases they won't be. And in some sense, one of the intuitions here-- another thing-- is that this new complexity measure C(theta) captures the scaling invariance better. What do I mean by that? I mentioned the following scaling invariance: (w, U) is equivalent to (alpha w, U over alpha). This is because the ReLU is positively homogeneous. But actually, you have a lot more invariance: you can rescale each pair of neurons separately. That is, you can scale each w_j by alpha_j and correspondingly scale u_j by 1 over alpha_j, with a different positive scalar for each j, and this is still the same function-- just because the sum over j of w_j phi(u_j transpose x) is the same as the sum of alpha_j w_j phi((1 over alpha_j) u_j transpose x), for any scaling alpha_j that is positive, right? And you can see that under this kind of rescaling, this complexity measure stays the same: if you change w_j and u_j accordingly, you don't change the complexity. The measure is really invariant to the scaling here, which, to some extent, seems like a good thing to have, right?
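(Before continuing, here is a small sketch-- not from the lecture-- checking both remarks for a random two-layer net f_theta(x) = w^T relu(Ux). All sizes, scales, and data below are arbitrary placeholders.)

```python
# Remark 1: C(theta) <= sqrt(m) * ||w||_2 * max_j ||u_j||_2 (Cauchy-Schwarz).
# Remark 2: per-neuron rescaling leaves the function and C(theta) unchanged,
#           but changes the old complexity measure.
import numpy as np

rng = np.random.default_rng(7)
m, d = 100, 20
w = rng.normal(size=m) / np.arange(1, m + 1)      # decaying second-layer weights
U = rng.normal(size=(m, d))

row_norms = np.linalg.norm(U, axis=1)
C_theta = np.sum(np.abs(w) * row_norms)                    # new measure: sum_j |w_j| ||u_j||_2
old = np.sqrt(m) * np.linalg.norm(w) * row_norms.max()     # old measure (up to constants)
print(C_theta, "<=", old)                                  # remark 1

alpha = rng.uniform(0.1, 10.0, size=m)                     # positive per-neuron scales
w2, U2 = w * alpha, U / alpha[:, None]

relu = lambda z: np.maximum(z, 0)
x = rng.normal(size=d)
print(w @ relu(U @ x), w2 @ relu(U2 @ x))                          # same function value
print(C_theta, np.sum(np.abs(w2) * np.linalg.norm(U2, axis=1)))    # same C(theta)
print(old, np.sqrt(m) * np.linalg.norm(w2) * np.linalg.norm(U2, axis=1).max())  # old measure changes
```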
So but before the complex dimension doesn't have this property, so if you look at this complex dimension. If you scale each of the wj by a different scalar and you scale each of the unit by different scalar, you wouldn't-- this number would change. [INAUDIBLE] Sure. [INAUDIBLE] two normal [INAUDIBLE].. Right. [INAUDIBLE] Right. So you are saying that this? Yes. So yes. So this one, you do make a stronger assumption. [INAUDIBLE] Sorry? Can you say again? [INAUDIBLE] By C, yes. [INAUDIBLE] Sorry, what was the question? Maybe I didn't answer. I was thinking you write that norm [INAUDIBLE] as ex or [INAUDIBLE]? So but I think, I'm guessing what you are saying is that before the condition was something like-- [INAUDIBLE] I think it's 1 over n times sum of xi 2 norm squared square root is less than C. That was in the previous theorem, I think. Or something like maybe-- that was, or in the previous theorem we did like this. Yeah. It's less than C squared. Right. So indeed, this is-- so the new condition is stronger than the old one, because this one implies the old one. Correct. So yeah. So I'm assuming that suppose you say this is not a problem, you just live with the stronger assumption. Then outer bound is strictly better. In some sense, this assumption x actually is a little bit less important to some extent. Because, for example, if any way your data satisfies the stronger assumption, then it's less important. So yeah. But you are right that the data assumption is a little bit different. But I don't think it matters that much. So I guess [INAUDIBLE]? Right. That's true. That is definitely true. Or you can choose the right C. So but I guess, I think the question was more about comparing the two theorems. If you normalize here, maybe you should normalize there. So what's the fair comparison? Cool. So this is one thing about this complex measure. And sometimes, this complex matter is a little bit more environment to-- at least of the trivial environments in the neural network. So and also, the bond is better. And also, another thing that we have about-- nice thing about this is that-- about this theorem, is that if you have n goes to infinity, at least you get a stronger or equivalent theorem. So the theorem its stronger. So what do I mean by that? So let me explain this. So suppose you look at a dependency on m, right? So this whole theorem depends on m implicitly somewhere. I didn't specify that. But now let's make it more explicit. Let's say, Hm is this complexity. And the same thing so where you have m neurons. And also C theta is less than B. All right. So for every m our theorem applies. So but now I'm just making a dependency on m a little more explicit. And you know that Hm is a subset of Hm plus 1. In what sense? In the sense that if you have a function that is in Hm, you can always add a fake neuron, or 0, dummy neuron, to make it Hm plus 1. Just so any f theta in Hm, you can add a dummy neuron. So meaning that you make w plus 1, 0, and the U m plus 1, 0. And then you can extend its function so that it becomes in Hm plus 1. So Hm plus 1 is always a strong-- it's a bigger family of functions than Hm. So that's why you have a-- but the bond will depend on m. You have the same Rademacher complexity for every m. So in some sense, you're bond would be stronger for bigger m. So the strongest theorem would be you just applied for H infinity. 
So and that's actually, in some sense, the fundamental reason why later you will see that you're going to have a generalization bound, at least the generalization bound that is decreasing as m goes to infinity. So and that's another nice property of this complexity measure. And also, another small remark is that there is something called path-norm. If you don't-- haven't heard of it, it probably doesn't matter. This is a complex measure that people proposed. And people evaluate that-- people found that this is correlated with the real generalization bound empirically. And this is very closely related to the definition of C theta here. So in some sense, what you see, that the path-norm is trying to say that you look at all the path from the input to the output. And you look at the total norms of all the paths. And in some sense, this is kind of like that. It's not exactly the same depending on which version of the path-norm. But the way you think about this is that you look at the input x. And so, this is wj. And this thing is Uj. So in some sense, you look at it-- so every path matters. So that's why you look at wj times Uj first, and then you take the sum. Instead of that you look at each layer first and then you multiply. Yeah. If you haven't heard of the path-norm, what I said probably wouldn't make that much sense. But if you have heard of it, probably you can see the connection there. This is not super important. This is just something people have empirically studied. All right, so we'll talk about more implications of the theorem later. But before that, let me prove it. Any questions so far? So how do we prove this? So you can see that one of the main point in the proof is that you want to change the scaling in the right way, because you want to capture the scaling environments. You don't want to peel off. So before what we did was that we tried to remove the w first. And then we removed the U. You have a sup over w and U, and somehow remove each of them sequentially. And now, the thing is that you still do the same thing. But you want to remove them sequentially as well. But you want to first rescale things first and then remove them so that you can eventually get the right scaling environment. I'm not sure whether this makes sense. You will see more clearly in the proof. So first of all, let's define this vector U. Let's define U bar to be the normalized version for U. So and then, let's start with the derivation. So what we have is that the Rademacher complexity is something like this. I put my 1 over in front just to make it easier. So you have-- this is the definition. And I guess, we do the usual thing. The first steps. The first two steps I'm just plugging in a definition-- xi. And now, we want to first rescale w and U before we take the sup. So what we do is that we read this as wj, Uj 2 norm. So and then we insert the phi, we use Uj bar transpose xi, right? So in some sense, you put a norm of Uj outside of the phi. The norm of Uj is a positive number. So you can put it out outside of the phi. And sorry, I have a little bit trouble reading this. But I think I can remember what-- oh. OK. There's a page segment so that I couldn't read what my notes were. Anyway, so you rearrange this a little bit. So in some sense, we treat this wj times Uj 2 norm as our old wj. And we want to kind of remove that first. That's kind of-- and also, you can see that this one is something that shows up in the complexity measure. 
The complexity measure is basically the sum of this is less than B, right? The complex measure is really just the sum of wj, Uj 2 norm, right? So you have sup over theta. And you, I guess we rewrite this. You change the order of the summation so that it's clearer that this times sum of i from 1 to n, Uj, phi of Uj bar transpose xi. And here, I guess let's specify what the concern of theta is. The concern of theta is that C theta is less than B. Which means that this wj Uj 2 norm is less than B, right? So the constraint is really just saying that the sum of wj Uj 2 is less than B. And now, you can see that the sum of these quantities is less than B. But we care about the weighted sub. So we weight each of the quantities by something. And then you take the sub. Where's the sigma that dropped out of that last one? What is it? Oh, sorry. Yeah. My bad. There's a sigma here. Sorry. This is the bad-- this is the problem when you draft things on the fly. Just this particular line, I couldn't read it from my notes. So I'm improvising. OK. Thanks. So and then, let's-- OK. So we know that the sum of wj Uj 2 is less than B. So that means that you can use an inequality here. So you say that you, so you, I guess, maybe let me just have a-- so this is basically you are applying this. ai bi is less than-- you know the sub in ai times the max of bi. This is what we applied. Actually, I probably should use j just to be more consistent with-- so this is j from 1 to n, and this is j from 1 to n, aj, bj. And aj corresponds to wj Uj 2 norm. And bj corresponds to this quality, right? It's abstractly what I'm doing. So if you live in this, then you get basically the sum of aj, the sum of wj Uj 2 norm, j from 1 to n, times the max over j. Right. So in some sense, this is how the inequality writes inner the product a and b is less than the 1 norm of a and the infinity norm of b. And then, this quantity, now we got-- this separate quantity. This is less than B, right? So then, this is less than 1 over n times sigma times b times sup over theta max j, sum of sigma i phi Uj bar transpose xi. And now, this-- if you carefully compare this with what we had before, this should look somewhat similar-- familiar. Because in some sense, we achieved almost the same thing as we have done before. We removed the influence of w. And we only have something about U. And here, what you have about U doesn't have the scale anymore. You only have Uj bar. So basically, now what you can do is that you can say that-- so from here it's basically the same thing as the previous proof. Let me try to repeat a few steps. So I guess one thing you can do is that you realize that this max over j is not doing really much. So what you can do is that you can replace this by max over U bar, where the norm of U bar is 1, and some sigma i phi U bar transpose xi. So that's one thing we can do. [INAUDIBLE] Sure. That's good point. So I should have absolute value. So I think I should have it here. And I should have it here. And I still should have it here. Thanks. Yeah. Thanks for catching all of this. So and then, this I guess, you probably also remember, there's a step I skipped before where I remove the x value by paying a factor of 2. So you can do it-- you can-- this is less than this sup. And there was-- all of this are almost the same as-- it's exactly the same as before. And now, you can peel off the-- you can remove the phi by the Lipschitz compensation lemma or the Talagrand lemma. So you can get rid of the phi. So this is Talagrand lemma. 
And then this becomes the Rademacher complexity of the linear model, and after the same steps as before you get the same kind of thing: 2 B C over the square root of n, where the C comes from the norm of the xi's. So this last part is the same as before. There is one small difference, which is that U bar is now normalized to norm one. Before, in the earlier proof, you had some other control of U-- you knew that the norm of U is at most B_U. Now you know the norm of U bar is at most 1, and that's why B_U doesn't show up in the final bound. So it's almost the same proof; the only difference is that you remove the scaling of U first-- you fold the scale of U into the w-- so that you can organize things a little better. Any questions? OK, cool. Great. So next let me talk about some of the implications of the theory here; some of them are kind of interesting. One thing is that if you believe in this theory, then what it directly suggests you should do-- this is not exactly what people do in practice, but I would argue it's close-- is to define the following max margin, or minimum norm, solution. You can either do problem one, where you minimize the complexity C theta subject to the constraint that the margin is at least 1. Why do we care about the margin? Recall that everything depends on the margin eventually, because your generalization error bound is the complexity over the margin. Or, alternatively-- and I think these are exactly equivalent-- you maximize the margin subject to the constraint that the complexity is at most 1. Let's call this program two, and we can define its value to be gamma star; I probably don't have to define it precisely now. So we can run either of these two programs. And the reason you want them is that your generalization error bound will be something like: L of theta hat, the population error, is at most C of theta hat, over gamma_min of theta hat, over the square root of n, plus lower order terms. This is using the general machinery we had: the part with 1 over the square root of n and C of theta hat is the Rademacher complexity-- it corresponds to the Rademacher complexity of the hypothesis class H-- and gamma_min is the margin, which is what we got from the margin theory. Is there any real difference between this bound and the one from before? It just seems like a rewriting. I think you are basically right, but I would say we already achieved something. Maybe the right way to think about it is to compare the two bounds in the most idealistic case: if all the wj's are the same and all the Uj's are the same, then the two bounds are just the same, and then you are right-- you are just changing the form of the bound and nothing really changed, right?
But you somehow fold that square root factor in somewhere. The thing is that this inequality is not always tight, and you probably shouldn't expect it to be tight: it shouldn't be the case that all the wj's and Uj's are the same. You should expect a decaying wj-- as you have more and more neurons, the wj's should get smaller and smaller. There's no way the inequality is tight for every width; it can be tight for one particular width, but if you add more neurons it won't be. The typical picture is that as you add neurons, the new neurons should have smaller and smaller norm, because they are capturing finer and finer subtleties of the ground truth function. So I'm saying this inequality wouldn't be tight for, say, the ground truth function. But from a purely technical point of view, you are right that we only did a small trick to change the form. So does this bound subsume the other one? I think you can say that in some sense, yes-- or at least, it depends on how you think about it. The way I think about it is that the two bounds are exactly the same when all the wj's and Uj's are the same, say all constant, or all one over the square root of the width; in that case you don't gain anything. But it's very different if the wj's and Uj's go to 0 gradually as you add more and more neurons. OK? So, going back to the generalization bound. The bound in some sense motivates using this kind of max margin, or minimum norm, solution, just because eventually your Rademacher complexity depends on the complexity measure of the model, and you also have the margin term from the margin part of the argument. And one of the interesting things is that this bound, you can show, is not increasing as the number of neurons m goes to infinity. The reason is actually pretty simple, but let me write it down to be clear about what I mean. Let theta hat m be the solution of, say, program two, where m indexes how many neurons you are using-- so for every m you have a solution-- and define gamma m star to be the corresponding margin, that is, the best margin achievable with m neurons under the constraint that C theta is at most 1. (I keep switching between the two programs; sorry, they are equivalent, and let's mostly use program two as our main one here.) So suppose you solve program two and get this max margin solution. Then the bound is C of theta hat m, over gamma_min of theta hat m, over the square root of n. And because we normalized the complexity to be 1, this is really just 1 over gamma m star times the square root of n. That's the generalization bound. So whether this bound gets better or worse depends on whether gamma m star is increasing or decreasing in m. And interestingly, gamma m star is non-decreasing, and this is almost by definition. Why? Because if you think about what gamma m star means-- the maximum margin you can achieve when you restrict your complexity to be at most 1 and use m neurons-- then with more neurons you can always achieve at least the same margin. You never get worse: you can just add a dummy neuron, exactly as in the argument I gave before, and that changes neither the functionality, nor the complexity, nor the margin; everything stays the same. But having more neurons gives you additional flexibility-- you could possibly arrange the neurons more cleverly than just adding a dummy one-- so adding a neuron can potentially make your margin bigger, and it can never make it smaller. That means the bound can decrease as m goes to infinity; at the very least, it is not increasing in m. That's the nice thing about this bound compared to other bounds with an explicit dependence on m: with an explicit m-dependence, just by looking at the bound you couldn't argue that more neurons is better, whereas now you can say the bound improves as m grows. Of course, this doesn't address everything, because it's just an upper bound-- we are not saying that the actual generalization error is decreasing in m. That would be the ideal theorem; it would match exactly the plots I showed last time, where more neurons give you better accuracy. Here we are only talking about bounds, and if the bound is loose, it's unclear whether this decreasing-in-m property is really a big deal-- and that is indeed a fair concern. But I think it's a starting point: if your bound is increasing in m, it's completely useless; if it's decreasing in m, that doesn't mean it's super powerful, but at least it's a good sign to have. And it's really hard to capture the exact test error. If you really want to show that the exact test error is decreasing in m, basically the only setting where you can do that is linear models: so far the only technique I know is to literally compute the test error analytically, which for linear models you can do with linear algebra, and in certain cases you can indeed show the error decreases. This has been a pretty popular direction in the last few years-- people have done it for various kinds of linear models-- but it's basically restricted to linear models. Here we want to work with neural networks, so we have to live with the weaker statement: the bound decreases, not the actual error. So the next thing I want to say is that these programs, one and two, are still different from what you do in practice. You probably don't regularize with exactly this complexity measure; nobody regularizes like that. Probably somebody tried.
It probably wouldn't make a difference. And here, what I'm going to say is that actually it's interesting that this complex measure is definitely different from L2 complex measure, right? But once you minimize the complex measure, you get the same effect as minimizing the L2. Or minimizing the L2 is the same as minimizing this. Maybe let me just clarify what does that mean. So basically, my main point here is that if you maximize margin, sorry. You can-- so can be done by minimizing the cross-entropy loss, the one with L2 regularization. So here I have two things. One is I'm using cross-entropy loss. And the other is I'm using L2 regularization. I'll do one of them as that. So the first, I'm going to do first use L2 regularization instead of the complex measure I defined. And I'm going to say that it's actually doing the same thing. So here is this first lemma. So suppose you consider the one we have considered, right? So let's call this J1, which is you minimize the complexity with the constraint that the margin is larger than 1. By the way, I keep changing the-- sometimes I'm minimizing the complexity with the margin, sorry. And sometimes I'm minimizing the margin with the complexity, so that's the one. So somehow, I probably should make them all consistent. But just in my mind they are always the same. So sometimes I forgot to-- sorry. I should probably just keep a single version of it. But they are the same. They are just equivalents because-- yeah. So anyway, so here, I am minimizing the complexity with the margins larger than 1. And I'm claiming that if you look at another one, which is you minimize the L2 norm, and with the constraint that the margin is larger than 1. So these two are the same. So obviously, these two functions are not the same. There's two complex measure are not the same. But if you minimize the complexity, the extreme point actually turns out to be the same, which is kind of interesting. And the proof is like follows. So I think at least one thing you know is that the L2 regularization, what is that? This is L2 regularization is the sum of the squares of all the parameters, which is sum of wj squared plus sum of Uj 2 norm square. And you can show that this is larger than the complex measure we have defined, because you can use the am, gm. So you can say this is wj squared plus Uj 2 norm squared. And you use-- I think this is called AMG, I mean, inequality of-- for me, everything is Cauchy-Schwarz, so JC inequality. So anyway, so you get wj times U2 2 norm times 2. And you cancel these 2. So this is B theta. So you are minimizing-- so in J2, the program J2, you are minimizing a larger complex dimension. But I guess the intuition is that even though you are minimizing a larger complex measure, but when you-- the extremal point actually will make these two things the same. So the intuition is that the extremal point should satisfy-- should satisfy wj is equal to Uj even when you are minimizing the L2 regularization, right? So and if that's the-- suppose that's the case, then you can believe that these two things are the same. Because when I'm minimizing the L2 regularization, if the extremal point is-- satisfies this, then for this case, if this is true, then C theta is the same as the L2. So then, you are not really doing anything different. So that's kind of the intuition. If you really want to prove this kind of formally, I guess the simplest way to prove this is the following. So you say that, I guess this implies that J2 is larger than J1. 
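For reference, the AM-GM comparison used a moment ago, written out:

```latex
% Per neuron j:
\frac{1}{2}\big(w_j^2 + \|u_j\|_2^2\big) \;\ge\; |w_j|\,\|u_j\|_2 ,
\qquad \text{with equality iff } |w_j| = \|u_j\|_2 .
% Summing over j:
\frac{1}{2}\|\theta\|_2^2 \;=\; \frac{1}{2}\sum_j \big(w_j^2 + \|u_j\|_2^2\big)
\;\ge\; \sum_j |w_j|\,\|u_j\|_2 \;=\; C(\theta).
```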
And you want to use the intuition to show that J1 is bigger than J2 instead-- it's larger than J2 as well. So what we do is that we say, let theta be the minimizer of 1, of J1. Maybe let's call this-- I think let's call this maybe 3 and this is 4. So is that a good number? That's probably not a good number. Let's call this P1 and P2. So minimizer of P1. And then, what you do is you construct. So you get a theta that is the minimizer of the first one. And you want to construct a theta prime which is very good on the fact-- in terms of the second program. So you construct a theta prime. And what you do is that you say, I'm going to take wj prime to be the renormalized version of wj. And Uj prime again, to be the renormalized version of Uj. And then, you can verify that because I'm just changing scaling, Uj times phi of wj transpose x-- is the same as-- sorry. wj times v of Uj transposed are actually the same as before. And also, wj in terms of the complexity measure, they are also the same after doing this transformation. And this means that C theta is the same as C theta prime. And f theta is the same as f theta prime. So the functionality and the complexity measure didn't change. And what's interesting is that for theta prime, C theta prime is also equal to the L2 norm. Because my construction-- OK-- why I'm doing this construction? I'm doing this construction because I wanted wj it to be equal to the norm of Uj. This is why I chose this scaling. Anyway, I think this should be like this. Oh, sorry. No. Am I right? Oh, no. It's like this. So we can verify wj is the same as Uj 2, this. So because this is actually my design in some sense. You can verify this. But this is-- if this is true, I should change my designs to make it true. But that's the point. So that means that-- so what does this mean? This means that theta prime satisfies constraint of-- so all of this means that theta prime constrains of p2. So that means that C theta prime is less than-- or is bigger than J2, right? And C theta prime is equal to, n theta to the prime is equals 1/2 theta prime squared, is equal to 1/2-- sorry. C theta prime is equals to-- OK. What is this equal to? This is equal to-- let's see what's going on here. Then I want to show that theta the prime is equal to J1. This is because-- all right. This is just because the problem is equal to C theta, which is equal to J1, OK? I didn't change the complexity measure because I'm just rescaling. So that's J1 is bigger than J2. And before you got J2 is bigger than J1. So that's why J2 and J1 are the same. Yeah. Actually I was a little hesitating whether I should show this proof or the more intuitive-- another version which is actually in the lecture notes. In the lecture notes there's a-- it's a different way to prove the same thing. At the end of it, everything is relatively simple. It's nothing really hard. So this proof is very easy to verify. And the other proof is kind of in some sense carries the intuition. And intuition is really just what I said, at the extremal point anyway wj and Uj 2 norm has to be the same so these two complexity measures are not different. So that's the manual intuition. [INAUDIBLE] Theta prime satisfies the constraint p2. So the constraint is only about the margin, right? So the margin is only about the functionality of this model, right? So if you predict the same thing, your margin will be the same, right? So theta prime and theta have the same functionality because you only rebalance the scale, right? 
You just multiply wj by something and divide Uj by something else. So the functionality is maintained. So that's why the margin is the same. In the first order proof proof, why is [INAUDIBLE]?? In the why there is no-- In the first order proof when you just-- you pull out the sum? Here? When you need it? Yeah. So here, this is the equality? No, the line below it. Oh, sorry. Sorry, sorry, sorry. This, not that. Why this is equality and this is inequality. I got it. OK, cool. Great. So the first thing, so the first lemma we have shown, what we have done. We basically are saying that if you minimize the L2 norm, it's the same as minimizing this complex measure, OK? And we also wanted to do the cross-entropy. And this is something I am not going to prove. But I'm just going to state the lemma. And if you're interested, you can read a paper about it. The proof is actually relatively simple. But I think we won't probably have time today to do that. So the lemma 2 is that if you consider a regularized cross entropy loss, and something like L hat lambda theta, which is equal to 1 over n times-- I guess in this lecture this is the first time I have ever talked about what is cross-entropy loss. But I assume that you somewhat know what they are, right? This is the loss for logistic regression where you have yi times f theta xi. So this is the input. And the loss is the log of 1. So the loss in some sense is really log 1 plus exponential minus t. This is the logistic loss. And you add some lambda times L2 regularization. Suppose you do this. And let's say, let theta lambda hat be the minimizer. I'm going to claim that for small enough lambda theta hat lambda is basically doing the same thing as the max margin solution. But there is a small thing that I have to deal with, which is that what is the norm, right? Because the max margin thing is-- you need a norm. You need to basically-- you need to cover the ratio between the margin and the norm. So that's why my statement is that-- again, I don't know why-- OK, here. So then, my statement is something about this. It's like this. So basically, you can say that if lambda goes to 0 for small enough lambda, then the norm versus the margin will go to J1. J1, which was defined to be the max-- the minimum norm solution, right? This is just I'm recalling the definition. So basically, you are converging to the max margin solution, or the minimum norm solution up to a scaling. Because you are looking at a ratio. So this, when you have a very small theta, you-- sorry. When you have this very small lambda, your normal theta would be something actually pretty big. That's because your organization is too weak. So you are not going to get very big norm solution. But if you normalize the norm with the margin, then you found that this is actually the max norm solution. I'm not going to prove this. I guess if you're interested, this is a theorem 4.2 of a paper I wrote with two collaborators. And actually, this theorem is actually very simple. And actually, it works for not only the L2 regularization, it works for almost all homogeneous-- or almost all regularizations you can think of. So the gist is basically saying that if you care about the max margin solution with respect to certain complex measure, so the complex measure could be L2. In this case, it could be something else. Like here, it could be anything, right? So one way to achieve it is that you just add a very weak regularization in the cross-entropy loss. And that will give you a max margin solution. OK. 
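Before moving on, here is the rescaling step from the proof of the first lemma made concrete-- a minimal numerical sketch with made-up weights (the particular balancing formula is the natural choice implied by the proof, written out explicitly here):

```python
import numpy as np

# Balance each neuron so that |w_j| = ||u_j||_2; this leaves the function and
# C(theta) unchanged, and makes (1/2)||theta||_2^2 equal to C(theta).
rng = np.random.default_rng(1)
m, d = 4, 3
w = rng.normal(size=m)
U = rng.normal(size=(m, d))

def relu(z):
    return np.maximum(z, 0.0)

def f(w, U, x):
    return np.sum(w * relu(U @ x))

def C(w, U):
    return np.sum(np.abs(w) * np.linalg.norm(U, axis=1))

# Balanced reparameterization:
#   w'_j = sign(w_j) * sqrt(|w_j| * ||u_j||),   u'_j = u_j * sqrt(|w_j| / ||u_j||)
norms = np.linalg.norm(U, axis=1)
w_bal = np.sign(w) * np.sqrt(np.abs(w) * norms)
U_bal = U * np.sqrt(np.abs(w) / norms)[:, None]

x = rng.normal(size=d)
print(np.isclose(f(w, U, x), f(w_bal, U_bal, x)))   # same function value
print(np.isclose(C(w, U), C(w_bal, U_bal)))          # same complexity measure
l2_half = 0.5 * (np.sum(w_bal**2) + np.sum(U_bal**2))
print(np.isclose(l2_half, C(w_bal, U_bal)))          # (1/2)||theta'||^2 = C(theta')
```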
Any questions? [INAUDIBLE] Yeah. So the general gist is this: suppose you care about the max margin solution. A max margin solution requires a complexity measure-- you need to say, I'm minimizing such-and-such norm subject to the margin being at least 1, or I'm maximizing the margin under some norm constraint. So if you want the max margin solution with respect to some complexity measure-- it could be L2, as in this case, or it could be something else-- you just put that complexity measure here as the regularizer, with a small enough lambda, together with the cross-entropy loss, and the solution will give you the max margin solution. Of course, you could also look for the max margin solution directly by solving the program, but you can do it this way instead, and this way is closer to what is typical-- at least, this is what people do empirically all the time. So in some sense this is just linking what people do empirically to the max margin solution; nobody in deep learning actually solves the max margin program directly. But there is a caveat if you care about the broader interpretation: you need lambda to be very small. The statement says that with a very small lambda you get the max margin solution, but empirically you don't use such a small lambda-- you use something bigger than infinitesimal-- so empirically you probably won't get exactly the max margin solution, only something similar to it. And it's actually kind of interesting: in CS 229 you learned the max margin solution, so it sounds like, before deep learning, that was the right thing to do. But even for linear models-- and I'm not a practitioner, I do a lot of theory, but when I run experiments-- I've never seen the max margin solution be the best for a linear model. With a very small lambda you do get the max margin solution, but with a somewhat bigger lambda it's sometimes a little better. So I think the max margin solution is, in some sense, a theoretical approximation of what people really do in practice. All right. So in the next part I'm trying to connect this two-layer network-- deep learning, though not very deep-- with the so-called L1 SVM. The exact statement is in my paper as well, but it's only three paragraphs in the appendix, and we are not really inventing it; we just wrote down something people already knew implicitly, because we thought it was useful to write down. The claim is the following: what the neural network is doing, with this two-layer architecture and the max margin solution, is really just something like an L1 SVM in some kind of feature space. But let me explain-- I haven't defined what an L1 SVM is. You're probably familiar with the SVM; that's the so-called L2 version. Here you're going to have a slightly different version. The idea is that, first of all, let's look at an infinite number of neurons, because we've claimed that more neurons is always better-- so why not think about infinitely many neurons, and see what that buys us?
So you look at the max margin, where you have infinite number of neurons, this is the largest possible margin you can achieve with even infinite number of neurons. And suppose that this is achieved by U1, so on, so forth. You need infinite number of neurons, probably. So many neurons. Actually, you can achieve this without an infinite number of neurons. You can achieve it with, I think n plus 1 neurons, any number of data points. So but let's say you have infinite number of neurons. Just like basically, infinite is not very different from n plus 1 neurons. As long as you have more than n plus 1 neurons, you don't really get anything more from this. So and again, U bar is the normalization of U. So I think we have kind of played with this a lot of times. This is equivalent to w1 U1 2 norm, U1 bar, so on and so forth, right? This is what we have. Let's call this theta-- I think I'll call it theta tilde in this case, and I call this one w1 tilde. So we have then this rescaling a lot of times. And we know that if you rescale this, you don't change the complex measure. And then here, the complex measure is the wj Uj 2 norm. And this is just the wj tilde absolute value. So this is the 1 norm of wj-- w tilde. So that's where the 1 norm comes into play. So basically, the idea is that after you change this viewpoint, basically you just view w1 and w1 tilde as the variable. And then, you are doing some kind of sparse linear regression or sparse SVM. So formally, what you can do is that you can pretend so every U in the sphere-- on the sphere Sd minus 1 is the d-dimensional sphere. So you pretend every U in the sphere shows up in the collection of U bars. Once you pretend that every U-- why this is possible? This is just because adding more neurons is never a bad thing. But you add a lot of neurons to the Ui's, and you just set those corresponding-- this is just because you can add neurons at Uj and 0. You just add this things, it never changes anything. If you don't see any neurons in this collection, you just add this neuron into the collection. And you add 0 as the coefficient. It doesn't change the functionality. It doesn't change the complex measure. So that's why you can pretend that the collection of u1 up to Un, and I guess there is also more-- you have infinite of this. It's really just a collection of all the possible unit norm vector on the sphere. That doesn't really change anything. And once you have that, then-- so once you pretend that this Uj bar is just equal to Sd minus 1, then you can take a continuous perspective. You can say that this f theta tilde x is really something like sum of-- if you write this discrete version, you get this. But you can say this is-- you can think of this as a continuous version, where for every U bar you have a w. And you are integrating over all the U bar. I'm not sure whether this make any sense. This is the simplest way that I came up with to define this without talking about what's of-- I think this is one way that I came up to explain this without too much jargon. But I, of course, I don't know whether this works for everyone. Again, in the lecture notes, there is a slightly different way to introduce this, which requires a bit more jargon. I don't know. Any questions? [INAUDIBLE] Sorry. My bad. Why did I write sigma here? Phi. Yeah. All right, sorry. OK. Yeah, feel free. So [INAUDIBLE] number of neurons [INAUDIBLE]?? Right, OK, yeah. Can I have uncountable-- So the question is whether we can have uncountable for the number of neurons? 
This is not super important-- it's just a concept. You could ask the same question about the integral: when you define an integral, you use a countable number of discretizations, take the limit, and still end up with something over an uncountable set. This is kind of the same thing. And eventually this is really just a language, in some sense; it's not like you implement the integral in practice. Does that make sense? OK. So basically, the way you can view this is that you can write it as w tilde inner product with phi, where phi acts as a universal feature map. Each of these is a feature, and this is the coefficient in front of the feature. And the difference here is that this feature is a predefined feature-- it's no longer something learned-- because you have every possible U bar in the world in your feature set. So you can view phi as this gigantic feature vector, with all possible U bars as features, and w tilde as the coefficient vector in front of the features. If you see it that way, phi is the feature map in the kernel sense, and w tilde is the theta, the weight vector, the parameters in front of the features-- so now you have a linear function of the features. But the point is that the complexity measure corresponds, as we argued, to the 1-norm of w tilde, not the 2-norm. So the max margin problem with C theta at most 1 corresponds to an L1-norm max margin problem: you maximize, over w tilde, the margin-- the minimum over i of yi times w tilde inner product phi of xi-- subject to the constraint that the 1-norm of w tilde is at most 1. This is called the L1 SVM with feature map phi. The difference from the SVM you learned in, for example, CS 229 is that this is a 1-norm constraint rather than a 2-norm constraint, so it's not just doing a simple kernel SVM; it's doing something different. And the interesting thing is that the L1 SVM is actually not implementable with an infinite number of features. When you take CS 229, one of the messages is that with the kernel trick you can work with infinite-dimensional features, because everything depends only on the kernel, the inner products of the features-- you don't really care about the dimensionality of the features. But here you don't have that kernel trick anymore: with the L1 constraint, the final solution is not just a function of inner products of features, so the kernel trick doesn't apply, and that's why you cannot implement it with the kernel trick. This part is purely for understanding. It's saying that the neural network is doing something more than what you can do with a kernel: you are effectively solving an L1 version of the kernel problem, which you are not able to do with the standard kernel trick-- you have to use the neural network to achieve it. So did we prove that the L1 SVM is not implementable, or is that just sort of a claim?
Yeah, we didn't prove that L1 SVM is not implementable. But I think the-- how do I say this? I guess, how do you prove that it's not implementable? You have to have a-- you have to say what do you mean by implementation. So this is just really just-- we don't know. Maybe the easiest way to say is just, we don't know how to implement it. But it sounds like very unlikely to be able to be done. Also on the other side, on the flip side, for neural networks we are saying that you can implement it. So basically, here is saying that you can effectively use neural network to implement this L1 SVM. But the caveat is that you still don't know whether you can optimize the network. So it's not an end to end result. It's saying that if you assume you can optimize your neural network efficiently and up to global minimum, then you can solve the L1 SVM. But there is a caveat about whether you can really computationally solve neural network. That's something we don't know how to do. We don't know how to prove theoretically. Empirically it sounds like true, you can do it just by gradient descent. OK. So I think this is all I wanted to say about the two-layer network. Next we are going to talk about-- our goal would be-- the next goal would be to prove something about multiple-layer network. And we need more tools. So my plan is to spend the next 10 minutes to talk about some of the tools. And we need to continue about the tools in the next lecture. And then we can talk about how to have better bounds for multi-layer network. But if there's any questions, I can talk about that. I can answer any questions first. It's a little bit awkward. I thought I have 20 minutes. But there is only 10 minutes. But still, I think it's OK. We can start with the simple thing. But it will be kind of like a-- at least for the moment it will be a quite different mindset. We are thinking about the tools again. OK. So now we are talking-- we are getting back to how do we bound Rademacher complexity. And we are talking about the different type of tools. And let's recall, maybe-- OK. I guess before doing that, maybe let's think about a function space view of the Rademacher complexity. So maybe let me write down the Rademacher complexity first. So this is something like if you have a function of class f, a Rademacher complexity is-- this is the empirical Rademacher complexity. So if f is equal to Z1 up to Zn. And let's think about-- let's define the following set Q. This is a set of vectors. And the vectors are the outputs of f on these endpoints. So for every function, you're going to have a vector, n dimensional vector. And so this is basically the set of outputs of f on the data points Z1 up to Zn, right? These are all the possible outputs you can get from applying f on this set of points. And they are vectors. And then, you can rewrite this as the Rademacher complexity as the following. So you can think of this as you are looking at all the possible vectors V in Q. And you look at inner product of sigma with v, right? Just because this 1 over n sigma v is really just the sum of sigma i vi, which is the sum of sigma i f Zi. This is just a rewriting. So the point here is that this RSF only depends on Q. It depends on the outputs. But for example, but not as opposed to the-- for example, the parameterization of f. Let me explain what I mean here. So for example, let's suppose you have function of class F, which is F of x is equal to something like sum of theta i xi. Where theta is in dimension d. 
And suppose you have another function class, f prime, which is of the following form, say something like sum of theta i. I'm writing something, where theta is in d and w also is entirely in d. This is just a weird example just to demonstrate a point. So suppose you have this two function class. And they have different parameterization. They have different parameter space even, right? So one has d-dimensional parameter space, and the other has 2d dimensional parameter space. But these two functions have the same Q-- corresponding Q. Because they are the family of outputs are the same. Because in some sense, you can have a one to one match between one-- a function in capital F and a function in capital F prime because they are just-- for every possible outputs that can be output by the function F, you can also find the one that can be output by some function in F prime. So they have this different parameterization. But they have the same functionality in some sense. Or it's the same family of functions. And they have the same Q. And then that means they have the same Rademacher complexity. So I guess, I'm just trying to reinforce this idea that the only thing that matters is the outputs of the functions, but not how the functions are represented or parameterized. And this would be useful as a general thing. It's kind of like a change of mindset. So before you are talking about the parameters, right? So what are the parameters of F? How do you discretize your parameters? From now on, we are going to get rid of-- we are not going to think about the parameters that much. We are more thinking about the outputs of the functions. And there's a so-called Massart's lemma, which is actually one of the things you are asked to prove in the homework. So this lemma is saying that if this Q satisfies that Q is, first of all, so I guess maybe let's say for every vector V in Q, the two number of V over square root of n is less than n. So this set of Q contains only bounded vectors in this sense. By the way, from now on, we're going to see these things very often. Just because you want to normalize your vector. You measure the vector by the normalized norm. So the norm itself doesn't matter that much. You want to normalize the norm by the dimension of the vector. [INAUDIBLE] Right. So this is like the range from theta [INAUDIBLE]?? Right. That's right. But I think this is actually a very good question, which I probably should have talked about earlier. So I think probably I mentioned this a little bit at some point. So one of the nice thing about empirical Rademacher complexity is that now you are in this mindset that your Zi's are fixed. So you don't have any randomness in Zi's. They are just the endpoints fixed there forever. And of course, the functions can be changed when you have a family of functions. But you don't have a changing Zi's. So that simplifies things a lot in some sense. And so, in some sense, you can think of this even in some sense, you can think of the functions-- the family of functions are functions that map Zi's to real numbers, but not functions that maps Rd to real numbers. You forget about any other points. This will just have endpoints. And all your functions can be just represented as n numbers, which are the outputs of these endpoints. There's no other point you have to care about. That's kind of the beauty of the Rademacher complexity in some sense. That's why it's powerful. Because you, before the Z is the source of the randomness. But now the randomness come from the sigma. 
So that's why you can fix the Zi's. [INAUDIBLE student question] I think the exact statement is not quite what you said, but you're in the right direction. Basically, the Rademacher complexity depends on how complex this set Q is-- that's what I'm going to say. And actually-- I think we've mentioned this before-- if Q is not very complex, for example if Q is a finite set, then you get a good Rademacher complexity bound. Of course, how to measure the complexity of Q is a question we still have to study, but for example, if Q is finite, then you have a bound on the Rademacher complexity. That's what I'm going to write. So you need two things: one is that Q is finite, and the other is that Q is roughly bounded, in the sense that for every vector v in Q, the 2-norm of v divided by the square root of n is at most M. Then this expectation over sigma, which is equivalent to the Rademacher complexity, is bounded by the square root of 2 M squared log of the size of Q, over n. So the size of Q comes into play. And as a corollary-- I think I presented this corollary before with a different proof-- if F is finite and bounded on the Zi's in the following sense, that the average squared output is at most M squared, then the Rademacher complexity of F is bounded by the square root of 2 M squared log of the size of F, over n. OK. So that's the relatively easy case where you have a finite hypothesis class. And this is a homework question-- I think there's a hint, which is actually pretty important: you should consider using the moment generating function, which makes the math easier. There are actually two ways to prove it. The other way is discretization plus a union bound, and you'll have a relatively hard time with that, just because the constants are hard to get right-- you can work out a similar bound, but it's a little messy. The moment generating function approach is really clean; the proof is pretty short if you use it in the right way. OK. Let me briefly give a quick overview of what we're going to do next, so you can appreciate why I'm setting things up this way. The next question is: what if Q is not finite? What do we do? Our answer will be that you do some discretization plus a union bound-- you have some epsilon-covering, and then a union bound. Or maybe I should say: you discretize to reduce to the finite case. That's basically the idea, and you've probably seen it before-- maybe in the third lecture, when we talked about infinite hypothesis classes. But the difference here is that you are discretizing the output space, the set Q, which is a set of n-dimensional vectors. Before, you were discretizing the parameter space: you had a d-dimensional parameter space and you discretized that. Here you are doing a more fundamental discretization, because how the functions are parameterized is probably not the most important thing; what's really important is the functionality of the family of functions. So now you are discretizing in the right space, the more fundamental space, which is the space of the outputs.
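Written out, the statement just sketched (Massart's lemma) and its finite-class corollary are:

```latex
% Massart's lemma: if Q \subset \mathbb{R}^n is finite and \|v\|_2/\sqrt{n} \le M for every v \in Q, then
\mathbb{E}_\sigma\Big[\sup_{v\in Q}\ \frac{1}{n}\langle \sigma, v\rangle\Big]
  \;\le\; M\,\sqrt{\frac{2\log|Q|}{n}} .
% Corollary: if \mathcal{F} is finite and \frac{1}{n}\sum_{i=1}^n f(z_i)^2 \le M^2 for all f \in \mathcal{F}, then
R_S(\mathcal{F}) \;\le\; \sqrt{\frac{2M^2\log|\mathcal{F}|}{n}} .
```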
So what we will do is, we're going to discuss a few techniques to discretize this Q and what kind of discretization you really need, so and so forth. So and there's actually some kind of a pretty deep theorem, which is called the Dudley chaining theorem. Which actually requires you to discretize in your nested way. You have a hierarchical discretization, so that you can have the best discretization. So this is something beyond what we have done before. Even you don't care about the difference between the output space and parameter space. Here you can discretize in a much more efficient fashion. So that's what we're going to do next. And then we're going to use this for the multi-layer network. Sounds good. I think that's all for today.
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_17_Implicit_regularization_effect_of_the_noise.txt
OK, cool. Let's get started. So I guess today we're going to talk about implicit regularization of noise. And the plan today is that because this is a pretty challenging topic and I think the research community is still, in some sense, doing research on this-- so we have some results. It's pretty complicated. So what I'm going to do is I'm going to somewhat-- using a relatively heuristic approach. So I'm going to try to convey the main idea without doing the actual rigorous statement. So in this lecture, I don't even think I have a formal statement to state because it's just a little bit too complicated and unnecessary, right? So if I really proved the formal version of the theorem, that probably would take two lectures or three lectures. So that's why instead I'm trying to kind of at least convey the main intuition why the noise is useful still with some math, because without the math, you don't even see the intuition sometimes. But the math wouldn't be always rigorous, and I would not know where it is kind of like not rigorous. And also, some part cannot be made rigorous without additional assumption. And I'll be-- I am clarifying that. And so some part is really just for convenience. I ignore some kind of the jargons, but they can be fixed by just doing more careful math. And some part is actually fundamental challenges, and you have to really use additional assumptions or maybe even change the problem setting to go through those steps rigorously. So I guess the main portion of the talk, the lecture is actually not about any particular loss function. It's about generic class function. We're going to make some special simplification for them, but you don't even need to really think about parameterization in most of this lecture. So the setup is that we have a loss function. Let's call this function g theta. And I'm also going to use x as the variable at certain cases. So the stochastic gradient descent algorithm-- by noise, I really mean the noise in SGD. The stochastic gradient descent algorithm that'll we'll analyze is something like this. So theta t is equal to theta t min. Theta plus 1 is equal to t minus some noisy gradient. So we have the full gradient plus some stochastic noise, where the expectation of this ksi t is 0. So this is really a mean 0 noise. But the distribution, ksi t, in the most general case, the distribution of ksi t can depend on theta t. Right. So the noise distribution depends on which point you are evaluating at, right? So you can see this formulation at least so far, at a very general level, does capture, for example, stochastic gradient descent, as you usually know, like the mini-batch stochastic gradient descent. Because suppose you take a mini-batch gradient with a few samples, then it is indeed can be written as something like the full gradient plus a stochastic variable, which means 0. But we are not going to analyze that particular version because then noise becomes too complicated in some sense, right? So we're going to analyze much simpler noise in most of the cases like something like a Gaussian noise. So this is-- so strictly speaking, this is more about noisy gradient descent than stochastic mini-batch stochastic gradient descent. But they do share a lot of similarities. OK? And what we are trying to do is we're going to gradually build up our intuition about how does this noise affect our optimization algorithm. So we're going to start with various-- we're going to have several levels warmup. 
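As a reference point, here is a minimal sketch of the update rule above in code; the function names are placeholders of my own, and the noise sampler is allowed to depend on the current iterate, exactly as in the general formulation:

```python
import numpy as np

# Noisy gradient descent template: theta_{t+1} = theta_t - eta * (grad g(theta_t) + xi_t),
# where xi_t has mean zero and its distribution may depend on theta_t.
def noisy_gd(theta0, grad_g, noise_sampler, eta=0.1, T=1000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for _ in range(T):
        xi = noise_sampler(theta, rng)        # mean-zero noise, possibly theta-dependent
        theta = theta - eta * (grad_g(theta) + xi)
    return theta
```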
So the first warmup: what if you have a quadratic loss function? A quadratic loss pretty much means you have a linear model under the hood, but here I don't even have a model parameterization-- I only have a loss function, g. Say we have a quadratic loss, Gaussian noise, and a one-dimensional variable. From now on I'm going to use x as my variable, just to be more consistent with the optimization literature. Let's take g(x) to be 1 over 2 times x squared; the 1 over 2 doesn't really matter, it just makes the gradient cleaner. So what's the update rule in this case? You are optimizing a quadratic function whose global minimum is at 0, but you are using stochastic gradient descent, or gradient descent with noise. So xt plus 1 is equal to xt minus eta times the gradient of g at xt, plus some noise-- let's say the noise has scale sigma and multiplies ksi t, where ksi t has mean 0 and standard deviation 1, so the noise is Gaussian with standard deviation sigma. Now, the gradient of 1 over 2 times x squared is just x, so the update becomes xt plus 1 equals 1 minus eta times xt, minus eta times sigma times ksi t. What's happening here is that the first term is a contraction, meaning that it makes xt smaller by a factor of 1 minus eta. And the second is the stochastic term, which may make x bigger or smaller depending on whether you are lucky or not, right? So the interesting thing is: when xt is large, the contraction-- the shrinking-- is dominating. For example, suppose your xt is way out here; then you first contract it by multiplying by 1 minus eta, and then you add some stochastic noise, so you may end up somewhere nearby, but largely speaking you are moving towards 0, because the shrinking is doing most of the work. However, when xt is small-- or maybe xt is 0, for simplicity-- the noise dominates the process. If you start very close to 0 and shrink, it doesn't change much, because 1 minus eta times a small number is still a small number, and the noise will push you either a bit to the left or a bit to the right. So the noise becomes the dominating part when xt is small. And eventually you basically converge to this second regime: if xt is large you move towards 0, so eventually xt becomes small and the noise governs the whole process. So eventually you are just bouncing around at a certain level, right? You cannot bounce around at very large values, because there the contraction is too strong; you wouldn't be able to stay at that level. So eventually, you will be bouncing around at a certain level, depending on the noise level. Right.
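Here is a small simulation of this warmup, as a sketch; the constants are made up, and the level the iterate settles at is made precise by the variance calculation a few paragraphs below:

```python
import numpy as np

# 1D quadratic warmup: x_{t+1} = (1 - eta) * x_t - eta * sigma * xi_t, xi_t ~ N(0, 1).
rng = np.random.default_rng(0)
eta, sigma, T = 0.1, 1.0, 20000
x = 10.0                      # start far from the global minimum at 0
tail = []
for t in range(T):
    x = (1 - eta) * x - eta * sigma * rng.normal()
    if t > T // 2:            # keep only the stationary-looking tail of the trajectory
        tail.append(x)
tail = np.array(tail)
print("mean of tail:", tail.mean())     # close to 0: no systematic bias
print("std of tail:", tail.std())       # empirically close to sqrt(eta * sigma**2 / 2)
print("sqrt(eta * sigma**2 / 2):", np.sqrt(eta * sigma**2 / 2))
```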
So it's kind of like what happens if you have, I guess, if you think about you drop a ball in a kind of a concave kind of thing without any friction. It's not exactly the same because there you don't have really additive noise, but you still see this bouncing around eventually just because you can overshoot a little bit. Yeah, maybe that's not exactly the right analogy but anyway, So eventually, you will bounce around some-- the valley of a certain level. And how do we kind of precisely-- by the way, this sounds like nothing really to do with implicit regularization because eventually, whatever you do, you always stay close to a global minimum because there's no even two other-- there is no two global minimum, right? But the intuition is very useful for future things when you kind of move away from this. So this is indeed important. And let's try to be more precise. And this is actually our case that we can be precise. So we can solve the recurrence. So when we solve the recurrence, what happens is xt is equal to 1 minus eta xt-- xt plus 1 is equal to 1 eta xt minus eta sigma ksi t. And then you plug in the definition of xt again. 1 minus eta 1 minus eta xt minus 1 minus eta sigma ksi t minus 1 minus eta sigma ksi t. And if you rearrange this, you get 1 minus eta squared xt minus 1 minus 1 minus eta times eta sigma ksi t minus 1 minus eta sigma ksi t. And if you do this for another level, you get 1 minus eta cube xt minus 2 minus 1 minus eta squared eta sigma ksi t minus 2. And if you do this more and more, eventually, what you're going to get is 1 minus eta to the power t plus 1 times x0 minus eta sigma times the summation. Summation looks like this. It's a linear combination because ksi t-- ksi k, but the coefficient in front of it is some power of 1 minus eta. So from this, you can see that structurally there are several interesting things about this formula, which can give you some intuition. So one thing is that this thing is a very strong contraction, right? So this is the contraction part, right? And in some sense, this term comes from you construct that initial value by a lot of 1 minus eta so that basically this becomes negligible. This becomes negligible when eta times t is much, much bigger than 1, because 1 minus eta to the power of t is something like e to the minus eta t. And when eta times t is much bigger than 1, then this term becomes super small. And you can also see from the other term this is, in some sense-- you can view this as accumulation, accumulation of the noise. The noise are not just adding up. The noise are accumulated in a certain way. And maybe it is easier even to look at this. So the noise that you added at last step is scaled by eta times sigma. But the noise that you added at the second to last time step is scaled by 1 minus et-- you have additional factor, 1 minus eta. And how does this 1 minus eta come from? This come from the contraction of the second last step. So basically, ksi t minus 1 is what you added in the second last step. And then because you do another gradient step-- gradient descent step on top of that, we still contract the noise a little bit, right? So maybe you can see this from here as well, right? So this is what you got from here. But this minus eta comes from the contraction in the very last step. And the same thing happens, right? So this comes from the contraction in the last two steps. 
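Written in one line, the unrolled recursion just derived is:

```latex
x_{t+1} \;=\; (1-\eta)^{t+1} x_0 \;-\; \eta\sigma \sum_{k=0}^{t} (1-\eta)^{k}\, \xi_{t-k}.
% The first term is the contraction of the initialization (negligible once \eta t \gg 1);
% the sum shows that noise added k steps in the past is damped by the factor (1-\eta)^k.
```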
So basically, every time you add noise at some intermediate step, that noise eventually dies out if you run long enough, just because there is always a contraction applied after the noisy step, right? That's why the coefficients in front of the noise form a geometric series: depending on when the noise was added, the coefficient in front of it becomes smaller and smaller, so you forget the very distant history. If you add noise at the very first step, it doesn't really matter-- that corresponds to k equal to, for example, t minus 1, and that noise gets multiplied by 1 minus eta to the power k, so because of the contraction it becomes less and less important. So that's one thing: the accumulation of the noise prefers the recent history and ignores the long-term history. Another thing is that this is a sum of Gaussians, because each term is a Gaussian under our assumption-- ksi is Gaussian, and ksi times a constant is still Gaussian. And you can compute the variance of this: the variance is eta squared sigma squared times the sum of the variances of the terms, each contributing a factor of 1 minus eta to the power 2k. And the point is that if you take t to infinity, you can work out the limiting variance-- the variance at the end. As t goes to infinity, the variance of xt is roughly eta squared sigma squared times the sum over k of 1 minus eta to the power 2k, which by the geometric series is eta squared sigma squared over 1 minus, 1 minus eta, squared-- that is, eta squared sigma squared over 2 eta minus eta squared. The eta squared term in the denominator is lower order and can be dropped, so this is approximately eta sigma squared over 2-- on the order of eta sigma squared. OK? So in other words, as t goes to infinity, xt eventually has a Gaussian distribution with mean 0, and the variance is on the order of eta sigma squared. So far we haven't really talked about implicit regularization yet, but I think we already got some intuition about what's happening in the convex case. A small learning rate eta means your iterate bounces around less-- small stochasticity in the final iterate-- because the variance of xt is smaller. A small noise level sigma implies the same thing. So basically, what happens is that the noise only makes it harder to converge to the global min. In some sense, if you only care about the quality of the final solution you converge to, the noise always hurts, especially if you are willing to take t to infinity, right? You can see that as t goes to infinity you never converge to exactly the global min-- you always have some variance around the global min-- and you want that variance to be as small as possible, because you want to be as close to the global min as possible. The noise is only a hurdle; it doesn't buy you anything. So this is why, in the classical convex optimization picture, when you think about noise, it's typically only about two things.
A: noisy gradient descent leads to less accurate solutions-- this is the bad part, and that's what we just discussed. And B: noisy gradient descent is faster to compute. Why does the noise come into play at all? Because maybe you only sample a few examples to estimate the gradient-- you do minibatch gradient descent instead of using the full empirical gradient. So noisy gradient descent is faster to compute, and the only thing is that you are trading off these two factors. That's, I would say, the typical way of thinking about stochastic gradient descent in the convex case: noise is bad because it hurts your final accuracy, but you want to allow some noise in certain cases because you can compute faster. If you trade these off in the right way, you get the fastest algorithm overall. And you can imagine how you trade this off: at the beginning of the optimization, you don't care that much about accuracy-- you don't care about converging to exactly the global min, you just want to get close to the global min as fast as possible. That's why at the beginning you don't care that much about noise, and you use a large learning rate. And then when you are already close, your goal changes, because now your goal is to really, literally go to the global min, period. Then you cannot allow much noise anymore, and that's why you decay the learning rate-- that's why there is always this kind of learning rate decay in these algorithms. So, so far, this is about the classical picture. Also, another side remark, which will be a useful comparison for us later: suppose you fix eta and sigma. Then the expectation of x_t always converges to 0 as t goes to infinity. So even though there is stochasticity, there is bouncing around, your average is always at 0. This is saying that no bias is introduced by the stochasticity-- you only introduce some fluctuation. Of course, fluctuation is also bad, but at least you don't introduce any systematic bias toward any particular direction. That's another side remark which we will compare against in a bit. And one more small remark: this process actually has a name. It's called the Ornstein-Uhlenbeck process. If you are familiar with this process from some other context, you can see this is doing the same thing. We are going to call it the OU process for simplicity, and it's going to be a basic building block for us when we analyze SGD in more complex cases. OK. So we have understood the one-dimensional quadratic. Now let's do the multidimensional quadratic, which is not really much different, but I need it for the sake of the future steps. So suppose you have a multidimensional quadratic: g(x) = 1/2 x^T A x, where A is a d-by-d matrix, x is the variable in dimension d, and A is PSD. And for the noise xi_t, let's not assume it is just sigma times a standard Gaussian anymore-- let's assume it is Gaussian with covariance Sigma. And then the update rule-- suppose we care about the process where you do gradient descent with this stochasticity xi_t-- is x_{t+1} = x_t - eta (A x_t + xi_t), because the gradient is just A x_t, and then you add xi_t.
And rearranging this, you get x_{t+1} = (I - eta A) x_t - eta xi_t. You can do the same recursion as before: replace x_t by its expression in terms of x_{t-1}, and do this recursively. Eventually, if you unroll all of it, you get (I - eta A)^{t+1} x_0 minus eta times the sum over k of (I - eta A)^k xi_{t-k}. And you can see this still has the same kind of intuition. The first term is the contraction-- of course, now it's a matrix, but we are multiplying by a matrix whose eigenvalues are all less than 1, so we are contracting in the matrix sense. And the second term is how the noise accumulates, and again the noise in the very far history matters less: if you take, for example, k close to t, then xi_{t-k} is something from the remote history, and that noise becomes less important because there is a contraction applied after the noise is added. So this is a more complicated formula, but you can still do essentially the same thing. You can do a similar calculation if A and Sigma are simultaneously diagonalizable. If they are not simultaneously diagonalizable, you can still do something to simplify the sum, but it's going to be even more complicated. So let's only think about the case where A and Sigma are simultaneously diagonalizable. Then, in some sense, you can just view this as d separate OU processes in the eigenspace-- d one-dimensional problems in the eigen coordinate system-- because in that coordinate system, A and Sigma are both diagonal matrices, and you are basically just updating as in the one-dimensional case. More formally, what happens is: suppose A = U D U^T, where D is the diagonal matrix with the eigenvalues d_i of A, and suppose Sigma = U diag(sigma_i) U^T. Then, as t goes to infinity, x_t roughly comes from a Gaussian with mean 0, because the first part got contracted away, and you can look at the covariance, which is eta^2 times the sum over k of (I - eta A)^k Sigma (I - eta A)^k. This is just because we computed the covariance of each term: for a matrix W, the covariance of W xi is E[W xi xi^T W^T] = W Sigma W^T-- that's how you compute the covariance of a linear transformation of a Gaussian-- and then you sum over the terms. And A is a symmetric matrix, so A and A^T are the same. You can simplify this using the eigendecomposition: I - eta A = U diag(1 - eta d_i) U^T, and Sigma = U diag(sigma_i) U^T. So you can compute the sum: if you look at the kth power, the inner U's and U^T's cancel when you multiply out the sequence, so the kth term becomes eta^2 times U times a diagonal matrix times U^T-- and I guess I should relabel the entries of Sigma at this point.
Let's write them as sigma_i^2 just to make it look nicer: so the kth term is U diag(sigma_i^2 (1 - eta d_i)^{2k}) U^T. And then this is the beauty of the eigendecomposition, because everything is diagonal, so you can take the sum over k inside: you get eta^2 times U diag(sum over k of sigma_i^2 (1 - eta d_i)^{2k}) U^T. Summing the geometric series gives roughly 1 / (2 eta d_i) for each entry, so up to constant factors this becomes eta times U diag(sigma_i^2 / d_i) U^T-- one eta cancels against the denominator. So you can see that, basically, eta sigma_i^2 / d_i is the fluctuation level in the i-th eigenvector direction. Maybe let's be precise about terminology: this is the iterate stochasticity, the fluctuation of the iterate, because what we are computing is the fluctuation of the iterate. And the fluctuation level in an eigenvector direction depends on the noise level sigma_i^2 in that direction, and also on how strong the contraction d_i is. If the contraction is big, you are going to have a smaller iterate fluctuation, because the strong contraction doesn't allow a lot of noise to build up. And if the noise is big, then of course eventually you're going to have a larger fluctuation of the iterate. Another small remark that is useful: this matrix U diag(sigma_i^2 / d_i) U^T is always in the span of Sigma. So if Sigma is low rank-- if in some direction there is no noise-- then in those directions x_t doesn't have any fluctuation either. That will be useful for us in the future. And another thing: if you think about the rough size of x_t, just the norm of x_t, it is on the order of square root of eta, because the quantity sigma_i^2 / d_i doesn't depend on eta. So if you only look at the eta-dependency, the norm of the stochastic fluctuation in the iterate is on the order of square root of eta. That's something good to remember for the moment; it will be useful for us later as well. Any questions so far? [INAUDIBLE] should it also be summed over i? Right. Right. Yeah. But all of those depend on the dimension, for example-- it depends on how large the sigma_i's and the d_i's are. But in terms of the dependency on eta, this is on the order of square root eta. That's what I mean. [INAUDIBLE] Yeah, yeah. Sure, sure. Yeah. I guess I'm only talking about the dependence on eta so far. That's like the standard deviation of x_t, essentially? Sure. Yeah. [INAUDIBLE] square root eta. Yes. Well, the size of x also takes the contraction term into account. So is this for large t, so that the contraction term is sufficiently small? Yes, I'm talking about the case where t goes to infinity. So maybe one way to think about this-- I think I sense what your question is. This is the fluctuation in the iterate when t is essentially infinity, which is different from the noise you add at each step. And that's actually a very good question: look at the noise that you add at each step-- how large is that?
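As a quick numeric check of the stationary covariance just derived (my own sketch, not from the lecture; U is taken to be the identity, and the eigenvalues d_i and noise levels sigma_i are arbitrary), the empirical covariance of the late iterates should be roughly diag(eta sigma_i^2 / (2 d_i)), i.e. the eta times sigma_i^2 / d_i scaling from the board, with the constant 1/2 made explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
d_vals = np.array([2.0, 0.5])     # eigenvalues d_i of A (hypothetical values)
s_vals = np.array([1.0, 1.0])     # noise std sigma_i in each eigendirection (hypothetical)
A = np.diag(d_vals)               # take U = I, so the eigenbasis is the standard basis
eta, T = 0.01, 400_000

x = np.zeros(2)
tail = []
for t in range(T):
    xi = s_vals * rng.standard_normal(2)   # noise with covariance Sigma = diag(s_vals**2)
    x = x - eta * (A @ x + xi)             # SGD step on g(x) = 0.5 * x^T A x
    if t > T // 2:
        tail.append(x.copy())

emp_cov = np.cov(np.array(tail).T)                   # empirical stationary covariance
pred_cov = np.diag(eta * s_vals**2 / (2 * d_vals))   # ~ eta * sigma_i^2 / (2 d_i)
print(np.round(emp_cov, 5))
print(np.round(pred_cov, 5))
```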
The noise you add at each step is on the order of eta-- ignoring, of course, all the other dependencies except eta. So each step you add some noise on the order of eta, and eventually all of this noise builds up: the terms get added together, and they add up to something on the order of square root of eta. That's how the noise accumulates. But it doesn't accumulate to infinity, because of the contraction-- the contraction term also shrinks the noise to some extent. Still, the noise builds up to something half an order higher in terms of eta: it accumulates from eta per step to square root of eta over time. OK. So we have a pretty good understanding of what's happening. Basically, eventually the iterate is bouncing around with a radius of roughly square root of eta in the valley of this quadratic. And also, you don't bounce around in those directions where no noise is added. So that's the picture. And now let's look at-- Is there a reason-- can we map the idea of noise in this setting back onto minibatch stochastic gradient descent in a natural way, or is that not the [INAUDIBLE]? So you want to connect back to the world where we have minibatch gradient descent? For the convex case, it's not that difficult. Basically, what you say is: what is Sigma? In our calculation, in our definition, Sigma is the covariance of the noise in the gradient, right? You can compute the covariance of the noise when you use minibatch gradients. You can compute that, and it is something that might change over time. But I think you can pretty much say that when you are close to the global minimum, the change in the covariance of the minibatch gradient is negligible-- it's even higher order, so you can basically ignore it. So if you want to map this back to the minibatch setting, this Sigma just maps to the covariance of the minibatch gradient at the global min theta star. Then you can rephrase everything. But I don't think you get anything super interpretable anyway, so that's why I didn't get into it. [INAUDIBLE] it just seems like, if the global minimum is very flat in some dimension, the variance would have a very large effect. Yes, exactly. Exactly, exactly. That's exactly correct. I think this is actually a very good question. So suppose you have two dimensions: one direction which is sharp, like this, and another direction which is flat, like this. The question is, how does the noise affect these two directions? And there's also a question of how you evaluate the impact of the noise-- what metric are you thinking about? So far, I'm thinking about how the noise changes the fluctuation in the iterate. Suppose I'm adding the same amount of noise, one unit of noise, in both of these cases. I think it's indeed true that stochastic gradient descent itself will fluctuate more in the flat case-- actually, it probably wouldn't look exactly like this, it would look like some stochastic path like this-- but you are going to have a larger radius of bouncing around there, and in the sharp direction you are going to have a smaller radius; you are going to stay closer to the valley.
However, even if you have a larger radius here, it doesn't necessarily mean that you have a larger effect on a function value because you fluctuate a lot, but the function is flat, as well. So it's OK to fluctuate more in some cases. So I think let's see whether we can compute the fluctuations. So suppose you have sigma squared over i di squared. This is the radius of the fluctuation. And you multiply-- what do you multiply? You multiply di because di is the curvature of your open function. So this is sigma i squared. This is something that doesn't depend on the curvature. di is the curvature, so this is kind of like x squared, the fluctuation you have. So if you look at the effects to the function value-- and it may not depend on the curvature that much, at least not for the quadratic. Yeah. Right. So that make sense? OK, cool. All right. So now let's talk about nonquadratic function. And then this is kind of where the things become interesting, but it's interesting only on top of what we have discussed. That's why we need to have the warmup. So nonquadratic-- and so far, I'm still doing kind of like-- you can still think of this as a convex function, even one-dimensional convex form so far. I'm going to change that a little bit. And again, for simplicity, let's say, with the loss of generality, let's assume the global minimizer of this g(x) is just the 0 back here right? So we still have 0 as the global minimizer. We are still doing something around 0. And I think I'm using a matrix notation right now here, but I think I realized that in the matrix notation-- oh, I remember. OK. So the reason why I'm using matrix rotation's because I don't have to do the two things there, the scalar case in the matrix case. But for simplicity, you can-- in your mind, you can pretty much interpret all of these as scalars. OK, so I'm also seeing that-- because 0 is the global min, then that means that the gradient at 0 is 0, right? That's a necessary condition. And also, that means that the Hessian g(0) is PSD. OK. And let's also assume-- this is the part where they kind of like become not super rigorous, but we can make this part rigorous. It's just that I wouldn't have time to do all the rigorous stuff. But this part is doable. So suppose we'll assume the iterate are close to 0-- so start from somewhere that's close to 0. And then you can do Taylor expansion around 0. So what you do is you do xt plus 1 is equal to xt minus eta times gradient xt plus ksi t. And you do Taylor expansion to approximate the gradient at ksi t-- at xt. So how do you do a Taylor expansion? So if you take expand at 0, what have you got? You're going to get the nabla g(0) plus nabla squared g(0) times xt minus 0 and plus nabla cube g(0) at xt xt. And maybe let's have also high order terms, which we are going to ignore heuristically, and then we're going to get epsilon t [INAUDIBLE].. So I guess if you're not familiar with the matrixing, then I guess this is really just saying that g prime xt roughly goes to g prime 0 times xt plus g prime 0 times xt squared plus g cubed third order of theta times-- wait, what I'm doing here. So there is no xt here. There's xt. And xt squared plus in high order terms, something like this, right? But I want to use notation that this-- if you do the matrixing that this is the matrix vector product. And this is a tensor vector product. Let me explain that a little bit. So if you do the multidimensional case, this is a third-order tensor of dimensions d by d by d. 
And suppose you have a T that is a third-order tensor. Then I'm using the notation T[x, y], where x and y are vectors, for the multiplication of this tensor with two vectors, and it is defined to be a vector. First of all, it is a vector. And second, the definition is that the i-th entry of it is the sum over j and k of T_{ijk} x_j y_k. So basically, you sum over the remaining indices j and k, you leave the i alone, and that's the outcome. So this is the Taylor expansion in multiple dimensions. OK. By the way, just a reminder for the scribe note takers: for these small things I write on the side, please also take notes, because they are useful for readers as well. If someone doesn't have time to attend the lecture and reads the lecture notes, these small explanations are useful; you can just put a small remark in the margin of the paragraph. So, all right. So we have done the Taylor expansion, and we are expecting something similar to what we had before. And indeed you will see that, because the first term-- the gradient at 0-- is 0. So basically, what you get is x_{t+1} = x_t - eta times the expansion. OK, so let me define, for simplicity, H to be the Hessian of g at 0. Then you can rewrite this as x_t - eta H x_t - eta xi_t, minus the third term. I'm also going to define T to be the third-order derivative at 0, so the third term is eta T[x_t, x_t]. And the higher-order terms-- let's ignore them from now on. I know we had an exact formula before; here we just have an approximation. So this is (I - eta H) x_t - eta xi_t - eta T[x_t, x_t]. And I think you can see-- what I was hoping for you to see-- is that the third-order term is the new thing, but the first two terms are not new: the noise term and the contraction term are exactly what we had before. In the quadratic case you have the contraction, which is linear, and you have the noise. Now the only difference is that you have an additional term from the third-order derivative. And that's expected, because if you ignore the third-order term, it just becomes quadratic again. That's why we wanted to expand out to the third order-- because we really want to use the fact that this is not a quadratic function. So basically, you can think of this as two processes going on: one process is the OU process, the basic one from the quadratic case, and then you have an additional term that makes it a little more complicated. And how do we proceed here? This is a heuristic derivation. In certain cases, it's tempting to just drop the third-order term, because maybe it is a small term. So let's try to do that-- just drop it, all right? Suppose you drop it. Then you have the process where x is updated by x_{t+1} = (I - eta H) x_t - eta xi_t, which is something we have analyzed, and we know that, at convergence, x_t will be something on the order of square root of eta. Here I'm ignoring all the dependencies except the dependency on eta.
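For readers following along in code, here is a small sketch (my own; the tensor entries and vectors are random placeholders) of the notation just defined-- the tensor-vector-vector product T[x, x], and the tensor-matrix contraction T(S) that shows up later in the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
T3 = rng.standard_normal((d, d, d))   # placeholder for the third derivative of g at 0
x = rng.standard_normal(d)
S = rng.standard_normal((d, d))

Txx = np.einsum('ijk,j,k->i', T3, x, x)   # (T[x, x])_i = sum_{j,k} T_ijk x_j x_k
TS = np.einsum('ijk,jk->i', T3, S)        # (T(S))_i   = sum_{j,k} T_ijk S_jk  (used later)
print(Txx.shape, TS.shape)                # both are vectors of dimension d
```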
And now, if you look back on what happened with this third-order term, so when xt is on this order. So eta T xt xt. What is this? This is on order of eta squared because xt contributes square root eta. This actually contributes square eta, and there's an eta here. So basically, we have an eta square term which sounds very small. Why this is very small? This is much smaller. So eta squared is much, much smaller than, for example, eta ksi t, right? But that probably is not unfair because ksi t is doing some random stuff. But eta squared is also much, much smaller than even just eta H xt, which is on the order of eta to the 1.5. So basically, the changes of your process where the two other changes of your process-- is these two terms. Right? And this term you can say comparing with that is a little bit like unfair because that term is doing some random stuff, right? So maybe you shouldn't compare it with the absolute value of it just because eventually there will be some cancellation. But at least you can pair it with the other deterministic term eta H xt. You are still-- this eta squared term is still much smaller than the deterministic term. So in some sense, it's very tempting to say that OK, this third order term is tx xt xt thing A is very small. Right? So the conclusion would be that-- the conclusion is that this is kind of negligible. And indeed, it's true when-- and this is indeed true. This is negligible. And indeed, it is true under one condition. When the H, the Hessian, is strictly PSD. So that's when you have contraction in all different directions. So however, when H is not strictly PSD-- so for example, in some direction-- so basically, in other words, if you think about this-- so this term is only on order of eta to 1.5, where H is not 0, right? So if H is 0 in some direction, then this eta H t term is just literally 0. So then the eta squared term is winning, right? So basically in some direction where H is 0 then eta H xt is just a 0 in that direction. And then eta squared becomes the largest update. [INAUDIBLE] Eta ksi t is always the largest if you really kind of look at the absolute value, right? So eta ksi t is on the order of what? This is on the order of just the eta. It's always the largest, but I think I-- I kind of-- I try to-- I'm trying to argue that eta ksi t if you compare with that, it's a little bit kind of like misleading in the sense that eta ksi t is doing random stuff, right? So in one step, it's going positive direction. The other step, it's going to negative direction. So eventually basically, what happens is that if you have a random stochastic term-- suppose you have a stochastic term or min 0 term, such that, if you have one step, it's on the order of eta implies eventually like [INAUDIBLE]-- this will be something like on the square root of eta. That's kind of like what we have discussed in the quadratic case right? If every step is stochastic term, it's going to give you a eta noise perturbation. And then, eventually, they will build up to square root eta. However, when you have a deterministic term-- so if one step is something like eta, then eventually it's unclear what will build up. It probably would build up to eta T because eta times little t is a-- It won't kind of like-- they won't cancel. Maybe this is a little bit-- I'm not sure whether this is the best way to explain it, although I say this. So another way, this is a heuristic because it requires a little bit more-- if you want to formalize all of this, it requires a little more work. 
But I guess what I'm-- maybe what I'm saying is basically like the low equality. The largest update. But in all cases, by locally the oldest update of course, is eta ksi t. But this one will have cancellation over time, because in future, you're going to have like a-- you're going to move in different directions. So that's why it's probably good to also compare with the deterministic changes, which is the eta H xt. And then when you compare it with that, typically the deterministic changes, is bigger than the eta squared term from the third-order derivative. But when H is 0 in some direction, then it's no longer achievable. But you can-- sometimes you can prove that if H is-- so when H is strictly positive, it's nonzero, then it's negligible. And otherwise, it becomes trickier. So if H has a completely flat direction, it becomes tricky. So I think here is a good point. Maybe let's just continue with this. So when this is the case, then-- so the third-- so in this case, the third-order term will introduce some biases but very small, some bias but very small. And small in the sense that, as eta goes to 0, this becomes negligible. And I think I have some kind of like figures here. And so I have this figure. Let's see whether you can see it. Yes. So this is a little bit small here. Maybe this way. So the function is a one-dimensional function, is a complex one. So I'm in the case where the H is strictly bigger than 0, so H. Because it's one-dimensional, this is strictly convex function. So this is the function, but it's not like a quadratic. It's something like-- I think it's quadratic on both sides, but it is not the same kind of curvature. The left-hand side is more flat, and the right-hand side is more sharp. And if you do gradient descent, so I guess probably the only thing important is this. So if you look at-- this is after you take 100 to 1,000 steps of stochastic gradient descent. And you can see that the mean, the iterate is bouncing around between. This is the distribution of the iterate, distribution of the xt when t is 1024. So 1024 is pretty big, considered to be infinity, right? And you can see that it's bouncing around 0. 0 is the global minimum. But the mean is no longer 0 anymore because you have the third-order kind of like derivative. And the mean is something left to 0. In some sense, you prefer the left-hand side a little bit more than the right-hand side because it's easier to stay on the left-hand side. The left-hand side is lighter, so it's easier to stay there because the contraction is weaker. And the right-hand side is sharper and is-- you add some noise, and you kind of contract it. And you go back to the 0 quickly, more quickly. So that's why the bias is towards that left-hand side, where you have lighter curvature. But the bias is relatively small. You can see like-- you can even say this is negligible because at least you know, if you do take a random point, you're going to take something between minus 0.05 to 0.05 maybe. And the bias is only a very small number. Anyway, you are-- your step-- your fluctuation is bigger than the bias, also. So that's why in the classical kind of like in the classical kind of like optimization settings people didn't really pay too much attention to this. I think they are-- some papers I guess-- so I guess there is this paper by Bach 17 bridging the gap I guess between constant step size SGD. Any more questions? So this paper characterized this effect for convex case. 
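Here is a small simulation in the spirit of the figure just described (my own toy loss, not the exact one used in the lecture): a one-dimensional convex loss whose curvature is 1 to the left of the minimum and 4 to the right, with arbitrary choices of eta and sigma. The mean of the iterates after many steps comes out noticeably negative, i.e. biased toward the flatter side, while the fluctuation is still larger than the bias:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_g(x):
    # curvature 1 to the left of the minimum at 0, curvature 4 to the right (my choice)
    return np.where(x < 0, 1.0 * x, 4.0 * x)

eta, sigma, steps, runs = 0.05, 1.0, 1024, 20_000   # arbitrary illustrative values
x = np.zeros(runs)                                  # many independent SGD runs in parallel
for _ in range(steps):
    x = x - eta * (grad_g(x) + sigma * rng.standard_normal(runs))

print(x.mean(), x.std())   # mean is below 0 (bias toward the flat side); std exceeds the bias
```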
And you can see from the title of this paper, it's talking about constant step size. And why you have to talk about constant step size just because this will go-- if you decay those steps size, then this bias effect will be even smaller. And it will be negligible, just completely gone eventually. So that's why you have to make this even somewhat useful, somewhat can make a difference you have to make the stuff that's not going to 0. That's why in the combat phase, people typically don't care about this that much. In some other cases, you care about this a little bit. This figure is from one of my recent papers with some students at Stanford. Hi, guys. So and here the reason why we talked about this is because you have multiple machines. And for some other reasons, you have to care about it. But typically, you wouldn't really care that much about it. It's just because the bias is small. OK. So now let's go back to-- now let's move on to-- OK. Finally, we are moving to the place of regularization impact. So the more complex case. I'm writing too fast. Too cursory, I guess. With stronger implicit reg. And these are cases where both H and sigma are not full rank. So your Hessian and your noise are both somewhat not four-dimensional. And this is not something to be super surprised. This is called part that comes from overparameterization. Especially, I think, it's easier to think about the Hessian. If you have a manifold of global minimum, then along that-- the direction of the manifold your Hessian will be 0. So I thought you have a lot of different global minimum. Then your Hessian will be flat. It will be 0 in certain directions. And let's suppose-- so for simplicity, I may not discuss when this can happen exactly because you need some calculations so and so forth. But suppose when I say H and sigma are both in a subspace K and the subspace K is low-dimensional or is not full-dimensional and if the loss is quadratic-- for the moment, let's still think about the loss as quadratic. I guess we have to conclude this. We said that the iterate will have 0, something like this. Recall that this is our calculation, sigma squared di U transpose. And so kind of the picture, I think, is that there is no-- so basically, you have no noise and no contraction, nothing in the perpendicular space of k. So in some sense, I think the function look like this. So suppose you have some direction of K. This is the direction of K, and this is the direction of K perp. And suppose your function is quadratic in dimension of K, something like this. I'm not sure whether you can-- I think my drawing is too bad. So imagine a valley. This is a-- I'm drawing a valley like this. But this valley is completely kind of like oblivious to dimension of k per. So this thing is the middle of the valley. This Is the middle of the valley. So basically then what happens is that if you start somewhere here, everything happens in the direction of K, and nothing happens in the direction of k per. So you're basically bouncing around the direction of k. So basically, you maybe go here, here, and go about to do some bouncing like something like this. But you never move in the direction of k per. So in k per, it's kind of like you just know nothing. You know nothing, or you don't move at all. So let's do not kind of like-- that's a little bit implicit bias in place of requisition because the implicit requisition comes from what? Comes from the transition. If you start with this point, then you're going to stay in this part. 
But if you start here, then you're going to bounce around here, right? And this is exactly what happens when you have overparameterized linear model, because when you have overparameterized linear model, you never leave the subspace. It may never leave some subspace. And in other subspace, you'll never move. So this is not the most important thing about noise because noise doesn't really do much. It's really just that you cannot leave a certain subspace. However, when your loss is quadratic, when your loss is not quadratic, then the third-order term is going to matter. So this is the main thing that I want to kind of like commit today, but, unfortunately, just because this is complicated. So I probably wouldn't be able to do everything rigorously. So I just really can't do everything rigorously. So what happens is that if the loss is not quadratic, then-- recall that what happens is that you have xt plus 1 is equal to 1 minus eta H xt minus eta ksi t minus eta T xt xt plus high-order term. And this is happening. So this is working in K because I assume that H is working in K, and the noise is always in K in a subspace. So this left part is we're always working in K. You are bouncing around in K. And this is working in K perp. And that makes them kind of complete separate, so there's nothing you can control the third-order term. The third-order term can build up for a long and long-- very long time. So maybe this is the one. Let me see. So basically, let's see. I probably will go to this figure multiple times. Right. So this is what's happening here. So I don't think I can-- I don't think I can draw anything here. But maybe first watch it. And then I'm going to go to a static figure so that you can-- I can annotate. So this is a stochastic gradient descent in this valley. And you can see that it's moving in this valley. So now let's look at the static figure. I have one somewhere. So in our mathematical language-- so this direction-- let's see a different color. So this is the direction of the K perp, and this is the direction of K. So this direction is K. OK? But here this is not a quadratic because-- at least this is-- it is not a quadratic because your-- at least the K perp direction doesn't matter to some extent. Because the K perp-- you can see that if you go from here to here, you're going to go to flatter and flatter region, right? So what happens is that most of your work is in the K direction. You are just bouncing around in the K direction. But there is some certain term that drives you in the K per direction. And that can build up eventually for a long time. Recall that you start from here. You do a lot of bouncing around. But eventually, after you bounce for a so long time, you move in the K per direction. And this is because the third-order term is accumulating for a long time until you go to the flatter region. So the min term is doing the bouncing, and the lower term is accumulating in the direction of the valley. Any questions so far? I'll go back to this bigger problem once again just once I do a little bit more math. If we sort of know that this is happening, do we want to do this first [INAUDIBLE] direction and then-- is that a feasible thing you can do with it? Yeah, that's a good question. So if you know this is what's happening. Why not just do something more explicit to make it faster, right? I think there are several-- let me see. So there are several things that-- it's a good question, but this is not something super new. 
People have thought about it, and I have thought about it. I think there are multiple constraints we have to kind of respect. I still think this is a feasible thing, direction to go, but it's not easy. And I don't think there's an existing paper that can really achieve this very well. So one of the thing is that-- so how do you go to the valley? So what you're going to do? You want to go to the valley. You want to compute the direction of K perp, and you go there, right? So how do you go to the valley? I think that's not too hard because you have to use but not trivial. Because to go to the valley, you have to either decay on your rate or make your batch size bigger so that you have smaller norms. But that requires more compute because you want to be more accurate. Sometimes you want to be more accurate in a K direction, so it requires more compute. So that's one small thing, right? So whether you really can afford to compute to really go to the valley in the first place. I think you can probably. In most cases, you can. But there's not like a-- for free, so you do have to consider the cost. And then you go to the valley, and then you do the-- you do this, right? You move in a K perp direction. But the problem becomes that the real picture is not just one single-- this is only a local view. So once you go to here maybe-- if locally, it sounds great. I'm going to a better place. But maybe there is-- actually, this function has a lot of other parts. So actually, I have to travel really far, far, far away, somewhere else. So then you have to do this again, this local view again, and then try to do it and so on and so forth, right? So then it becomes a-- then you have to also find a new valley and then probably find the direction of the K perp. And also finding the direction of K perp is also not mentioned because it requires completing a third-order derivative. Continuous third-order derivative on one example is still OK. It cost you-- computing high-order derivative on one example takes you a constant factor more than computing the first-order derivative. This is a very interesting thing about deep learning. So computing any derivative give you almost-- requires almost the same time as computing the first derivative as long as your output is a vector. But I do-- you do have to pay a constant factor, something like two or three times more compute. And also, you have to do this for-- and this T, this K per direction, to get it exactly, you also have to do a full batch, full [INAUDIBLE] so that the third-order term is the third-order derivative of the full function, of the population, of the population function, of the full empirical function. So if you use this minimax thing, then maybe you wouldn't get the K perp direction very accurately. So there's a bunch of decisions which makes it complicated. We don't even know exactly which one is the bottleneck, so it's a little bit tricky. So whoo, but that's a great question. Yeah. So we tried very hard to somehow do this for quite a while already. Yeah. So OK, all right. So now let's see. So I think I would do a little more math just to kind of give you a small feeling about how we perceive to analyze this. And the way to analyze this is that you somehow view this, as I said, two things. So you first define a competitive process, which is easier to define, Ut plus 1 to be 1 minus eta H Ut minus eta ksi t. And this is where it's understood because basically you are doing optimization on the quadratic approximation. And we have done this already. 
And then you characterize the difference between them: x_t minus U_t, which we define to be r_t. So basically, the main question is what r_t is doing, and we can compute the recursion for r_t. If you plug in the definitions of x_{t+1} and U_{t+1}, you get (I - eta H)(x_t - U_t) - eta T[x_t, x_t], plus higher-order terms, and this is (I - eta H) r_t - eta T[x_t, x_t]. The interesting thing is that you still have the contraction, and you have this second term, which is the bias or the regularization effect, but there is no noise anymore-- no stochasticity. There is still a little bit of stochasticity hidden inside x_t, but at least you don't have the xi_t term that was added explicitly, just because you are taking the difference with the stochastic trajectory. And you can actually replace the x_t here as well, because you can claim this is close to the version where you plug in U_t instead of x_t. This is just because x_t and U_t are somewhat similar. Of course, you would want to understand the exact difference, but at this level of precision-- especially because there is already an eta in front-- the further differences become higher order, so you can drop them. So for now, what happens is: if you look at the difference in the subspace K, which is the span of H, this is still a contraction. You have some additional bias, but the bias gets corrected by the contraction eventually. However, in the K perp subspace, the contraction is gone. If you project everything onto the K perp subspace, then H doesn't have any effect anymore, because H has nothing to do with the K perp direction; you just get the projected recursion. So now the thing is really simple: in the K perp subspace, you are just taking the previous r_t, projected, plus something new. You don't even have any contraction. So if you do this recursively, you are going to get the K perp component of r_0 minus eta times the sum of the projected third-order terms. And now the question becomes, how do you understand the sum of the third-order terms? By the way, I never told you where the third-order term is going-- I only claimed that there is a third-order term; I didn't say which direction it points in. So now the question is where the third-order terms are going on average, in the long run, over time. We can ignore the projection-- it's just a restriction to the subspace-- and look at the sum of the terms T[x_k, x_k] and where they are going. So first of all-- and this is a heuristic-- let's define S to be something like the limit of the covariance E[U_k U_k^T] as k goes to infinity, and also assume that U_k mixes like a Markov chain. I'm not sure whether you are familiar with Markov chain mixing, but just assume that U_k is doing the bouncing around-- it's essentially like a Gaussian-- and S is the covariance of that Gaussian. And then the sum of the third-order terms-- you can rewrite it as roughly little t times T applied to the expectation of U_k U_k^T, that is, roughly t times T(S).
So what I'm doing here is: suppose you have a variable u drawn from a Gaussian with covariance S. Then the expectation of T[u, u]-- let's look at the i-th coordinate-- is the expectation of the sum over j and k of T_{ijk} u_j u_k. You switch the sum with the expectation and get the sum over j and k of T_{ijk} E[u_j u_k], which is the sum over j and k of T_{ijk} times the (j, k) entry of E[u u^T]. And that is what I denote by T applied to the matrix E[u u^T]: you can also apply the tensor to a matrix, and the definition is just that (T(S))_i is the sum over j and k of T_{ijk} S_{jk}. Anyway, this might be a little too much notation for this course, but basically you can identify what you have-- and I guess there is an eta in front, my bad. The little t comes from the fact that you take t steps; that's where the t comes from. And the eta comes from the eta in the update. So the accumulated drift is something like eta times t times the tensor applied to the averaged, mixed covariance. So basically, the question becomes: what is T(S)? If you know T(S), then you know which direction you are drifting in and how far you go-- you move by roughly t times this direction, because you take t steps. So the final question is what this T(S) is. And this is very informal, and not even exactly correct-- to fix it you need something a little more careful. So the bias direction is minus T(S)-- remember, T is the third-order derivative, and S, for the moment, is just some matrix. And you can rewrite this: T(S) is the gradient, in x, of the inner product of the Hessian with S. So T(S) = nabla_x <nabla^2 g(x), S>; that's an identity. And in some sense, you can argue-- this is a heuristic argument, and actually not even a 100% correct argument-- that the minus T(S) term is trying to make <nabla^2 g(x), S> smaller, because you are moving along the negative gradient of that function. So let's define R(x) = <nabla^2 g(x), S>. You are moving in the negative nabla R(x) direction, so you can argue that the additional bias is trying to make R(x) smaller by moving along the gradient of R(x). And eventually, if you work out all the subtle details, with a lot of additional steps, fixes, and assumptions-- and I don't have time to go through all of this, as we are already running late-- you can prove something like the following in certain cases. Let me just write down what kind of formula you can prove: SGD with so-called label noise-- I didn't tell you what label noise means, and it doesn't matter for now; it's one particular kind of noise, not exactly the minibatch one, just some additional noise-- converges to a stationary point of the regularized loss L hat plus lambda R, where R(theta) is roughly equal to the trace of the Hessian of the loss. Yeah, so there is no need to understand all the details here. There are other subtleties, other assumptions, and so forth. I just want to give you a taste of what kind of theorem you may hope to prove.
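To give a concrete feel for this kind of statement, here is a toy construction of my own (not the lecture's example, and not literal label noise): a valley of global minima {x = 0} whose sharpness depends on the flat coordinate y, with gradient noise added only in the x direction, playing the role of the noise supported in K. Running SGD on the unregularized loss, y slowly drifts toward 0, which is exactly where the trace of the Hessian on the manifold, 1 + y^2, is smallest:

```python
import numpy as np

rng = np.random.default_rng(0)
# Loss g(x, y) = 0.5 * (1 + y**2) * x**2; every (0, y) is a global minimum.
eta, sigma, T = 0.05, 1.0, 20_000   # arbitrary illustrative values
x, y = 0.0, 2.0                     # start on the manifold of minima, at a sharp point

for t in range(T):
    gx = (1 + y**2) * x + sigma * rng.standard_normal()   # noisy gradient in the K direction
    gy = y * x**2                                          # exact gradient in the K-perp direction
    x, y = x - eta * gx, y - eta * gy
    if t % 2_000 == 0:
        print(t, round(y, 3))       # y drifts from 2 toward 0, the flattest point of the valley
```

The drift here is exactly the accumulated third-order effect quantified by eta times t times T(S) above, under my simplifying choice of loss and noise.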
So basically, you are saying that, if you run SGD, OK, this is on the original loss, L hat. So if you wear a certain kind of SGD on the unregularized loss. It converges to the stationary point of a regularized loss. So that's why you get this regularizer for free. And what regularizer it is-- here, the regularizer is the trace of the Hessian is something about the flatness of the loss L hat, right? The Hessian is the curvature. The trace of the Hessian is about the flatness at that point. So you are implicitly encouraging the flatness of the loss function. But this has a lot of things hidden here. And actually, I think I'm missing a few kind of important question, important assumptions. I'm not writing down some of the important assumptions just because they are not-- it takes too much time to write It down. But this is kind of like something we may hope to prove in some other cases. OK, any questions? [INAUDIBLE] That's a great point. So the question-- just to rephrase the question. The question is that whether the even high-order derivative, the fourth-order gradient would increase the bounds. I think on the conceptual level, if your third-order thing is not 0, then I think the fourth-order one wouldn't matter that much. And if the third-order term is 0, I think, indeed, there should be a fourth-order-- the fourth-order term would have an effect. But so far we're not thinking about that. We are assuming the third-order term is doing something non-trivial. so that the fourth-order term will be dominated by the third-order term. I see. It seems like [INAUDIBLE]. In the last theorem, the stationary point is stochastic. So regular [INAUDIBLE] Hessian instead of the [INAUDIBLE].. Oh, I see. Yeah. So the question is, why the regularization is the trace of Hessian. This is because when a regulator is over the second-order term, the second derivative, then the direction you want to move to is the gradient of the regularizer. So when you have a regularizer, what do they really mean? It means that you should move to the direction of the grid of the regularizer. That's how they match up. So actually, the direction you really move to is the third-order derivative, depends on the third-order derivative of the loss function. And then the second-- so the Hessian becomes the derivation and then the corresponding term [INAUDIBLE]. So I guess there are two views. One view is that you look at it on the regularizer level. Then currently it's the second-order term, the second-order derivative of the loss. And another view is that you look at the actual, the iterate space. The current is the third-order derivative, it's about third-order derivative of the loss. And supposing that you're in the iterate space, the third-order derivative manages. Then you have to talk about the fourth-order derivative of the loss. And in that case, the regularizer probably will be above a third-order derivative of the loss. It's because your regularizer is always one order up compared to the direction you move to. This makes sense now? [INAUDIBLE] the SGD [INAUDIBLE]. So what's special about that? [INAUDIBLE] Yeah. So why a flat stationary point is better? Right. So I think-- [INAUDIBLE] Why do we spend or not? So I think I'm going to talk about that immediately next in the beginning of the next lecture. And the answer is that we do believe-- is generally [INAUDIBLE]. It depends on some-- it kind of relates to the Lipschitzness of the models-- I'll discuss more next week-- on Wednesday. OK, bye.
AI_LLM_Stanford_CS229
The_Rise_of_The_Machines_John_Etchemendy_and_FeiFei_Li_on_Our_AI_Future_Uncommon_Knowledge.txt
The year was 1956, and the place was Dartmouth College, in a research proposal, a math professor used a term that was then entirely new and entirely fanciful, artificial intelligence, there's nothing fanciful about AI anymore. The directors of the Stanford Institute for Human Centered Artificial Intelligence, John Etchemendy and Fei-Fei Li on Uncommon Knowledge now. [MUSIC] >> Peter Robinson: Welcome to Uncommon Knowledge, I'm Peter Robinson, philosopher John Etchemendy served from 2000 to 2017 as provost here at Stanford University. Doctor Etchemendy received his undergraduate degree from the University of Nevada, before earning his doctorate in philosophy at Stanford. He earned that doctorate in 1983, and became a member of the Stanford philosophy department the very next year. He's the author of a number of books, including the 1990 volume the Concept of Logical Consequence. Since stepping down as provost, Doctor Etchemendy has held a number of positions at Stanford, including, and for our purposes today, this is the relevant position. Co-director of the Stanford Institute for Human Centered Artificial Intelligence. Born in Beijing, Doctor Fei-Fei Li moved to this country at the age of 15. She received her undergraduate degree from Princeton and a doctorate in electrical engineering from the California Institute of Technology. Now a professor of computer science here at Stanford, Doctor Li is the founder once again of the Stanford Institute for Human Centered Artificial Intelligence. Doctor Li's memoir published just last year, The Worlds I See, Curiosity, Exploration, and Discovery at the Dawn of AI. John Etchemendy and Fei-Fei Li, thank you for making the time to join me. >> Fei-Fei Li: Thank you for inviting us. >> Peter Robinson: I would say that I'm going to ask a dumb question, but I'm actually going to ask a question that is right at the top of my form, what is artificial intelligence? I have seen the term a hundred times a day for, what, several years now, I have yet to find a succinct and satisfying explanation. Let's see, let's go to the philosophy, here's a man who's professionally rigorous, but here's a woman who actually knows the answer. Yeah, and she knows the answer [LAUGH] >> John Etchemendy: So, let Fei-Fei answer, and then I will give you a different answer. >> Peter Robinson: Really, all right. >> Fei-Fei Li: Okay, Peter used the word succinct, and I'm sweating here. So, because artificial intelligence by today is already a collection of methods and tools that summarizes the overall area of computer science that has to do with data, pattern recognition, decision making in natural language, in images, in videos, in robotics, in speech. So, it's really a collection, at the heart of artificial intelligence is statistical modeling, such as machine learning using computer programs. But today, artificial intelligence truly is an umbrella term that covers many things that we're starting to feel familiar about, for example, language intelligence, language modeling or speech or vision. >> Peter Robinson: John, you and I both knew John McCarthy. >> John Etchemendy: Right. >> Peter Robinson: Who came to Stanford after he wrote that, used the term, coined the term artificial intelligence. Now, the late John McCarthy, and I confess to you, who knew him as I did, that I'm a little suspicious of the term because I knew John, and John liked to be provocative. And I am thinking to myself, wait a moment, we're still dealing with ones and zeros. 
Computers are calculating machines, artificial intelligence is a marketing term. >> John Etchemendy: So, no, it's not really a marketing term. So, I will give you an answer that is more like what John would have given. >> Peter Robinson: All right. >> John Etchemendy: And it's the field, the subfield of computer science that attempts to create machines that can accomplish tasks that seem to require intelligence. The early artificial intelligence were systems that played chess or checkers even, very, very simple things. Now John, who you know him, was ambitious, and he thought that in a summer conference at Dartmouth, they could solve most of the problems [LAUGH]. >> Peter Robinson: All right, let me name a couple of very famous events, what I'm looking for here, I'll name the events, we have in 1997, a computer defeats Garry Kasparov at chess, big moment. For the first time, Big Blue, an IBM project, defeats a human being at chess. And not just a human being, but Garry Kasparov, who, by some measures, is one of the half dozen greatest chess players who ever lived. >> Fei-Fei Li: Mm-hm. >> Peter Robinson: And as best I can tell, computer scientists said, yawn, things are getting faster, but still. And then we have, in 2015, a computer defeats Go expert Han Fei. And the following year, it defeats Go grandmaster, Lee Sedol, I'm not at all sure I'm pronouncing that correctly. >> Fei-Fei Li: It's Sedol, yeah. >> Peter Robinson: In a five-game match, and people say, wow, something just happened this time. So, what I'm looking for here is something that a layman like me can latch onto and say, here's the discontinuity. Here's where we entered a new moment, here's artificial intelligence. Am I looking for something that doesn't exist? >> John Etchemendy: No, no, I think you're not. So, the difference between Deep Blue and-. >> Peter Robinson: Which played chess. >> John Etchemendy: Which played chess, Deep Blue was written using traditional programming techniques. And what deep blue did is it would, for each move, for each position of the board, it would look down to all the possible- >> Peter Robinson: Every conceivable decision tree. >> John Etchemendy: Every decision tree to a certain depth, I mean, obviously, you can't go all the way. And it would have ways of weighing which ones are best. And so, then it would say, this is the best move for me at this time. That's why, in some sense, it was not theoretically very interesting. The AlphaGo- >> Peter Robinson: AlphaGo, which was a Google project. >> John Etchemendy: Which is a Google project. >> Peter Robinson: All right. >> John Etchemendy: This uses deep learning, it's a neural net, it's not explicit programming, we don't know. We don't go into it with an idea of, here's the algorithm we're gonna use, do this and then do this and do this. So, it was actually quite a surprise, particularly AlphaGo. >> Fei-Fei Li: Not to me but, [LAUGH], sure. >> John Etchemendy: No, no, but-. >> Fei-Fei Li: To the public. Yeah. >> John Etchemendy: To the public. >> Fei-Fei Li: Yeah. >> Peter Robinson: But if our colleague, I'm going at this one more time because I really wanna understand this, [LAUGH], I really do. Our colleague here at Stanford, Zhi-Xun Shen, who must be known to both of you, physicist here at Stanford, and he said to me, Peter, what you need to understand about the moment when a computer defeated Go. Go, which is a much more complicated, at least in the decision space, much, much bigger, so to speak, than chess. 
There are more pieces, more squares, all right. >> Fei-Fei Li: Yeah. >> Peter Robinson: And Zhi-Xun said to me, that whereas chess just did more quickly what a committee of grandmasters would have decided on, the computer in Go was creative, it was pursuing strategies that human beings had never pursued before, is there something to that? >> Fei-Fei Li: Yeah, so there's a famous. >> Peter Robinson: Fei-Fei is getting impatient with me, I'm asking such, go ahead. >> Fei-Fei Li: No, no, you're asking such good questions. So in the third game of the, I think it was the third game of the five games, there was a move. I think it was move 32 or 35, where the computer program made a move that really surprised every single Go master. Not only Lee Sedol himself, but everybody who was watching. >> Speaker 1: That's a very surprising move. >> Speaker 2: [LAUGH] I thought it was a mistake. >> Fei-Fei Li: In fact, even in post-analysis of how that move came about, the human masters would say, this is completely unexpected. What happens is that the computer, like John says, has the learning ability and has the inference ability to think about patterns. Or to decide on certain moves even outside of the trained, familiar domain of knowledge of the human masters, in this particular case. >> John Etchemendy: So, may I, Peter, let me. >> Peter Robinson: Go ahead, yes. >> John Etchemendy: Let me expand on that. The thing is, these deep neural nets are supremely good pattern recognition systems. But the patterns they recognize, the patterns they learn to recognize, are not necessarily exactly the patterns that humans recognize. So it was seeing something about that position, and it made a move that, because of the patterns it recognized in the board, made no sense from a human standpoint. In fact, all of the lessons in how to play Go tell you, never make a move that close to the edge that quickly. And so everybody thought it made a mistake, and then it proceeded to win. And I think the way to understand that is it's just seeing patterns that we don't see. >> Fei-Fei Li: It's computing patterns that are not traditionally human, and it has the capacity to compute. >> Peter Robinson: Okay. I'm trying to, we're already entering this territory, but I am trying really hard to tease out the, wait a moment. These are still just machines running zeros and ones, bigger and bigger memory, faster and faster ability to calculate. But we're still dealing with machines that run zeros and ones, that's one strand. And the other strand is, as you well know, 2001: A Space Odyssey, where the computer takes over the ship. >> Dave: Open the pod bay doors, HAL. >> HAL: I'm sorry, Dave, I'm afraid I can't do that. >> Peter Robinson: Okay, we'll come to this soon enough. Fei-Fei Li, in your memoir, The Worlds I See, quote, I believe our civilization stands on the cusp of a technological revolution with the power to reshape life as we know it. Revolution, reshape life as we know it, now you're a man whose whole academic training is in rigor. Are you going to let her get away with this kind of wild overstatement? >> John Etchemendy: No, I don't think it's an overstatement. I think she's right. >> Fei-Fei Li: He told me to write the book, [LAUGH] >> John Etchemendy: Mind you, Peter, it's a technology that is extremely powerful, that will allow us, and is allowing us to get computers to do things we never could have programmed them to do.
And it will change everything, but it's like what a lot of people have said, it's like electricity or it's like the steam revolution. It's not something necessarily to be afraid of, it's not that it's going to suddenly take over the world. That's not what Fei-Fei was saying. >> Fei-Fei Li: Right, it's a powerful tool that will revolutionize industries and the way we humans live. But the word revolution doesn't mean that it's a conscious being, it's just a powerful tool that changes things. >> Peter Robinson: I would find that reassuring if a few pages later, Fei-Fei had not gone on to write. >> Fei-Fei Li: No. >> Peter Robinson: There's no separating the beauty of science from something like, say, the Manhattan Project. Nuclear science, we can produce abundant energy, but it can also produce weapons of indescribable horror. AI has boogeymen of its own, whether it's killer robots, widespread surveillance, or even just automating all eight billion of us out of our jobs. Now, we could devote an entire program to each of those boogeymen, and maybe at some point we should. But now that you have scared me, even in the act of reassuring me, and in fact, it throws me that you're so eager to reassure me that I think maybe I really should be even more scared than I am. Let me just go right down, here's the killer robots. Let me quote the late Henry Kissinger, I'm just going to put these up and let you, you may calm me down if you can. Henry Kissinger, if you imagine a war between China and the United States, you have artificial intelligence weapons. Nobody has tested these things on a broad scale, and nobody can tell exactly what will happen when AI fighter planes on both sides interact. So you are then, I am quoting Henry Kissinger, who is not a fool after all. So you are then, in a world of potentially total destructiveness, Fei-Fei. >> Fei-Fei Li: So, like I said, I'm not denying how powerful these tools are. I mean, humanity before AI has already created tools and technology that are very destructive. Could be very destructive, we talk about the Manhattan Project, right? But that doesn't mean that we should collectively decide to use this tool in this destructive way. >> John Etchemendy: Okay, Peter, think back before you even had heard about artificial intelligence. >> Peter Robinson: Which actually, is it, five years ago? >> John Etchemendy: No, no. >> Peter Robinson: This is all happening so fast. >> John Etchemendy: Just five years ago, or ten years ago. >> Peter Robinson: Right. >> John Etchemendy: Remember the tragic incident where an Iranian passenger plane was shot down flying over the Persian Gulf by an Aegis system? >> Peter Robinson: Yes. And one of our ships. >> John Etchemendy: One of our ships, an automation, an automated system, because it had to be automated in order to be fast. >> Peter Robinson: Humans can't react that fast. >> John Etchemendy: Yeah, exactly, and in this case, for reasons that I think are quite understandable now that you understand the incident. But it did something that was horrible. That's not different in kind from what you can do with AI. So we, as creators of these devices or as users of AI, have to be vigilant about what kind of use we put them to. And when we decide to put them to one particular use, and there may be uses, the military has many good uses for them. We have to be vigilant about their doing what we intend them to do rather than doing things that we don't intend.
>> Peter Robinson: So you're announcing a great theme, and that theme is that what Dr. Fei-Fei Li has invented makes the discipline to which you have dedicated your life, philosophy, even more important, not less so. >> Fei-Fei Li: Yeah, that's why we're [CROSSTALK] >> Peter Robinson: Makes the human being more important, not less so. Am I making that? Am I being glib? Or is that onto- >> John Etchemendy: Let me tell you a story about that. So Fei-Fei used to live next door to me, or close to next door to me. And I was talking. >> Peter Robinson: I'm not sure whether that would make me feel more safe or more exposed. >> John Etchemendy: And I was talking to her, I was still provost at this time. And she said to me, you and John Hennessy started a lot of institutes that brought technology into other parts of the university. We need to start an institute that brings philosophy, and ethics, and the social sciences into AI, because AI is too dangerous to leave it to the computer scientists alone. Nothing wrong with computer science. >> Peter Robinson: There are many stories about how hard it was to persuade him when he was provost. You succeeded, just one more bogeyman briefly. >> Fei-Fei Li: Yeah. >> Peter Robinson: And we'll return to that theme that you just gave us there, and then we'll get back to the Stanford Institute. I'm quoting you again, this is from your memoir, the prospect of just automating all eight billion of us out of our jobs. That's the phrase you used? Well, it turns out that it took me mere seconds using my AI-enabled search algorithm, search device, to find a Goldman Sachs study from last year predicting that in the United States and Europe, some two-thirds of all jobs could be automated, at least to some degree. So why shouldn't we all be terrified, Henry Kissinger, world apocalypse. All right, maybe that's a bit too much, but my job. >> Fei-Fei Li: So I think job change is real. Job change is real with every single technological advance that human civilization has faced, that is real, and that's not to be taken lightly. We also have to be careful with the word job. Job tends to describe a holistic profession, or that a person attaches his or her income as- >> Peter Robinson: As an identity. >> Fei-Fei Li: Identity with, but there is also, within every job, pretty much within every job, there are so many tasks. It's hard to imagine there's one job that has only one singular task, right? Like being a professor, being a scholar, being a doctor, being a cook. All these jobs have multiple tasks. What we are seeing is technology changing how some of these tasks can be done. And it's true, as it changes these tasks, some of them, some part of them could be automated, it's starting to change how the jobs are. And eventually it's gonna impact jobs. So this is gonna be a gradual process, and it's very important we stay on top of this. This is why the Human-Centered AI Institute was founded: these questions are profound. They're by definition multidisciplinary. Computer scientists alone cannot do all the economic analysis, but economists not understanding what these computer science programs do will not by themselves understand the shift of the jobs. >> Peter Robinson: Okay, John, may I tell you. Go ahead. >> John Etchemendy: But let me just point something out. The Goldman Sachs study said that such and such percentage of jobs will be automated, or can be automated, at least in part. >> Peter Robinson: Yes.
>> John Etchemendy: Now, what they're saying is that a certain number of the tasks that go into a particular job. >> Peter Robinson: Filing, research. >> John Etchemendy: Exactly, so, Peter, you said it only took me a few seconds to go to the computer and find that article. Guess what? That's one of the tasks that would have taken you a lot of time. So part of your job has been automated. >> Peter Robinson: Okay, now let me tell you a story. >> Fei-Fei Li: But also empowered. >> John Etchemendy: Empowered. >> Peter Robinson: Empowered, okay, fine. Thank you, thank you, you're making me feel good. Now, let me tell you a story. All three of us live in California, which means all three of us probably have some friends down in Hollywood. And I have a friend who was involved in the writers strike. Okay, and here's the problem, to run a sitcom, you used to run a writer's room. And the writer's room would employ seven, a dozen on the Simpson show, the cartoon show, they'd had a couple of writers rooms running. They were employing 20. And these were the last kind of person you'd imagine a computer could replace, because they were well educated and witty and quick with words. And you think of computers as just running calculations, maybe spreadsheets, maybe someday they can eliminate accountants. But writers, Hollywood writers, and it turns out, and my friend illustrated this for me by saying, doing the artificial intelligence thing, where it had a prompt draft a skit for Saturday Night Live in which Joe Biden and Donald Trump are playing beer pong. 15 seconds. Now, professionals could have tightened it up or made it, but it was pretty funny and it was instantaneous. And do you know what that means? That means you don't need four or five of the seven writers. You need a senior writer to assign intelligence, the artificial. And you need maybe one other writer or two other writers to tighten it up or redraft it. It is upon us. And your artificial intelligence is going to get bad press when it starts eliminating the jobs of the chattering classes. And that has already begun. Tell me I'm wrong. >> John Etchemendy: Do you know, before the agricultural revolution, something like 80, 90% of all the people in the United States were employed on farms? >> Peter Robinson: Right. >> John Etchemendy: Now, it's down to 2% or 3%. And those same farms, that same land, is far, far more productive. Now, would you say that your life, or anybody's life now was worse off than it was, say, in the 1890s, when everybody was working on the farm? No, so, yes, you're right. It will change jobs. It will make some jobs easier. It will allow us to do things that we could not do before. And, yes, it will allow fewer people to do more of what they were doing before. And consequently, there will be fewer people in that line of work. That's true. >> Peter Robinson: That is true. >> Fei-Fei Li: I also want to just point out two things. One is that jobs is always changing, and that change is always painful. And as computer scientists, as philosophers, also as citizens of the world, we should be empathetic of that. And nobody is saying we should just ignore that changing pain. So this is why we're studying this, we're trying to talk to policymakers. We're educating the population. In the meantime, I think we should give more credit to human creativity in the face of AI. I started to use this example that's not even AI. Think about the advanced, speaking of Hollywood graphics technology, CGI and all that, right? 
>> Peter Robinson: The video gaming industry or- >> Fei-Fei Li: No, just animations and all that, right, one of many of our, including our children's favorite animation series is by Studio Ghibli. Princess Mononoke, My Neighbor Totoro, Spirited Away, all of these were made during a period where computer graphics technology was far more advanced than these hand-drawn animations. Yet the beauty, the creativity, the emotion, the uniqueness in these films continue to inspire and just entertain humanity. So I think we need to still have that pride and also give the credit to humans, let's not forget our creativity and emotion and intelligence is unique, it's not going to be taken away by technology. >> Peter Robinson: Thank you, I feel slightly reassured. I'm still nervous about my job, but I feel slightly reassured. But you mentioned government a moment ago, which leads us to how we should regulate AI, let me give you two quotations, I'll begin, I'm coming to the quotation from the two of you. But I'm going to start with a recent article in the Wall Street Journal by Senator Ted Cruz of Texas and former Senator Phil Gramm, also of Texas. The Clinton administration took a hands-off approach to regulating the early Internet. In so doing, it unleashed extraordinary economic growth and prosperity. The Biden administration, by contrast, is impeding innovation in artificial intelligence with aggressive regulation. That's them, this is you, also a recent article in the Wall Street Journal, John Etchemendy and Fei-Fei Li. President Biden has signed an executive order on artificial intelligence that demonstrates his administration's commitment to harness and govern the technology. President Biden has set the stage, and now it is time for Congress to act. Cruz and Gramm, less regulation, Etchemendy and Li, the Biden administration has done well, now Congress needs to give us even more. >> John Etchemendy: No. >> Peter Robinson: All right, John, so. >> John Etchemendy: No, I don't agree with that. So I believe regulating any kinda technology is very difficult. And you have to be careful not to regulate too soon or not to regulate too late. Let me give you another example, you talked about the Internet, and it's true. The government really was quite hands off, and that's good, it worked out. >> Peter Robinson: It worked out. >> John Etchemendy: But now let's also think about social media, which has not worked out exactly the way we want it. We originally believed that we were gonna enter a golden age in which-. >> Peter Robinson: Friendship, comedy. >> John Etchemendy: Well, and everybody would have a voice and we could all live together, Kumbaya and so forth. And that's not what happened. >> Peter Robinson: Jonathan Haidt has a new book out on the particular pathologies among young people from all of these social media, and not an argument. It's an argument, but it's based on lots of data. >> John Etchemendy: Yeah, [COUGH] so it seems to me that I'm in favor of very light-handed and informed regulation to try to put up sorta bumpers. I don't know what the analogy is. >> Fei-Fei Li: Guardrails. >> John Etchemendy: Guardrails for the technology. I am not for heavy-handed, top-down regulation that stifles innovation. >> Peter Robinson: Okay, here's another, let me get onto this, I'm sure you'll be able to adapt your answers to this question, too. >> Fei-Fei Li: Okay. >> Peter Robinson: I'm continuing your Wall Street Journal piece.
Big tech companies can't be left to govern themselves around here, Silicon Valley, those are fighting words. Academic institutions should play a leading role in providing trustworthy assessments and benchmarking of these advanced technologies. We encourage an investment in human capital to bring more talent to the field of AI with academia and the government. Okay, now it is mandatory for me to say this, so please forgive me, my fellow Stanford employees. Apart from anything else, why should academic institutions be trusted? Half the country has lost faith in academic institutions, DEI, the whole woke agenda, antisemitism on campus. We've got a Gallup, recent Gallup poll showing the proportion of Americans who expressed a great deal or quite a lot of confidence in higher education this year came in at just 36%. And that is down in the last eight years from 57%, you are asking us to trust you at the very moment when we believe we have good reason to knock it off. Trust you? Okay, Fei-Fei. >> Fei-Fei Li: So I'll start with the first half of the answer, I'm sure John has a lot to say. I do want to make sure, especially wearing the hats of co-directors of HAI. When we talk about the relationship between government and technology, we tend to use the word regulation. I really want to double-click, I want to use the word policy. And policy and regulation are related, but not the same. When John and I wrote that Wall Street Journal opinion piece, we really are focusing on a piece of policy that is to resource public sector AI, to resource academia. Because we believe that AI is such a powerful technology and science, and academia and the public sector still have a role to play to create public good. And public goods are curiosity-driven knowledge exploration, our cures for cancers, the maps of biodiversity of our globe, our discovery of nanomaterials that we haven't seen before, different ways of expressing in theater, in writing, in music. These are public goods, and when we are looking, when we are collaborating with the government on policy, we're focusing on that. So I really want to make sure regulation, we all have personal opinions, but there's more than regulation in policy. >> Peter Robinson: Let me make one last run at your theory here, although I'm asking questions that you'd, I'm quite sure you'd like to take me out and swap me around at this point, John, but this is serious. You've got the Stanford Institute for Human-Centered Artificial Intelligence and that's because you really think this is important. But we live in a democracy and you're going to have to convince a whole lot of people. So let me take one more run at you and then hand it back to you. John, your article in the Wall Street Journal, again, let me repeat this, we encourage an investment in human capital to bring more talent to the field of AI. With academia and the government, that means money, and investment means money, and it means taxpayers' money. Here's what Cruz and Gramm say in the Wall Street Journal. The Biden regulatory policy on AI has everything to do with special interest rent-seeking. Stanford faculty make well above the national average income, we are sitting at a university with an endowment of tens of billions of dollars. John, why is your article in the Wall Street Journal not the very kind of rent-seeking that Senator Cruz and Senator Gramm are talking about, are you kidding? >> John Etchemendy: Peter, let's take another example.
So one of the greatest policy decisions that this country has ever made was when Vannevar Bush, advisor to, at that time, President Truman, convinced- >> Peter Robinson: He stayed on through Eisenhower, as I recall. So it's important to know he's bipartisan. >> John Etchemendy: Exactly, no, it was not a partisan an issue at all, but convinced Truman to set up the NSF for funding- National Science Foundation. Right, for funding curiosity based research, advanced research at the universities, and then not to say that companies don't have any role, not to say that government has no role. They both have roles, but they're different roles. And companies tend to be better at development, better at producing products and tapping into things that can, within a year or two or three, can be a product that will be useful. Scientists at universities don't have that constraint. They don't have to worry about when is this going to be commercial. >> Peter Robinson: Commercial, right. >> John Etchemendy: And that has, I think, had such an incalculable effect on the prosperity of this country, on the fact that we are the leader in every technology field. It's not an accident that we're the leader in every technology field, we didn't used to. >> Peter Robinson: And does it affect your argument, if I add, it also enabled us or contributed to a victory in the Cold War, the weapon systems that came out of universities? All right. >> John Etchemendy: Well, no, absolutely, and President Reagan- >> Peter Robinson: It ended up being a defensive democracy, kind of, you could argue from all kinds of points of view, as it was a good ROI for taxpayers money. >> John Etchemendy: So we're not arguing for higher salaries for faculty or anything of that sort. But we think, particularly in AI, it's gotten to the point where scientists at universities can no longer play in the game because of the cost of the computing, the inaccessibility of the data. That's why you see all of these developments coming out of companies. That's great, those are great developments. But we need to have also people who are exploring these technologies without looking at the product, without being driven by the profit motive. And then eventually, hopefully, they will develop discoveries, they will make discoveries, will then be commercializable. >> Peter Robinson: Okay, I noticed in your book, Fei-Fei, I was very struck that you said, I think it was about a decade ago, 2015, that you noticed that you were beginning to lose colleagues to the private sector. >> Fei-Fei Li: Yeah. >> Peter Robinson: Presumably because they just pay so phenomenally well around here in Silicon Valley. But then there's also the point that to get to make progress in AI, you need an enormous amount of computational power, and assembling all those ones and zeros is extremely expensive. >> Fei-Fei Li: Exactly. >> Peter Robinson: Chat GPT what is the parent company? >> Fei-Fei Li: OpenAI. >> Peter Robinson: OpenAI got started with an initial investment of a billion dollars. An initial, friends and family capital of a billion dollars is a lot of money, even around here. Okay, that's the point you're making. >> Fei-Fei Li: Yes. >> Peter Robinson: All right, it feels to me as though every one of these topics is worth a day long seminar, actually, I think they are. >> John Etchemendy: And by the way, this has happened before, where the science has become so expensive that university level research and researchers could no longer afford to do the science. 
It happened in high energy physics. High energy physics used to mean you had a van de Graaff generator in your office [LAUGH] and that was your accelerator. >> Peter Robinson: Or you can do what you needed to do. >> John Etchemendy: And then the energy levels were higher and higher. And what happened? Well, the federal government stepped in and said, we're gonna build an accelerator, Stanford- >> Peter Robinson: Stanford Linear Accelerator. >> John Etchemendy: Exactly. >> Peter Robinson: Sandia Labs, Lawrence Livermore, all these are, at least in part, federal establishment experts. >> Fei-Fei Li: CERN. >> Peter Robinson: CERN, which is European, right. >> John Etchemendy: Well, Fermilab, the first accelerator was SLAC, Stanford Linear Accelerator Center, then Fermilab and so on and so forth. Now, CERN is actually late in the game, and it's European consortium. But the thing is, we could not continue the science without the help of the government, in government [INAUDIBLE]. >> Fei-Fei Li: Well, there's another, and then in addition to high energy physics and then bio, right? Especially with genetic sequencing and high throughput genomics, and biotech is also changing. And now you see a new wave of biology labs that are actually heavily funded by the combination of government and philanthropy and all that. And that stepped in to supplement what the traditional university model is. And so we're now here with AI and computer science. >> Peter Robinson: Okay, we have to do another show on that one alone, I think. The singularity, good, this is good. Reassuring, you both are rolling your eyes. Wonderful, I feel better about this already, good. Ray Kurzweil, you know exactly where this is going. Ray Kurzweil writes a book in 2005, this gets everybody's attention and still scares lots of people to death, including me. The book is called The Singularity is Near. And Kurzweil predicts a singularity that will involve, and I'm quoting him, the merger of human technology with human intelligence. He's not saying the tech will mimic more and more closely human intelligence, he is saying they will merge. I set the date for the singularity, representing a profound and disruptive transformation in human capability as 2045. Okay, that's the first quotation. Here's the second, and this comes from the Stanford course catalog's description of the philosophy of artificial intelligence, a freshman seminar that was taught last quarter, as I recall, by one John Etchemendy. Here's from the description. Is it really possible for an artificial system to achieve genuine intelligence, thoughts, consciousness, emotions? What would that mean? John, is it possible? What would it mean? >> John Etchemendy: I think the answer is actually no. >> Peter Robinson: Thank goodness you kept me waiting for a moment. >> John Etchemendy: The fantasies that Ray Kurzweil and others have been spinning up, I guess that's the way to put it. Stem from a lack of understanding of how the human being really works and don't understand how crucial biology is to the way we work, the way we are motivated, how we get desires, how we get goals, how we become humans, become people. And what AI has done so far, AI is capturing what you might think of as the information processing piece of what we do. So part of what we do is information processing. >> Peter Robinson: So it's got the right frontal cortex but hasn't got the left frontal cortex. >> John Etchemendy: Yeah, it's an oversimplification, but yes. 
>> Peter Robinson: Imagine that on television, all right. >> John Etchemendy: So I actually think it is, first of all, the date 2045 is insane. That will not happen. And secondly, it's not even clear to me that we will ever get that. >> Fei-Fei Li: Wait, I can't believe I'm saying this. In his defense, I don't think he's saying that 2045 is the day that the machines become conscious beings like humans. It's more an inflection point of the power of the technology that is disrupting the society. >> Peter Robinson: He's right, we're already there. >> Fei-Fei Li: Exactly, that's what I'm saying. >> John Etchemendy: I think you're being overly generous. [LAUGH] [LAUGH] I think that what he means by the singularity is the date at which we create an artificial intelligence system that can improve itself and then get into a cycle, a recursive cycle, where it becomes a super intelligence. >> Peter Robinson: Yes. >> John Etchemendy: And I deny that. >> Peter Robinson: He's playing the 2001 Space Odyssey game here. Different question, but related question. In some ways, this is a more serious question, I think. Although that's serious, too. Here's the late Henry Kissinger again, cool. We live in a world which has no philosophy. There is no dominant philosophical view. So the technologists can run wild, they can develop world changing things, and there's nobody to say, we've got to integrate this into something. All right, I'm going to put it crudely again, but in China a century ago, we still had Confucian thought dominant among, at least among the educated classes. On my very thin understanding of Chinese history. In this country, until the day before yesterday, we still spoke, without irony, of the Judeo-Christian tradition, which involved certain concepts about morality, what it meant to be human. It assumed a belief in God, but it turned out you could actually get pretty far along even if you didn't believe in. Okay, and Kissinger is now saying it's all fallen apart. There is no dominant philosophy. This is a serious problem, is it not? There's nothing to integrate AI into. You take his point. It's up to the two of you to-. >> Fei-Fei Li: You are the philosopher. >> John Etchemendy: You're the Buddhist. >> Fei-Fei Li: You're the philosopher. >> Fei-Fei Li: I think this is a great first of all, thank you for that quote. I didn't read that quote from Henry Kissinger. I mean, this is why we founded the Human Centered AI Institute. These are the fundamental questions that our generation needs to figure out. >> Peter Robinson: So that's not just a question. That's the question. >> Fei-Fei Li: It was one of the fundamental questions. It's also one of the fundamental questions that illustrates why universities are still relevant today. >> John Etchemendy: And Peter one of the things that Henry Kissinger says in that quote is that there is no dominant philosophy. >> Peter Robinson: Yes. >> John Etchemendy: There's no one dominant philosophy like the Judeo-Christian tradition, which used to be the dominant. >> Peter Robinson: It's a different conversation in Paris in the 12th century, for example, the University of Paris. >> John Etchemendy: In order to take values into account when you're creating an AI system, you don't need a dominant tradition. What you need, for example, for most ethical traditions, is the golden rule. >> Peter Robinson: Okay, so we can still get along with each other. Even when it comes to deep, deep questions of value such as this, we still have enough common ground. 
>> John Etchemendy: I believe so. >> Peter Robinson: I heave yet another sigh of relief. Okay, let's talk a little bit. We're talking a little bit about a lot of things here but so it is. Let us speak of many things as it is written in Alice in Wonderland, the Stanford Institute. The Stanford Institute for Human centered artificial intelligence, of which you are co-directors. And I just have two questions and respond as you'd like. Can you give me some taste, some feel for what you're doing now and in some ways more important, but more elusive, where you'd like to be in just five years say? Everything in this field is moving. My impulse is to say ten years, because it's a rounder number. It's too far off in this field, Fei Fei. >> Fei-Fei Li: I think what really has happened in the past five years by Stanford HAI, among many things. >> Peter Robinson: I just wanna make sure everybody is following you H-A-I Stanford HAI is the way it's known on this campus. >> Fei-Fei Li: Yes. >> Peter Robinson: All right, go ahead. >> Fei-Fei Li: Yeah, is that we have put a stick on the ground for Stanford as well as for everybody that this is an interdisciplinary study. That AI, artificial intelligence, is a science of its own. It's a powerful tool. And what happens is that you can welcome so many disciplines to cross pollinate around the topic of AI or use the tools of AI to make other sciences happen or to explore other new ideas. And that concept of making this an interdisciplinary and multidisciplinary field is what I think Stanford HAI brought to Stanford. And also, hopefully, to the world, because like you said, computer science is kind of a new field. The late John McCarthy coined the term in the late 50s. Now it's moving so fast, everybody feels it's just a niche computer science field that's just like making its way into the future. But we are saying, no, look broad, there's so many disciplines that can be put here. >> Peter Robinson: Who competes with the Stanford Institute in Human Centered Design? Is there such an institute at Harvard or Oxford or Beijing? I just don't know what those- >> John Etchemendy: So in the five years since we launched, there have been a number of similar institutes that have been created at other universities. We don't see that as competition in any way. >> Peter Robinson: If these arguments you've been making are valid, then we need them. >> Fei-Fei Li: Yeah, we see that as a movement. >> John Etchemendy: We need them. And part of what we want to do, and part of what I think we've succeeded to a certain extent doing is communicating this vision of the importance of keeping the human and human values at the center when we are developing this technology. When we are applying this technology. And we want to communicate that to the world. We want other centers that adopt a similar standpoint. And importantly, and one of the things that Fei Fei didn't mention is one of the things we try to do is educate. And educate, for example, legislators so that they understand what this technology is, what it can do, what it can't do. >> Peter Robinson: So you're traveling to Washington, or the very generous trustees of this institution are bringing congressional staff, and they're both? >> John Etchemendy: Both. >> Peter Robinson: Both are happening. >> Fei-Fei Li: Yeah. >> Peter Robinson: All right, so Fei-Fei, first of all, did you teach that course in Stanford HAI, or was the course located in the philosophy department, or cross listing? 
I'm just trying to get a feel for what's actually taking place there now. >> John Etchemendy: Yeah, I actually taught it in the confines of the HAI building. >> Peter Robinson: [LAUGH] Okay, so it's in HAI. >> John Etchemendy: No, it's a philosophy course. >> Fei-Fei Li: It's listed as a philosophy course, but taught in the HAI. >> Peter Robinson: He's the former provost, he's an interdisciplinary walking wonder. >> Fei-Fei Li: Yeah. >> Peter Robinson: And your work in AI-assisted healthcare. >> Fei-Fei Li: Yep. >> Peter Robinson: Is that taking place in HAI, or is it at the university medical school? >> Fei-Fei Li: Well, that's the beauty, it's taking place in HAI, computer science department, the medical school. It even has collaborators from the Law School, from the Political Science Department. So that's the beauty, it's deeply interdisciplinary. >> Peter Robinson: If I were the provost, I'd say this is starting to sound like something that's about to run amok. Doesn't that sound a little too interdisciplinary, John? Don't we need to define things a little bit here? >> John Etchemendy: Let me say something, so Steve Denning, who was the Chair of our Board of Trustees for many years and has been a long, long time supporter of the university in many, many ways. In fact, we are the Denning co-directors of Stanford HAI. Steve saw five, six years ago, he said AI is going to impact every department at this university. And we need to have an institute that makes sure that that happens the right way, that that impact does not run amok. >> Peter Robinson: All right, where would you like to be in five years? What's a course you'd like to be teaching in five years, what's a special project? >> Fei-Fei Li: I would like to teach a freshman seminar called the greatest discoveries by AI. >> Peter Robinson: Okay, a last question, I have one last question, but that does not mean that each of you has to hold yourself to one last answer, because it's a kind of open-ended question. I have a theory, but all I do is wander around this campus. The two of you are deeply embedded here, and you ran the place for 17 years. So you'll know more than I will, including, you may know that my theory is wrong, but I'm going to trot it out, modest though it may be, even so. Milton Friedman, the late Milton Friedman, who when I first arrived here was a colleague at the Hoover Institution. In fact, by some miracle, his office was on the same hallway as mine and I used to stop in on him from time to time. He told me that he went into economics because he grew up during the Depression, and the overriding question in the country at that time was how do we satisfy our material needs? There were millions of people without jobs, there really were people who had trouble feeding their families. All right, I think of my own generation, which is more or less John's generation. You come much later, Fei-Fei. >> Fei-Fei Li: Thank you [LAUGH] >> Peter Robinson: And for us, I don't know what kind of discussions you had in the dorm room, but when I was in college there were bull sessions about the Cold War. The Cold War was real to our generation. That was the overriding question, how can we defend our way of life, how can we defend our fundamental principles? All right, here's my theory. For current students, they've grown up in a period of unimaginable prosperity, material needs are just not the problem. They have also grown up during a period of relative peace. The Cold War ended, you could put different. 
The Soviet Union declared itself defunct in 1991, Cold War is over at that moment at the latest. The overriding question for these kids today is meaning, what is it all for, why are we here? What does it mean to be human? What's the difference between us and the machines? And if my little theory is correct, then by some miracle this technological marvel that you have produced will lead to a new flowering of the humanities. Do you go for that, John? >> John Etchemendy: Do I go for it? I would go for it- [LAUGH] If it were going to happen. >> Peter Robinson: Did I put that in a slightly sloppy way? >> John Etchemendy: No, I think it would be wonderful, it's something to hope for. Now I'm going to be the cynic. So far what I see in students is more and more focus, for Stanford students more and more focus on technology. >> Peter Robinson: Computer science is still the biggest major at this university. >> Fei-Fei Li: Yeah. >> John Etchemendy: Yeah, and we have tried, at HAI we have actually started a program called Embedded EthiCS, where the CS at the end of ethics is capitalized so it's Computer Science. >> Peter Robinson: That'll catch the kids' attention [LAUGH] >> John Etchemendy: No, we don't have to catch their attention. What we do is virtually all of the courses in computer science, the introductory courses, have ethics components built in. So you have a problem set this week, and that'll have a whole bunch of very difficult math problems, computer science problem, and then it will have a very difficult ethical challenge. And it'll say here's the situation, you are programming an AI system, and here's the dilemma. Now discuss, right, what are you gonna do? So we're trying to bring, I mean this is what Fei-Fei wanted. We're trying to bring- >> Peter Robinson: This is new within? >> John Etchemendy: Ethics, within, yeah, the last couple years. >> Peter Robinson: Okay. >> John Etchemendy: Two, three years, we're trying to bring the attention to ethics into the computer science curriculum. And partly that's because students tend to follow the path of least resistance. >> Peter Robinson: Well they also, let's put it again, I'm saying things crudely again and again, but someone must say it, they follow the money. So as long as this valley that surrounds us rewards brilliant young kids from Stanford with CS degrees as richly as it does, and it is amazingly richly, they'll go get CS degrees, right? >> Fei-Fei Li: Well, I do think it's a little crude. [LAUGH] I think money is one surrogate measure of also what is advancing in our time. Technology right now truly is one of the biggest drivers of the changes of our civilization. When you're talking about what does this generation of students talk about? I was just thinking that 400 years ago, when the scientific revolution was happening, what is in the dorms? Of course, it's all young men [LAUGH] in Cambridge or Oxford, but that must also be a very exciting and interesting time. Of course, there wasn't Internet and social media to propel the travel of the knowledge. But imagine there was, the blossoming of discovery and of our understanding of the physical world. Right now, we're in that kind of great era of technological blossoming. It's a digital revolution. So, the conversations in the dorm, I think it's a blend of the meaning of who we are as humans as well as our relationship to these technology we're building. And so it's a- >> Peter Robinson: So properly taught technology, can subsume or embed philosophy literature. 
>> Fei-Fei Li: Of course, can inspire, can inspire. And also think about it. What follows scientific revolution is a great period of change, of political, social, economical change, right? And we're seeing that. >> Peter Robinson: All for the better, that's right. >> Fei-Fei Li: And I'm not saying it's necessarily for the better, but we haven't even peaked the digital revolution, but we're already seeing the political, socioeconomic changes. So this is, again, back to Stanford HAI when we founded it five years ago. We believe all this is happening, and this is an institute where these kind of conversations, ideas, debates should be taking place, education programs should be happening. And that's part of the reason we did this. >> John Etchemendy: Let me tell you. Yeah, so, as you pointed out, I just finished teaching a course called Philosophy of Artificial Intelligence. >> Peter Robinson: About which I found out too late, I would have asked permission to audit your course John. >> John Etchemendy: No, you're too old. [LAUGH] And about half of the students were computer science students or planned to be computer science majors. Another quarter planned to be symbolic systems majors, which is a major that is related to computer science. And then there was a smattering of others. And these were people, every one of them, at the end of the course, and I'm not saying this to brag, every one of them said, this is the best course we've ever taken. And why did they say that? It inspired, it made them think. It gave them a framework for thinking, a framework for trying to address some of these problems, some of the worries that you've brought out today, and how do we think about them? And how do we not just become panicked because of some science fiction movie that we've seen or because we read Ray KurzweiI [LAUGH] so. >> Peter Robinson: Maybe it's just as well I didn't take the course. I'm sure John would have given me a C minus at best. >> John Etchemendy: Grade inflation. [LAUGH] So it's clear that these kids, students, are looking for the opening to think these things and to understand how to address ethical questions, how to address hard philosophical questions. And that's what they got out of the course. >> Fei-Fei Li: And that's a way of looking for meaning in this time. >> Peter Robinson: Yes, it is. Dr. Fei-Fei Li and Dr. John Etchemendy, both of the Stanford Institute for Human Centered Artificial Intelligence, thank you. Thank you, Peter. >> John Etchemendy: Thank you, Peter. >> Peter Robinson: For uncommon knowledge and the Hoover Institution and Fox Nation, I'm Peter Robinson. [MUSIC]
AI_LLM_Stanford_CS229
Stanford_CS330_I_Unsupervised_PreTrainingContrastive_Learning_l_2022_I_Lecture_7.txt
So to start to get into today's content, so far, we've been talking a lot about few-shot learning by using meta learning. And the problem setup for this was we were given data from some number of training tasks. And we wanted to quickly solve a new task more quickly, more proficiently, or more stably. And we reviewed a few different methods for doing that-- black box meta learning methods, optimization-based meta learning methods, and nonparametric methods. And one big assumption that these algorithms make is that you have access to a set of training tasks. And there may be scenarios where you don't have a large number of training tasks. And so that's really the motivation for the lectures that we're going to be talking about-- the topics we're going to be talking about this week. And in particular, we're going to be considering scenarios where you only have one large batch of unlabeled examples. And we want to kind pretrain models that allow us to perform well with small amounts of data, perform well on new tasks with small amounts of data by pretraining on this unlabeled data. And so in particular, this week is all about unsupervised representation learning. And the lecture today will be one class of methods for doing that, which is called contrastive learning. And the lecture on Wednesday will be another class of methods for doing that which are methods based off of reconstruction. And at the end of this lecture, I'll talk about-- it should be apparent as we kind of go through the lecture but also talk about how these methods relate to meta learning methods. It actually turns out that there actually is a pretty close relationship between the methods that we'll talk about today and the methods that we've actually already been talking about for the past two weeks. Cool. So the goals for the lecture are to understand contrastive learning, including the intuition, design choices, and how to implement them, and how these algorithms relate to meta learning. Cool. So unlike meta learning, the main data that we'll have access to an unsupervised pretraining is a large unlabeled data set. And so that will have a large set of examples denoted as xi without their corresponding labels. And the goal of this unsupervised pretraining process is to take this unlabeled data and produce a pretrained model such that when we take that pretrained model and fine-tune it on a much smaller label data set, we can do well on our new task. So you can in many ways think of this as the same setup as the meta learning algorithms you've seen before, except instead of having access to a large number of tasks, we have access to this diverse unlabeled data set. And then what we'll be talking about this week is this first arrow on the slide here where, how do we basically go from that diverse unlabeled data set to a pretrained model? The unlabeled data set could be a bunch of images that you found on the internet. It could be a bunch of sentences or text. It could also be something more domain specific. Like if you're in an education domain, maybe it's a lot of student solutions to a problem. But you don't have feedback or labels on those solutions, things along those lines. Cool. And so today, we'll be talking about contrastive learning for unsupervised pretraining. And really, the key idea behind contrastive learning is that we want to learn our representation and specifically a mapping from inputs to a vector representation such that similar examples have similar representations. 
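Stepping back to that overall setup for a moment, here is a minimal sketch of the pretrain-then-fine-tune recipe in PyTorch-style Python. None of this is code from the lecture: the tiny encoder, the learning rates, and all of the names (pretrain, finetune, unlabeled_loader, labeled_loader, contrastive_loss) are illustrative assumptions, and the contrastive loss itself is left as an argument that the rest of the lecture will fill in.

import torch
import torch.nn as nn

# Illustrative encoder: maps an input x to a vector representation z = f_theta(x).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 128))

def pretrain(encoder, unlabeled_loader, contrastive_loss, epochs=10):
    # Stage 1: unsupervised pretraining on a large unlabeled dataset.
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x in unlabeled_loader:  # batches of raw inputs, no labels
            loss = contrastive_loss(encoder, x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder

def finetune(encoder, labeled_loader, num_classes, epochs=10):
    # Stage 2: fine-tune on a much smaller labeled dataset for the downstream task.
    head = nn.Linear(128, num_classes)  # small task-specific head on top of z
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in labeled_loader:
            loss = ce(head(encoder(x)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder, head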
So examples that are semantically related to one another should map to a-- map to points in space that are closer than examples that are semantically different from each other. And so, for example, maybe you have two examples with the same class label. We want to learn the representation space such that these examples have a very similar representation. Of course, if we did something like this with examples with class labels, we actually would need labels for those examples in order to train those examples to have similar representations. And this is very closely related to things like Siamese networks and prototypical networks where we are training a network to predict whether or not two examples had the same class label or whether they had different class labels. And so yeah, the stuff that we talked about on Wednesday last week in some ways can be viewed as a form of contrastive learning. But it requires labels. And so what we want to do is think about how we might do something like that without access to labels. And so there's a few different things that we could imagine doing, in particular a few different ways that we could imagine creating examples that might be semantically similar to one another. One thing that we could do is we could take an image and say that patches of that image that are nearby to one another probably should have a similar representation. Because if they're nearby to each other, that means that they're probably maybe from the same object or from a similar part of the same object. And so that's what was done in the CPC paper. They tried to encourage those patches to have a similar representation. Instead of taking patches of an input, we could also augment an example. And so in this case, we could take our image. And then we could flip it and also crop it and say that the augmented example should have a similar representation as the original example. And if your augmentations-- if your augmentations preserve the class of the image, then they should produce representations that correspond to these kinds of semantic categories that you may want it to correspond to. So something like this was done in the SimCLR paper. There's also a version of this where we could take videos and say that images that are nearby in time from the same video should have a similar representation. So really, the key idea behind contrastive learning is to take things that we intuitively think should have similar representations, encourage them to have similar representations such that we can then use that representation space in order to do transfer to different downstream tasks. Cool. So then there's the question of, how do we actually implement this intuition in practice? So we can use a running example of trying to say that the two images at the top have similar representation and the two images at the bottom have similar representations. Now, one thing you could do is you can say that maybe the first image is x, the second image is x prime. And you could train for a model f that encourages the representation of the first image and the representation of the second image. We could basically encourage this to have to be very close, maybe in Euclidean space or in some other space. And we could basically optimize for our representation functions such that these are close together. Now, does anyone see kind of a problem with an objective like this? Yeah. Yeah, [INAUDIBLE] presentation, regardless of the input. Yeah, exactly. 
Yeah, so there's a degenerative solution to this loss function, which is to basically just map everything, all of the images to a single constant vector. And then you would minimize this loss function very nicely because you would achieve 0 loss here. But it means that you don't get a very good representation space. And so we can't simply minimize the difference between these representations because of that collapse. And instead of only comparing examples and saying that examples should have similar representations, we also need to say what needs to be different. We need to also contrast the images as well. And so that's one of the key ideas behind contrastive learning. And so in particular, if we take these three examples here, we have our embedding space. Then we could try to train for our embedding space in a way that first embeds these examples and then brings together the representations of similar examples while also pushing apart the representations of different examples. And so this is basically the key idea behind contrastive learning. And from here, there's really just only two key design choices behind these algorithms. The first is how do you actually implement the loss function. We'll go over two different loss functions in this lecture. But there's actually a number of loss functions that people have used in practice. And then also choosing what to compare and contrast. We talked about some options at the very beginning with the pictures of the dogs. But there's other considerations there as well. Cool. So let's first get into the implementation of the loss function for contrastive learning. So the first-- I guess in terms of-- for some terminology, if we want to bring two examples together and push apart two examples, typically, we will refer to the first example as the anchor because that's what we're going to be comparing and contrasting to. And then we'll refer to the next example as a positive example. And the third example is the negative. So we're trying to bring the positive towards the anchor and push the negative away from the anchor. And really, the simplest form of loss function is referred to a triplet loss. And we can start by just taking the loss function that we wrote down before. And instead of only minimizing the distance between the embedded x and x prime, we can also add a term that basically maximizes the distance between-- actually, using the notation here, we'll call this x plus. That will also maximize the distance between the embedded x and the negative. So this would correspond to f theta of x minus f theta of x minus squared. And because we have a negative here, we're going to be maximizing this distance when we minimize this overall objective function. Yeah. If there were a bunch of case losses that you want to contrast against, you just extend this loss function for all the different classes or just [INAUDIBLE]?? Yeah, so the question is, what happens if you actually have a lot of different negatives that you want to contrast against? And we'll get into that actually after this slide. Yeah. Is there a way so that you can control how much you push away the contrasting examples? Yeah, so the question is, is there a way to control how much you push away from the contrasting examples? And there's actually two different aspects of that question. One is that maybe for some examples, you want to push away more than others. And in practice, contrastive learning algorithms will push away the same amount for all of the negatives. 
But that doesn't mean that all of them will end up at the same distance because some of them will naturally be harder to push away than others. And so oftentimes, even if you push apart all of them the same amount, they'll still give you a meaningfully-- the distances in the space will be meaningful. But the second part of that question is, there's actually an issue with this loss function to some degree, which is that this term of the loss function is somewhat unbounded, which is that you can kind of basically put this to infinity. You can make them maximally far apart. And so when designing either your embedding function or your loss function, you need to make sure that you don't have an unbounded loss function. Otherwise, it will go to negative infinity. And there's a few different ways to accomplish this. One thing that you could do is you could make sure that your embedding space is normalized in some way, it's bounded itself. But one thing that's common to do with a triplet loss is to actually use a hinge loss here, which is that instead of rewarding it more and more, the more that it pushes away, it says that once it's a certain distance away, then it no longer gets rewarded for pushing it any further. And so you probably have seen something like a hinge loss in a machine learning class before. The way it looks like is, if we look at the distance, we want to, in this case-- or the difference of these distances, basically, as this increases, we want it to get a lower loss. So the y-axis here is going to be the loss value. So as it increases, we want it to get a lower loss. But at some point, we want it to not continue getting rewarded because it doesn't need to push it all the way to negative infinity. And so what a hinge loss would do is basically give you a shape that looks like this, where up until some point, it's going to be rewarded for increasing the distance. And after that, it will be-- it will just sit at 0 loss. And this distance right here is referred to as your margin. And this is something that you can control. This is like a hyperparameter. And this controls how much you are-- how far apart you want your examples to be. And so the way that you actually will implement this is instead of having your loss function be the difference between these two things, you're going to-- first, you will add your margin. Maybe your margin is referred to as epsilon. And then you will basically bound this below by 0. And so you can take the max between this and 0. And this will basically just apply this function to what we had before. And then we'll minimize this whole thing with respect to theta. And so that loss function is written out right here. Yeah. Does the xx plus x minus come from the same mini batch? I mean, I don't know how we, I mean, move over the two x, x plus x, x minus eta. Yeah, so the question is, where do xx plus and x minus come from? Oh, yeah. Yeah, so they can come from-- these triplets can come from a number of different places. In one case, it could-- if you have labels, then it could come from the-- whether or not examples have the same label. So you could sample two examples of the same label and one example with the different label. And that would give you an anchor, a positive, and a negative. One of the other things that we talked about is instead of using class labels, you can also use augmentations. And so you could sample an example and sample a different example. That will be the anchor and the negative. 
And then to get another positive, you can augment and create another view of the anchor by, for example, applying a random crop or flipping the image or doing something like that. [INAUDIBLE] mini batch. So it means all of them from the same mini batch. Is my understanding correct? So in practice, you'll sample a mini batch of these triplets. So you'll sample-- you'll sample not just one triplet. But you'll sample a mini batch of them. OK, how do we choose an anchor? Is it for each of the elements in the mini batch, we choose the anchor then take a break on the x plus and x minus relatively? So yeah, the sample and anchor, you can basically-- when you sample an image from your unlabeled data set, that can be the anchor. You sample a different image, and that could be the negative. And then the positive, you can augment the anchor to get the positive. So we'll also render an algorithm too, which should maybe give a little bit more intuition for this. Yeah. What if I-- yeah, negative examples are accidentally created in the same class or something or from the same class as the anchor. How bad is that in the training process? Yeah, so the question is, what happens if the negative example is accidentally kind of the same class as the anchor? And this is generally a great question. And in practice, especially when you have unlabeled examples and you're contrasting against augmentations, you will sample negatives that may actually be somewhat similar to the example that the anchor that you sampled. This is OK. The most important thing is that-- well, the most important thing is that happens somewhat rarely. And so if-- general, if you have a really huge data set and you just want to-- yeah, if you have a huge data set, then the number of examples that you have from a particular class will be somewhat small. And the likelihood of actually an anchor and a negative being very similar to each other will also be small. Yeah. Does the choice of the embedding space in the distance metric make a big difference to the results here? You might imagine that if you embed in some non-Euclidean space, you can get different combinations of distances. And I'm wondering if that's helpful. So you're asking, does the distance function here, is that important? Or you're asking something different? Not so much. Like, how helpful it is to do that? Yeah, I think that there's really two common choices. One is to use Euclidean, and another is to use basically negative cosine similarity. To my knowledge, it's not critical. And you probably want to consider both of those two options. But beyond that, I don't think that people get super creative. Yeah. [INAUDIBLE] Yeah, so one thing that's different here is that these positives and negatives, they're actually quite different from a class label. And they may not correspond exactly to your class labels. They're going to instead correspond to some other kind of notion of relatedness and so forth. Sorry, what was your question? So if we're given an example [INAUDIBLE],, how do we know [INAUDIBLE]? Yeah, so I'll go back to the slide that I had here. So augmented versions are the top right. Beyond that, there's-- I have two other examples here. One is to use image patches or to use basically-- in this case, they actually often use overlapping image patches from the same image. Or if you have a video, you can use nearby frames. And these are also somewhat popular choices as well. In computer vision, augmented versions is definitely the most popular example of this. 
But using things like basically things that are nearby in some space, either in image space or in time or something else is also very popular. I guess to also provide some intuition for that, for example, with video frames, the intuition there is that things that co-occur in time will be related to one another. But it does end up being fairly data dependent. And if your data-- if your data for whatever reason doesn't obey that, like if you have videos that are constantly changing over time and are more random in terms of their sequence, then that may yield representations that aren't as good as something that has a little bit more temporal coherence. Yeah. [INAUDIBLE] Is it possible to stretch [INAUDIBLE]?? Yeah, so you can view augmentations essentially as a form of hyperparameter or as a form of domain knowledge that's going into the algorithm. And the choice of augmentation is actually really, really important to actually how the performance ends up-- of these algorithms ends up being. There's actually a lot of literature on different forms of augmentations that work well often for images. And there's also some work outside of computer vision domains that look at other augmentations as well. But it does end up being quite important. And so if you look at, for example, the SimCLR paper, I know that they included a study of different kinds of-- basically, how well different kinds of augmentations work. [INAUDIBLE] There is actually a paper that discovers augmentations. And I'll cover it later in the lecture. Cool. So we've talked about the triplet loss. And so that's what we went over on the board. And this is really the simplest form of contrastive loss. And it actually works pretty well. And yeah, it's pretty nice, a good place to start. Compared to something like Siamese networks, which we talked about on Wednesday, it's actually extremely similar to Siamese networks, especially if you're using class labels as your positives and negatives. And you could essentially think of it as Siamese networks from the standpoint of if this distance in your embedding space is small, then classify the examples as being the same class. Otherwise, classify them as being in a different class. Really, the key difference between this triplet loss and Siamese networks is that when you use this triplet loss to learn a representation, you're going to be learning this kind of metric space, this representation space from which you can measure the distance between examples. Whereas the Siamese networks were only just learning a classifier. And it was just giving you the probability that they were correct or not rather than a representation where you can measure these kinds of distances. Cool. Now, one thing that comes up with this loss function is that choosing good negatives is difficult. And in particular, if you're in your embedding space and you sample a negative that's already very far away, then you're just going to have 0 loss at that space. And you won't actually be learning anything from that negative. And so you really want to try to find negatives that are difficult. One thing that you could do is what's called hard negative mining where you explicitly search for negatives that are closer and use those negatives to actually allow it to continue to learn and continue to getting good gradients from this loss function. But we can also just basically-- essentially, instead of just sampling one negative, we can sample multiple negatives and incorporate that into the loss function as well. 
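As a concrete reference, here is a minimal PyTorch sketch of the triplet loss with a margin that was just described on the board. This is an illustrative sketch rather than code from any particular paper: the names (triplet_loss, f_theta, margin) are made up, it assumes f_theta is an encoder module mapping a batch of inputs to embedding vectors, and it uses squared Euclidean distance, with negative cosine similarity being the other common choice.

import torch

def triplet_loss(f_theta, x, x_pos, x_neg, margin=1.0):
    # Embed the anchor, positive, and negative with the shared encoder.
    z, z_pos, z_neg = f_theta(x), f_theta(x_pos), f_theta(x_neg)
    # Squared Euclidean distances in the embedding space.
    d_pos = ((z - z_pos) ** 2).sum(dim=-1)
    d_neg = ((z - z_neg) ** 2).sum(dim=-1)
    # Hinge with margin: once the negative is at least `margin` farther away
    # than the positive, the triplet contributes zero loss, so the term
    # pushing negatives apart is bounded.
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

Here torch.clamp(..., min=0.0) is the max with zero from the board, and training just means sampling a mini batch of such triplets and taking gradient steps on this loss with respect to the encoder parameters.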
And sampling multiple negatives gets at the question that was asked before, which is, what if I don't want to contrast against one thing? What if I want to contrast against multiple different things? And so the second version of the loss function that we'll look at-- the second version of the loss function is something that is going to do more of an n-way classification rather than these binary comparisons. And so instead of thinking about just having one negative, we're also going to think about having multiple other negatives. And essentially, what we're going to want to be able to do is classify among the turquoise and pink and yellow and salmon-colored dots which of those is the positive and which one is the negative-- or which one is a negative. And so if we want to perform that kind of classification, it's going to look a lot like a softmax. So we can measure the distance between the positive and the anchor and the negative and the anchor-- or the negatives and the anchor. And the probability that an example will be the positive will be something like e to the negative distance between x and x plus, divided by basically a softmax normalizer. So we're just going to exponentiate and normalize these distances, where we are dividing by the distances between the anchor and each of the negatives. And so this basically gives you the probability that x plus is a positive example, rather than being a negative example. And when you actually then compute the loss for this-- oh, actually, sorry. These should technically, I guess, be f of x. Or equivalently, you could write these as d of z and z plus and d of z and z minus. So d will just correspond to either the Euclidean distance from before or something like negative cosine similarity. And then what the loss function looks like is-- this is a probability. We'll just minimize the negative log probability. So we'll take the log of this and minimize this with respect to theta. Yeah. Is there a sum over the negative examples? [INAUDIBLE]? Oh, yeah, sorry. This is a sum over the negative examples. And so yeah, great catch. So this is over n. And then this is z minus n. [INAUDIBLE] Yeah, so we need to know-- during training, we need to know what the positive and negative examples are. Yeah. [INAUDIBLE] We don't a priori know how good of a negative they are. But this loss function will basically take into account all of them. Yeah. Could we also sum over all the negative examples using the triplet loss? Yeah, so the question was, can we also sum over all the negatives using the triplet loss? This actually ends up being very, very similar to doing that. So if you actually write this out with the log, the numerator will become log of e to the negative distance-- or negative log of e to the negative distance. And so this actually just becomes d of z and z plus. And then with the denominator, you get something like plus log of the sum of e to the negative distance of z and z minus. And so you have a log-sum-exp up here. But if you basically think of the log and the e canceling out, this is basically just like summing up the distances there. Yeah. So I didn't get that. Can you basically explain what you're trying to achieve here? Yeah, so the question was just-- the question was, can we just basically take the triplet loss and just add up all the negatives? And what I was simply saying is that this loss function is actually already very similar to the triplet loss but where you sum over the negatives here.
And so the question is, why don't we just do something like this? And this loss is actually already doing something a lot like that, except instead of a plain sum, we're going to do a log-sum-exp. [INAUDIBLE] Oh, sorry, yeah, what is the loss function doing itself? So yeah, there's a few different intuitive ways to think about this loss function. One is very similar to the triplet loss, which is that it's pulling together the positive and the anchor, and pushing apart or maximizing the distance between the anchor and the negatives. The second intuition is that you can think of it as basically classifying whether or not an example is a positive example or a negative example given the anchor. And so this looks a lot like a softmax where your logits correspond to this negative distance between your example and the anchor. Yeah. I think I still just have a hard time grasping why the top is the positive and the bottom is some other negative examples-- sorry, for this loss function, it says z plus on the top. But [INAUDIBLE] negative examples for the bottom part. And I'm not sure why that's the case. Yeah, so-- well, so I guess one other form of this loss function that I can mention is one where you sum over all the examples on the bottom. And so you also add something where the positive example also comes on the bottom. I don't know if that makes more sense to you. So yeah, this is also a version that you can do. This version actually, I think, is what is done in this first paper. Whereas in the second paper, they actually didn't include this. And I think that the intuition perhaps for not including it is that you really only want to push apart the-- you really only want to push apart the anchor and the negatives. You don't really want to push apart the anchor and the positive. And this denominator is basically pushing them apart. Yeah. I'm sorry [INAUDIBLE]. Do we have any knowledge about the class of z? The question was, do we have any knowledge about the class of z? In the unlabeled case, you don't have any knowledge about the class of any of the examples. Then why would we want it to be closer to z plus rather than z minus? Yeah, so the question is, why would we then want the anchor to be closer to z plus versus z minus? This comes down to how we sample positives and negatives. And so on the next slide, I'll talk about how we sample it. Yeah. Going back to hard negative mining, could you interpret that as an adversarial paradigm where you're trying to find the hardest examples [INAUDIBLE]? Yeah, so the question is, can we interpret hard negative mining as a form of adversarial loss? And yeah, exactly. So basically, the way that it would look in the case of the triplet loss, for example, is you find the negatives that maximize this loss function rather than minimize it, and specifically pick those. And so in some ways, it is a form of an adversarial-- or a form of an adversary. Cool. So here's the loss function just kind of written out on the board. I talked a little bit about how this is in some ways a generalization of the triplet loss to multiple negatives. Now, let's actually walk through one algorithm that explicitly walks through also how we sample the positives and negatives. And I think that that should help clear up some of the questions that have been asked. So the input to this algorithm is just a set of examples x, just unlabeled examples.
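Before walking through SimCLR itself, here is one possible PyTorch sketch of this multi-negative softmax loss. Again, this is an illustrative sketch with made-up names rather than any paper's reference code: it assumes the anchors, positives, and negatives have already been embedded, uses squared Euclidean distance, and follows the variant just mentioned where the positive also appears in the denominator, since that makes it exactly a cross-entropy over one positive and N negatives.

import torch
import torch.nn.functional as F

def multi_negative_loss(z, z_pos, z_negs):
    # z: (B, D) anchors, z_pos: (B, D) positives, z_negs: (B, N, D) negatives.
    d_pos = ((z - z_pos) ** 2).sum(dim=-1)                     # (B,)
    d_negs = ((z.unsqueeze(1) - z_negs) ** 2).sum(dim=-1)      # (B, N)
    # Logits are negative distances, with the positive placed in column 0.
    logits = torch.cat([-d_pos.unsqueeze(1), -d_negs], dim=1)  # (B, 1 + N)
    labels = torch.zeros(z.shape[0], dtype=torch.long, device=z.device)
    # Cross-entropy here is the negative log of the softmax probability
    # that the positive is the closest example to the anchor.
    return F.cross_entropy(logits, labels)

Swapping the squared Euclidean distance for negative cosine similarity with a temperature gives something very close to the loss used in the SimCLR algorithm described next.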
And so what the SimCLR algorithm does is it samples a mini batch of those examples. So it samples n of those examples. And to generate positives, it's going to augment those examples with some augmentation function. And so here, we're sampling some images, which are our mini batch of examples. And we're going to augment each of those examples twice to get x tilde and x tilde prime. And so, for example, the augmentations here correspond to changing the color, cropping in different ways, and distorting or flipping and so forth. And so, for example, we take our example. Then we augment it in two different ways and do that for every example in our mini batch. Then once we have these augmentations, we're going to embed our augmented examples into our embedding space. So this corresponds to just running each of these augmented examples through our encoder f theta. And then from there, the positives are going to correspond to augmentations of the same image and the negatives are going to correspond to augmentations of different images. And so the things that we're going to try to bring together are these augmentations at the same image. And the things that we're going to try to push apart are basically everything else in the batch. So this is one way to generate positives and negatives. There's also other ways to generate positives and negatives that we'll talk about on a future slide. And so intuitively, we don't know the classes of these images. But we do know that if we design good augmentations, then the augmented versions-- these are actually very different images. But they have the same class because they're generated from the same image. And likewise, these have different classes and so forth. As was mentioned before, it's possible that you could also sample a chair in this batch. And you might end up pushing apart other chairs. That's OK as long as the-- as long as not every example you're pushing apart is also a chair. You could also think of this as doing something a form of more fine-grained classification where you're basically trying to discriminate different instances rather than different classes. Did that answer your question? Yeah. So eventually, this kind of thing will lead to an embedding space where even if there are two in the class that's in the chair and if we had two different images of chairs-- so they might be separate. But the relative distance within two chairs would be lesser than the relative distance between the chair and the dog? Yeah, exactly. So the result of this algorithm should be an embedding space such that chairs are closer to each other than chairs and dogs. And part of that also relies on your augmentation function. And ideally, your augmentation function will generate other things that look like chairs. And so by pulling together those things, it will make it more difficult for the network to push apart an example of a different chair because you have-- because you have to have these have a similar representation. Yeah. Could you explain the choice why to augment twice and train only on augmented examples, as opposed to augmenting once and using your original data? Yeah, so you could also just augment once, and basically use the unaugmented example as the anchor, and the augmented example as the positive. One thing that this does give you is it gives you more negatives and more anchors and more positives. 
And so if you have a good augmentation function, I think that augmenting twice and using augmented examples both as anchors and as positives and negatives should work well. I would also expect that if you include some of the original examples as positives, anchors, and so forth, or basically have the identity function be part of your augmentation class, that would also be a very reasonable choice. Yeah. If it is not part of the augmentation class, you could say, isn't the natural [INAUDIBLE] distribution for the [INAUDIBLE], kind of, because it's only been [INAUDIBLE]? Because it's something quite different from the natural. Yeah. So the question was, if the identity function isn't in your augmentation class, then will these images be out of distribution in comparison to these images? In general, these augmentation classes are designed such that they are exhaustive and cover a much wider space than the original space, and such that they aren't completely disjoint from the original space. And so, in practice, that ends up not being an issue if you design your augmentations well. But it does mean that when you design your augmentations, you shouldn't design them to basically be disjoint from your original space. Yeah. So we're not embedding the original examples at all? Yeah, in this case, we're not actually-- well, if identity is included in your augmentation class, then we are. But if it's not, then we're not actually doing that. Yeah. So if this is completely unsupervised, then can't we accidentally sample two dogs instead of a dog and a chair? Yeah, so if this was completely unsupervised, can't we accidentally sample two dogs rather than a dog and a chair? And yes. This relies on the fact that sampling two dogs is less likely than sampling a dog and something else. It also relies on the fact that when you augment, you'll create things that you have to push together. And that will make it harder to push apart two dogs than it is to push apart a chair and a dog. Cool. And so just to finish out the algorithm, we talked about attracting and repelling. Attracting augmentations of the same image, and repelling augmentations of different images. What this ends up looking like is, once you embed into your z-space, then-- in this case, they're using cosine similarity to compute the distances between all of the different pairs of augmented examples. And what the loss function ends up looking like is basically exactly what we have written on the board here, where we take the distance between the two augmented versions of the same image. So that's going to be z and z plus; here it's written as z tilde and z prime tilde. So the i is the same, and that means that they're from the same original image. And then the denominator is over examples that are of different initial images. And so those are an augmentation of one image versus an augmentation of another image. Yeah. Doesn't this loss function kind of assume a sort of balance between classes? Because if there's a huge imbalance this might not work. Yeah, so the question is, does this algorithm assume that your unlabeled data is balanced? And in particular, if it's unbalanced and you have a ton of dogs and only a small number of chairs, then maybe it wouldn't work well. That's a great question. I don't know of any-- I don't know of any works off the top of my head that have analyzed that.
Although I vaguely remember some works showing that in general these algorithms can work much less well if your data is not balanced. And this is actually a really important thing to keep in mind. Because data sets like ImageNet are balanced because we know the labels of the data set, or at least they're more balanced because we know the labels. But in cases where we don't know the labels, we have no way of telling if they're balanced or not, because we don't know the labels. And so in general, when putting these algorithms into practice, it's important to keep that in mind. If you want, I can also potentially try to point you to things that have specifically looked at unsupervised learning with imbalanced data. Cool. And so you apply this process iteratively to update your encoder. And once you have your encoder, you can then either train a classifier on top of that representation, or fine tune the entire network. One other design choice I'll mention-- it's not kind of written in the equations. But the SimCLR paper found it pretty helpful to actually use the representation right here as your pre-trained representation, rather than the one right here. And that means that there's this kind of additional projection head that's taking the representation and projecting it into another space, and then doing the compare and contrast in that other space. And they found that this made performance somewhat better. And you could imagine that it gives a little bit more flexibility to the network. Although it also introduces additional hyperparameters in determining where this representation should be in the network. Yeah. On the loss function, I see a similar, the two samples. And if [INAUDIBLE] come from a different class of group because, I'm thinking, for example, an instance, suppose [INAUDIBLE] maybe in the first and second, both are for a similar cat, right. But in this formula, they say once you, I mean, make the distance larger between these two. So the denominator here is always examples that have-- always augmentations of different examples, although they may be examples of the same class, which we had mentioned before. And ideally, that happens with lower probability than examples of different classes. Yeah. [INAUDIBLE] before the final, they're actually going to make an adequate performance for this algorithm. So this UNet, can a UNet perform better for this contrastive algorithm? So the question is, can a UNet perform better for these contrastive algorithms? [INAUDIBLE] So we'll talk more about architectures that reconstruct the input in the Wednesday lecture. I don't think you necessarily need something like a UNet here, because this representation doesn't need to be this very large image. It can still be much lower dimensional-- You could try [INAUDIBLE]. Yeah, you can also imagine having like skip connections here, for example. Yeah, I think that you could imagine doing something like that, I guess. I don't know of any papers that do that. But my intuition would say that that could be bad from the standpoint that it could just completely ignore this part and just use the information in those skip connections to do the contrastive learning. But that's somewhat speculative. And I don't know of any papers that do that. Cool, so how well do these algorithms actually work for learning representations? So here are some results from the paper, from the algorithm that we just went over.
And here they're comparing to ImageNet classification, the top five accuracy. And they're looking at if you only use 1% of the ImageNet labels or 10% of the ImageNet labels. And here, 1% corresponds to about 12.8 images per class. And 10% corresponds to about 128 images per class. And so once you get down to 1%, you're almost in few-shot learning regime, or possibly, you can consider that in the few-shot learning regime, if you kind of have 10 to 15 examples per class. And we're comparing to a baseline that only trains with supervised learning on those labeled examples. As well as some semi-supervised learning methods and some other representational learning methods. And the results show that first you get really substantial improvements over just supervised training from scratch. You go from-- in the 1% case, from 48% accuracy to 85% accuracy. Which is pretty significant. And you also see pretty significant improvements over other semi-supervised and unsupervised methods. We see this kind of especially as being the case in the 1% label setting. So for example, in comparison to, well, CPC, which is another contrastive method, we see about a 7% performance improvement. Compared to BigBiGAN, you see a 30% improvement, and so forth. These methods work pretty well. And overall, 85% accuracy on ImageNet is, I feel like, not too shabby. And it's nice that we can get that while using only 1% of the labels in the data set. Yeah. [INAUDIBLE] Oh, yeah, what does the 2x and 4x mean? So ResNet-50 is the standard ResNet-50 architecture. 2x means that the width of the hidden layers is 2x larger. And 4x means it's 4x larger. So it's just a larger network. And we see that it does better with larger networks. But what about when using 10% of the labels? What about when using 10% of the labels? [INAUDIBLE] Yeah, so we see that the performance, it also does quite well in the 10% setting. The performance improvements are somewhat smaller. And there's a semi supervised method that gets 91.2, whereas this gets 92.6. But it is still kind of the best method compared to the methods that were state-of-the-art in 2020. You're saying that you only get 12.8 labeled images, but you also get other images that are unlabeled. Or this is only 12.8 images? Sorry, yeah. We're using the entire ImageNet data set as unlabeled. And then they're only using 1% of the labels. Yeah. [INAUDIBLE] So now we're trying to pre train the network, and we're also training to the same images that we pre-trained here, so-- Yeah, so the question is are the representations useful beyond just ImageNet classification? Do the representations generalize well? The short answer is they do generalize to some degree, and we'll see some results towards the end of the lecture on that. Cool. And then there's one other experiment that I think is actually quite important in the SimCLR paper that was looking at not just performance, but how performance varied with the number of training epochs and the batch size. And the results are here. So the x-axis is the number of training epochs, or how long you're training for. And the different bars within each of those correspond to the batch size. With the blue on the left being the smallest batch size of 256, which is still a decently large batch. And the bar on the far right being a batch size of 8,192. And it's important to train for 600 plus epochs. I think this isn't too surprising in unsupervised learning settings because you need to learn a lot about the data. 
One thing that I think is perhaps more important to note here is that it requires a large batch size. So if you train with a batch size of 256, that does a lot worse than if you train with a batch size of 1,000, for example. And the differences can be-- if you train for a really long time, that difference is only 2%. If you train for a short amount of time, that difference can be more than 5%. Yeah. Any intuition on why it needs a large batch size? Yeah, so why does it need a large batch size? So we'll talk about this both intuitively and more mathematically. So one way to interpret the contrastive loss function is that basically you're trying to classify-- classify, kind of, one image from all the other images in the data set. And you can sort of think of the denominator of this loss function as trying to sum over all the other examples in the data set. You're trying to classify between one example and everything else. And in the summation right here, the examples, the negatives, that have the smallest distance are really going to dominate this sum. Because we're exponentiating here, something that has a distance of, say, 0.01 versus a distance of 10-- because we're exponentiating this, this is going to be like e to the negative 0.01 versus e to the negative 10. The first one is a much, much larger number. And so it is going to play a much larger role in the summation right here. And this means that because these examples with a really small distance are dominating, if you're subsampling and having a much smaller batch size, then you might miss the examples that actually dominate that loss function. And so you're not actually going to get a very good estimate of your loss function if you're missing the examples that actually play the largest role in the summation. So that's part of the intuition. And then more mathematically, if we want to think about this, let's think about whiteboard space. So I guess this is-- OK. So this is our loss function right here. I'm going to also remove this, just because this is what-- or this is what SimCLR does. So this is our loss function. You can essentially think of what we do when we subsample as sort of bringing the summation outside of the log. And I guess specifically, maybe one thing to first note is that in normal supervised learning, we subsample all the time. It's not a problem. And it's totally fine to have small batches. And the reason for that is our kind of typical loss function might be something like the log probability of y given x, or a summation of this over x comma y. And when this is our loss function, if we subsample and sample a smaller batch of x and y, we'll still, in expectation, get the correct gradient. Now things are a little bit different when we actually try to subsample from this summation right here. And in particular, if we write out this loss function, it ends up looking like, first, the distance between z and z plus. This is all fine. Plus the log of the sum over n of e to the negative d of z and z n minus. And the challenge here is that right now this summation is inside of the log. And essentially, when we sample mini batches, what we're doing is we are saying that, well, maybe this is approximately equal to d of z and z plus, plus, summing over our mini batches, the log of the sum over n in our mini batch of e to the negative d of z and z n. So this is really what we're optimizing when we do mini batch sampling.
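Writing out the two objectives from the board a little more carefully-- this is a reconstruction of the spoken math, with N the total number of negatives and B the sampled mini batch:

L(\theta) = d(z, z^+) + \log \sum_{n=1}^{N} \exp\big(-d(z, z_n^-)\big)
\;\;\approx\;\;
d(z, z^+) + \log \sum_{n \in B} \exp\big(-d(z, z_n^-)\big)

Since each exponential term is positive, the sum over only the mini batch B is never larger than the sum over all N negatives, so the mini batch version is a lower bound on the full objective-- which is one way to see the issue that Jensen's inequality is about to be used to describe.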
Because instead of sampling all of the negatives in our entire training data set, we're sampling a mini batch of them. And now there's a question of what actually happens-- what's the relationship between these two equations? And there's something called Jensen's Inequality, that's actually super useful. And one of the things that Jensen's Inequality can tell us is it tells us that log sum of x-- I always forget which way it goes-- is greater than or equal to the summation of log x. We write this. And so what this means is that it means that when we take the summation on the outside, that gives us a lower bound compared to when it's on the inside. And so that means that this approximation right here, when we actually are doing this sort of mini batch sampling, we're getting a lower bound on our original objective. And that means that when we actually minimize this objective we're actually minimizing a lower bound on our original objective. And it's not good to minimize lower bounds on things because then you may not actually be minimizing your original objective. And so that's, I guess, some additional intuition for why having a larger batch size is helpful. Because, basically, we're not actually getting a bound on our objective. And so if we sample something closer to our original data set, then we're actually going to come closer to optimizing our original objective. Any questions on that? Yeah. Could we also control somehow how close is-- how tight is the bound? The question is, is there a way to control how tight the bound is? I mean, you can-- one thing that you could do is just sample your whole data set rather than sample mini batches, although that's sort of what we were trying to get away from. Or to sample larger batches. So, I mean, that's kind of why larger batches make sense. I don't know of any way to make it tighter. But if you do, let me know. Cool. So yeah. The kind of summary is that in normal mini batch, or normal mini batch supervised learning, the summation is already on the outside. And we're all good and so we can estimate this with mini batching. When the summation is on the inside of a log, then mini batching that is not really the correct thing to do. But we do it anyway because that's what deep learning is like. And then I also wanted to go through solutions to requiring a large batch size. So one thing that you could do is instead of trying to sample your entire data set at every single iteration, you can basically store the representations from the previous batches with something that looks a lot like momentum. This is not exactly correct because your encoder is going to be changing over the course of training. And so if you store those previous representations, they're going to be a little bit stale. But it allows you to somewhat decouple the batch size from this estimate. It allows you to get away with smaller batch sizes because you're basically accumulating batches over multiple iterations. This is called momentum contrast, or MoCo. And they were able to get good results with a mini batch size of 256. Another thing that you can do that was proposed in the literature, is instead of having any negatives whatsoever, simply try to predict the representation of your image under a different augmentation. And so it's kind of more of a predictive method. It's called Bootstrap Your Own Latent. It actually doesn't require any negatives. It just requires examples of positives. 
And when they plotted performance with respect to batch size, they found that it dropped off much less, especially up until 256, in comparison to the SimCLR method. And so it's more resilient to batch size, although it's something that also doesn't have some of the kind of nice contrastive interpretation that we've talked about so far. And then overall, in terms of the performance of these methods, if you actually look at how good they are in terms of self-supervised learning for ImageNet accuracy, we see that some of the papers that we covered were from a couple of years ago. But they really remain near state-of-the-art for self-supervised pre-training. And in particular, you can see MoCo V3 is very close to the state-of-the-art. I haven't actually looked at what exactly is the state-of-the-art. So I'm not sure if it may also be a contrastive method or maybe something that's a little bit different. Yeah. [INAUDIBLE] So can't we just try to get some of [INAUDIBLE]? Can we just take the logs on these [INAUDIBLE] say that it is under expectation? Wouldn't that mean much the same thing? So you're saying, why do we have this loss function in the first place? Why not have this sum on the outside of the log or-- [INAUDIBLE], then it could be for one of these loss functions. That is correct. If you move the sum outside of the log, then it becomes a lower bound, according to Jensen's Inequality. So if the sum is on the outside, it's a lower bound in comparison to if it's on the inside. [INAUDIBLE] Yeah. And unfortunately, sometimes I wish Jensen's Inequality went the other way. But, yeah. Yeah. [INAUDIBLE] But would this [INAUDIBLE] approach be applicable when we have multiple objects in the image [INAUDIBLE]? Yeah, so the question was, a lot of ImageNet images just have one example or one object in the image. And would this work if you have multiple objects in an image? I guess I don't know of any specific detailed experimental studies on that. But I'll show a couple of instances where contrastive learning has been used in settings outside of ImageNet, and that might provide some answer to your question. Cool. And so I guess moving towards that to some degree, these methods have been really good for visual categorization, things like ImageNet. And one challenge that comes up is that we've mostly been focusing on augmentations. And there's a lot of scenarios where we don't have a good hand-engineered augmentation function that we can use for these methods. And even in those cases, the general framework of contrastive learning can still be very useful. And so kind of alluding to something that came up earlier, you can actually try to learn the augmentation function. And this is a really cool paper that actually basically formulated an adversarial optimization where, for your augmentation function, you tried to optimize it in a way that maximized the contrastive loss. Of course, if you do this in a completely unbounded space, then it will just give you arbitrary images out. But what you can do is you can say that the augmentation function can only change the image by a small amount, within some L1 sphere. And this paper was actually competitive with SimCLR on image data, which means that it was able to find augmentations that were as good as the hand-engineered augmentations. And it was also able to get good results on domains that aren't images, so domains like speech data and sensor data. Cool.
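To make the learned-augmentation idea a bit more concrete, here is a rough PyTorch sketch of the general recipe-- not the specific method from the paper just mentioned. The names and hyperparameter values are made up; it assumes an encoder f_theta, a contrastive loss like the one sketched earlier, precomputed negative embeddings, and, for simplicity, an element-wise bound on the perturbation rather than that paper's exact norm constraint.

import torch

def adversarial_view(f_theta, contrastive_loss, x, z_negs,
                     step_size=0.01, budget=0.05, steps=3):
    # Find a small perturbation delta that makes the contrastive loss as
    # large as possible, i.e. a "hard" positive view of x.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = contrastive_loss(f_theta(x), f_theta(x + delta), z_negs)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()   # ascend on the contrastive loss
            delta.clamp_(-budget, budget)      # keep the view close to the original
    return (x + delta).detach()                # use as the positive view in training

The encoder is then trained as usual, but with this learned view standing in for a hand-designed augmentation.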
And then one other thing that we had mentioned earlier is that instead of using augmentations, our positives could basically be frames in a video that are nearby in time. And the negatives could be things that are further away in time, or from other videos. And there have been a number of papers that have done something like this, and they've been able to get good results on tasks like robotics tasks. And so this paper takes, basically, a data set with a ton of diverse videos of humans and does this sort of time-contrastive learning, where you pull together the representations of frames that are nearby in time and push apart the representations of frames that are further apart. And there was also an additional loss that sort of did a form of contrastive learning between videos and language. And it found that if you use this to pre-train a representation, you can give a robot 20 demonstrations, or less than 10 minutes of supervision, and get a policy that looks something like this, which gets around a 60% success rate on putting lettuce in a pan, and a 40% success rate on folding a towel. And then the last thing I will highlight-- this isn't really fully self-supervised, but you can also apply contrastive learning between images and text. And what this looks like is you learn a representation of the images, and you learn a representation of the text. If you have data that basically kind of captions an image, then you can tell it that matching captions and images should go together, and that images and other captions should be pushed apart. And so this is the key idea behind a model called CLIP. And it's actually a really performant model, and it gives you very useful representations. It also can give you very good zero-shot classification performance. So on ImageNet, it was able to give you ImageNet accuracy that matches a supervised ResNet. But even more interestingly, if you give it images that don't look anything like ImageNet, like sketches or the more adversarial ImageNet data sets, it's able to get performance that's much higher than something that is trained in a supervised fashion on ImageNet. Yeah. Could this be because they had massive amounts of data? Yeah, so are these results because of the magic of contrastive learning, or is it because of the data set? Certainly the data set plays a huge role here. And the diversity of the data set that it's given will help it be able to do well on these kinds of things. That said, there are a number of works that have kind of tried to ablate the role of self-supervised learning versus the data set. And they found that things that are more self-supervised, or more similar to something like CLIP, do better than something that's trained in a purely supervised fashion. Cool. So to summarize contrastive learning, it's a very general and effective framework. And we've seen how it can be used to compare and contrast lots of different things. One thing that's nice about it is that you only need an encoder, f theta of x. You don't need a generative model of your data. And this means that you can probably get away with a smaller model than if you had used a generative model. The other thing that can be viewed as a pro is that if you have some domain information, like augmentations, that can be incorporated into the algorithm to generate positives. The challenge is that negatives can be hard to select. And as a result, it often requires a large batch size.
And so this is a little bit the other side of the coin with respect to the generative modeling not being required. You can usually get away with a smaller model, but you need a larger batch size oftentimes. And so you might still need a large amount of GPU memory, for example, to train these kinds of models. And then the other challenge is that it has been most successful with augmentations, like we talked about. And there might be domains where you don't have augmentations available, and there are a couple of solutions for that. But it's something that is still an open area of research. Yeah. So if you don't need the modeling part, is contrastive learning sort of a good pre-training regime for generative modeling? The question is, is contrastive learning a good pre-training scheme for generative modeling? I guess I often view generative modeling as something that you do with data that's unlabeled. And so one of the nice things about unsupervised training is you can get a lot of juice out of unlabeled data. And so if your task is already something that can be done with unlabeled data, like generative modeling, then it may be that you don't need a good pre-training method because, maybe, you already have a lot of data that you can use. I guess more specifically, I don't know of any works that pre-trained with contrastive learning and then fine tuned on a generative modeling task. But it's possible that there are some works out there. Yeah. Can this be combined with fine tuning if, for example, you have a multi-label problem? So different examples can either be in the same class or not, depending on the label that you're interested in. Can this be combined with fine tuning? So we talked about learning representations. And then you do often fine tune the representation. You often put, like, some sort of classifier on top of the representation, and then you can fine tune end-to-end. Are you thinking about something other than that? I guess I'm thinking about the way that you would generate the embedding, and whether you want to change that, you know. Yeah. So you could also imagine fine tuning with the contrastive loss, similar to how things like prototypical networks are doing classification with this kind of-- something that looks very similar to a classification loss. And so you could fine tune with something like that. Yeah. So you could certainly do something like that. Definitely the default that I've seen the vast majority of work do is to either freeze the representation and put something on top of that, or additionally fine tune end-to-end. But I could certainly imagine that you could fine tune with the contrastive loss as well. And if you can fit your loss function in the form of something like a contrastive loss, you may actually do better than that other approach. Cool. Then the last thing I'd like to talk about in the last 7 minutes is how this relates to some of the meta-learning algorithms that we have talked about in class. And as you might have guessed, some of these equations look a lot like the equations that we saw in the non-parametric few-shot learning lecture. And you can, in many ways, formulate an algorithm, a meta-learning approach, that looks a lot like the contrastive approaches we've talked about today. And so if you're given an unlabeled data set, say you're doing something like SimCLR, where you generate positives and negatives by augmenting, then you can basically create a labeled data set where you create an image class by augmenting your example.
And you create multiple examples from that class by augmenting it multiple times. That will give you a data set for that image class. And then to create a task, what you'll do is you'll want to be able to classify between different image classes. Now once you have this labelled data set, you can create tasks that actually just run your favorite meta-learning algorithm on this data. And it turns out that there's actually a very kind of close mathematical relationship between doing something like this with an algorithm like prototypical networks, and the SimCLR algorithm that we talked about in lecture. This paper in the bottom right goes in depth on that. But there are really two key differences. And they're relatively small differences in my mind. The first is that SimCLR sample is only one classification task per mini batch, and usually meta learning algorithms will sample a mini batch of tasks. And the second difference is that SimCLR is really a look at all pairs of negative examples, or all pairs of examples in your batch. Whereas meta-learning only compares the query examples to the support examples. And never compares and contrasts, query examples with other query examples. So in this way, you could perhaps view the SimCLR class as being a little bit-- the SimCLR loss as being a little bit more efficient with its batch because it's going to compare and contrast everything in the batch. But otherwise, these algorithms end up being extremely similar. And they also end up doing something extremely similar in practice as well. So if you-- here's just one experiment in that paper. You take an unlabeled data set. I think they took the ImageNet unlabeled data set. They augmented with the SimCLR augmentations, and compared using SimCLR versus using this approach with prototypical networks and R2-D2. Where R2-D2 is an optimization-based meta-learner that has a number of kind of bells and whistles to it. And you see that when you then kind of pre-train these representations on ImageNet, and then fine tune them on these other image classification problems, you see extremely similar performance between SimCLR and prototypical networks. You also see very similar performance between SimCLR and R2-D2. And in some cases, R2-D2 is able to do a little bit better as well. Yeah. So far meta-learning method you see is mostly few-shot. But this problem is a zero-shot problem, how does that- Yeah, so this is actually not a zero-shot problem. So it's pre-training representations with this approach, and then fine tuning on the entire data set from-- the entire training data set from these data sets. And so the other difference that I sort of didn't mention on this slide is-- on the previous slide-- is that meta-training and the training time is very similar. But what happens at test time is actually a little bit different. So typically in few-shot learning, we'll give it a few examples, embed them, and make a classification. And actually, in this case, at test time what they're doing is they're learning the representation with meta-learning. And then they're just fine tuning the whole network. And that's a little bit different from what we've been doing in the previous lectures. Yeah, so they're really just comparing the representations that were learned. Rather than the specific few-shot learning approach that happens at test time. Yeah. If I imagine the path sampling, I think the meta-learning one, they would have probably more samples instead of one sample. 
[INAUDIBLE] smaller number of samples per anchor. So how does the optimization compare in terms of [INAUDIBLE]? Yeah, so the question was, how do we-- in meta-learning, like with n and k, you choose n and k, and usually you choose an n that's maybe smaller than 256. At least, that's certainly what a lot of papers have done. I can't remember exactly what they did in this paper. Happy to follow up on it, or you could take a look at the paper. And I'm guessing that they probably used something similar to 256, and so maybe the values of those hyperparameters do actually affect performance. Yeah. So [INAUDIBLE] they do augmentation, right? So if you assume that for each class you augment that same image four or five times, it's like four- or five-shot training for that class, in a way. So I wanted to ask about mixing these two-- so like when you're training a supervised few-shot model, can we include contrastive learning into the training approaches [INAUDIBLE]? Yeah, so I guess, first, I do think that it's very similar to one-shot learning, because there's one positive [AUDIO OUT] anchor. And generally, one-shot learning is harder than five-shot learning. And so I would suspect that you get better representations from one-shot learning than five-shot learning. The other question is, can you use contrastive learning-- [INAUDIBLE] do the training, but we don't specifically do anything with support samples or query samples. Like, we just compare the stuff [INAUDIBLE]. But we don't do anything in training, in the support space or query space. Would just doing an unsupervised learner, introducing contrastive learning there, help the performance? Yeah, so prototypical networks do this sort of aggregation in the case where you have more than one shot. And the question is, if you do something like that and additionally include elements of this-- which I think would sort of be similar to training both one-shot and five-shot in a way, because one-shot is very similar to this-- does that help? And I could see that improving performance. And I think that the "Prototypical Networks" paper showed that, actually, sometimes one-shot training can be better than five-shot training when you test on five-shot, because you're training on something harder. You never actually get the information out of the support samples. Like, we don't train them in between, like in an unsupervised manner, and we also don't do anything with the query samples. Yeah. So if we could use that [INAUDIBLE] information [INAUDIBLE]. Right. Right. Yeah. So you could also use these augmentations in addition to the labels that we have in the meta-training data set. And that would be kind of like a semi-supervised meta-learning thing as well. And we'll probably talk about that a little bit on Monday next week. Cool. So to wrap up, we talked about contrastive learning. On Wednesday this week, we're going to be talking about another form of unsupervised pre-training, which is reconstruction-based methods. Yeah, so that's the rest of contrastive learning. As a couple of reminders, your project proposal is due on Wednesday, and the homework is due next Monday.
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_What_is_multitask_learning_I_2022_I_Lecture_1.txt
So today, we will jump into the goals of the course and the logistics of the course and also talk a little bit about what multi-task and meta-learning is and why we might want to study it. Before we get started, I want to make some introductions. So my name is Chelsea. I am the main instructor for the course. And we also have seven really awesome TAs. Great, so I love to welcome you to the course. And I guess as a first question to get a sense of where everyone's at, how are you doing? If you want to answer briefly, do you want to raise your hand and share how things are going. Yeah. So you're my first class. I went to the wrong class. That was interesting. Like half-- like five minutes through, the class started out and I see public policy when I was supposed to be in AI. OK, great, well, I'm glad you made it to the right class this time. Yeah. Anyone else? Yeah. I didn't realize classes were like 30 minutes away, so I left my dorm like 10 minutes for class. Cool. Cool. Good to figure everything out. Anyone else? OK, well, I guess one thing I wanted to share or say is that it's feels great to be closer to something that's a little bit more normal. And that's, Yeah, that's really awesome. I'm really excited about that. At the same time, it's not like every-- it's like there's lots of other stuff going on in the world right now that's not great. And so we acknowledge that and we try to set course policies that make give you a little bit of flexibility in the course. And Yeah, recognize that this course isn't the only thing going on in your lives. And so hopefully, those policies will help with that. Great, so we have a lot of information and resources about the course. Your first place to go is the course website. We put a lot of information here, so please read through it. And if you have any questions, make sure that you read through it first, but then feel free to post any questions that you have on Ed. Ed is connected to Canvas. And you can also reach out to the staff mailing list as well. We encourage you to post any questions that you have to Ed because that makes it so that other students can see your questions because other people probably have the same question as you. But you are allowed to make a private post on Ed, or email the staff mailing list, in cases where you don't want it to be shared with others in the class. For example, if you have an OAE letter, you can send this either to the staff mailing list or in a private Ed post. We also have office hours that are all posted on the course website. And the Zoom links are on Canvas and office hours will start on Wednesday. Cool. So what will you learn in this course? There's really three main things that we're going to hope you'll be able to learn in this course. The first is the foundations of modern deep learning methods for multi-task learning and generally learning across tasks. The second is not just learning about those methods but actually getting experience implementing them and working with them in code and in PyTorch, and trying to understand how these systems actually work in practice beyond just how they are supposed to work from lectures. And lastly, I also want to try to give you a glimpse of the process behind actually building these kinds of methods. I think that a lot of courses will present you knowledge and present you ideas as they are and not actually talk a little bit about the scientific process and the engineering process of arriving upon those ideas. 
And I'm hoping that maybe by giving a little bit of a glimpse into the process of building these algorithms and understanding these algorithms that will encourage you to not take them as them is, to challenge them and also to help you learn about the process of developing these kinds of algorithms in the first place. Cool. So along those lines, we'll cover a wide range of topics in the course. We'll start with the basics of multi-task learning and transfer learning. We'll move into three broad classes of meta-learning algorithms, including black-box approaches, optimization-based approaches, and metric learning. We'll also cover more advanced meta learning topics such as overfitting in meta-learning, unsupervised meta-learning, and Bayesian meta-learning methods before moving into other approaches for few-shot learning and adaptation including unsupervised pre-training and domain adaptation and domain generalization. All throughout this will be an emphasis on deep learning techniques. And we'll also study a number of different case studies in real applications. This includes things like multi-task learning in recommender systems like the recommendation system behind YouTube. Also meta-learning for land cover classification and education as well as few-shot learning in large language models. Now, one thing that's a little bit different from the last time we offered this course is these topics right here are all new to the course. And one thing that's different is we're not going to have any lectures or homeworks that cover reinforcement learning topics. And so essentially, we're removing the reinforcement learning topics from the previous quarters and adding in this new content, including few-shot learning with unsupervised pre-training, how this relates to foundation models, as well as domain adaptation and domain generalization. Now you might ask why are you removing the reinforcement learning content? What if I want to work on reinforcement learning? For that, we're introducing a new course in the spring quarter on deep reinforcement learning that I think will do a nice job of complementing some of the other reinforcement learning offerings on campus. Removing reinforcement learning will also make the course more accessible to people who don't have a lot of background in reinforcement learning. We found that in the previous quarters just one kind of refresher on reinforcement learning often wasn't enough to get to the more advanced topics. That said, if you're really excited about applying some of the ideas in the course to reinforcement learning topics, you can still explore that in the final project and get support from the course staff, many of whom also are very familiar and experts in reinforcement learning problems. Awesome. So for lectures, all the lectures are in person. They're also live streamed and recorded. And you'll have access to the recordings on Canvas. We're also going to have two guest lectures as well around the end of the quarter. And those are not fully sorted out yet, but we'll announce those shortly. I really encourage you to ask questions during the lecture. This serves a lot of different purposes. You can ask questions by raising your hand. You can also ask them by entering questions in the Zoom chat. And we a TA who will monitor the chat and make sure those questions get answered during the lecture. I find it really helpful when people ask questions because it helps me understand and gauge if you're understanding what I'm saying. 
And if you don't understand something, some concepts, or want to learn more, or I'm not covering something important, that's my fault. That's not your fault. And chances are other people have the same sort of misunderstanding. And so if you ask that question, that will help everyone in the course and help me help you basically. My goal is to help you learn the topics in the course rather than standing up and listening to myself speak. Office hours, we'll have a mix of in-person and remote. Mostly in-person but there will be two remote options, especially for SEPD students who are in the course. Great. And then our prerequisites, the main prerequisite is to have sufficient background and machine learning. So something like CS229, or equivalent, because we'll be building a lot on the basic concepts of machine learning. And we're not going to kind of cover topics like cross validation, training sets, and test sets, and so forth, or the basics of neural networks. All the assignments are going to require training neural networks in PyTorch. If you really hate PyTorch for whatever reason, you could implement the assignments in some other framework. But all the starter code is in PyTorch, and you'll be able to get more support, if you do everything in PyTorch. So we'd encourage you to do it in PyTorch. A few quarters ago, the course used TensorFlow instead and people seem to really like the switch to PyTorch. But hopefully, yeah, we provide some flexibility there. We're also going to have a PyTorch review session on Thursday at 4:30 PM in this room. And so if you want a refresher or get some of the kind of concepts in PyTorch that will be useful for the assignments, you can come to that review session. It will also be live streamed and recorded on Zoom as well. Cool. So digging in a little bit more into the content, we'll have a number of different assignments. The first assignment is just going over-- it's basically just a warm up to make sure you're familiar with PyTorch and going over some of the basics in multi-task learning. Then we'll start to go into our core assignments. The first one will be on black-box meta-learning and how do you set up the data to make few-shot learning with these black-box models more effective. The second will go into gradient based meta-learning and metric learning. Both these assignments will include kind of few-shot character recognition kinds of problems. And then the third homework will go into fine tuning pre-trained models with an emphasis on natural language processing and language models rather than image recognition. And then the last homework is going to be optional. And it's going to cover some more conceptual aspects of the course. So the first four homeworks are all going to be implementation based whereas this last homework will be just kind of solved on paper. The grading for the course is 50% homework, 50% project. The first homework is just a warm up, so it's only 5% of the grade. Whereas the remaining three-- the next three homeworks are each 15% of the grade. So this adds up to 50%. I mentioned that we wanted to provide some flexibility to students in the course. And so if you complete this fourth homework, it can be used to replace one of the previous homeworks, or it can be used to replace part of the project grade. And if you do it, we'll always do whatever is best for your grade. And so if you don't do well on that homework but you still try to complete it, then it won't hurt your grade. It will only help. 
And then the second aspect of giving some flexibility: we'll give 6 late days across homeworks and project-related assignments. You can use up to two late days per assignment, no questions asked. Anyone can use these, up to the six. Of course, if you have other extenuating circumstances that make it difficult to submit coursework on time, then feel free to send us a note, send us an email, and we may be able to make accommodations beyond these six days. Great. And then lastly, for the collaboration policy, please read the course website and the honor code. For the homeworks, you're allowed to talk to other people about the homeworks. But you should document your collaborators, and please write up the homework solutions on your own without referring to other students' solutions and without referring to solutions on the internet. Cool, for the final project-- the main parts of the course, in terms of your work, are the assignments and the project. The project is a research-level project of your choice that you can do in groups of 1 to 3. If you're doing research on campus, we really encourage you to use your research for this project, as long as it's applicable to the topics that are covered in the course. And you can also share the project with other courses, although we'll have slightly higher expectations in that case. And then it's the same late day policy as the homeworks, but there are no late days for the poster session, because the poster session will be a live event, and there won't be a late poster session or anything like that. And the poster presentation, the poster session, is on December 7, which is basically the last day of classes, and which we'll have instead of a lecture. Cool, so any questions on course logistics? Yeah. What type of topics can we work on for the final project? Yeah, so basically, the question is what kind of topics can you work on for the final project. Basically, whatever you want, as long as it pertains to the course content. It's very open ended. I guess one other thing that I can mention there is that we are soliciting ideas from the broader Stanford AI community. And so we'll post a list of ideas for the project on Monday next week. And so if you're not sure what kind of project you want to do, you can look at that list for some nice ideas. But if you have something in mind already, or if you want to be creative and think of something else, it just needs to pertain to the topics of the course. We also have detailed guidelines on the project and what we expect posted on the course website. And so you can refer to that document for a lot more details. Yeah. Are there examples of what people have done in the past for their course project? Yeah, that's a great question. So we haven't posted examples yet. But we want to post some examples of some previous projects. One thing that you can see already is that if you look at a previous offering of the course, I think it was two years ago, we posted titles of all the course projects. And then some of them have links from students who were willing to make them public. And so you could already take a look at that, although we're planning to provide some more explicit examples in the coming week. Cool. So in terms of initial steps, homework 0 has already been posted. This should be pretty lightweight. It's due in a week. All the assignments are due at 11:59 PM Pacific time.
And I'd encourage you to start trying to form groups, if you want to work in a group, and posting-- making posts on Ed and so forth can be helpful for that. And we're also happy to try to help you connect with other students in the course as well, if that would be helpful. Yeah. Would you strongly recommend working in a group, or do you have reasons why you might or might not want to work in a group? Yeah, I think that there can be pros and cons to working in a group. I think that the benefit is that there's a little bit more that you can do in a group. And you might also have complementary expertise. It can also be fun to work with other people. The downside is that you are relying on other people a little bit. And you want to make sure that the people you're working with are compatible with you and that you can rely on them to some degree. Generally, we recommend it, but it's certainly not required. And you're welcome to work alone. Cool, so let's dive into why we might want to study multi-task learning and meta-learning. So the first thing I'll cover here is a little bit of my perspective, and why I find multi-task learning and meta-learning really cool and exciting. In particular, a lot of the research that I do in my lab is trying to think about this question of how we can allow agents to learn a breadth of different skills in the real world. And what I mean by agents is actually working with real robots and allowing them to learn skills like this. So here I'm holding the block in front of the robot, and it's learning to place the red block into the shape sorting cube. But not just something like that-- being able to do something like watching a video of a human place an object into a bowl and having the robot figure out how to do the task as well, or figuring out how to use tools to complete a task, even if it's not explicitly told that it should use that tool to complete the task. And I think that robots are really cool and interesting because I think that they can teach us things about intelligence. They are faced with the real world. They aren't just looking at images in a static [INAUDIBLE]; they have to contend with the complexity of the real world. In order to be useful in the real world, they have to generalize across lots of different tasks, objects, and environments. And I think they also need some sort of common sense understanding. They need to understand what will happen if they try to pick up objects or try to move their arm in a certain way. And then the last thing is that it's not clear exactly what the supervision should be here as well. And I guess beyond these two things, beyond trying to teach us things about intelligence, there's also this aspect that if we can actually build robots that are very useful, then they could help out in a wide range of aspects of society where people are doing jobs that are dangerous, or tedious, or jobs that they would rather not be in. And so from that standpoint, I would like to tell a little bit of a story, which is that at the beginning of my PhD, I was a PhD student on the other side of the bay. And I was working with this robot right here. And this is a project where the robot was learning through trial and error. It was trying to figure out how to insert the wheels of this toy airplane into the corresponding hole. And what you can see is that at the very beginning, it didn't really know anything about the task.
And over time it gets better and better at trying to figure out how to assemble this part into the plane. This seems pretty cool. Yeah, I found it really, really cool to see this whole process. But one caveat here is that the robot effectively had its eyes closed. So the robot couldn't actually see anything; it wasn't using the camera in any way. It was just using the positions of the joints. And so the next step that I really wanted to explore in a follow-up project was to think about whether we can allow the robot to complete these kinds of tasks but with its eyes open, actually using vision in order to solve these tasks. And this was the resulting project. Here we are trying to learn a neural network policy that maps from images taken from the robot's camera directly to torques applied at the robot's joints. And you can see that over time, it's getting better at the task. At the beginning it was just moving its arm around pretty randomly. And it gets closer and closer to inserting the block into the red hole. And not only can it insert the block in the red hole for one position of the cube, but it can do it for multiple different positions of the cube. And this is why it needs vision in order to succeed. This is pretty cool. I mean, these days this maybe isn't that impressive. But six years ago, no one had really ever applied neural networks to this kind of task before. And here's the result of the final policy, so you can actually see the robot's perspective right here. And we can see that if I held it in different positions, the robot was able to insert the block into the correct place. This is pretty cool. Now, I think what was exciting about this wasn't that the robot had figured out how to do this one particular task, but rather that we had a reinforcement learning algorithm that could allow robots to do lots of different tasks. And so if you took the same exact algorithm and gave it a different reward function, then it could figure out how to do other tasks. So it could figure out how to place the claw of the toy hammer underneath the nail. Or if you gave it a task right here, it could figure out how to screw a cap onto a bottle. And as an example of one more task-- this is in a follow-up work-- we got the robot to use a spatula to lift an object into a bowl. This last task is actually surprisingly challenging because the robot has to fairly aggressively maneuver the spatula underneath the object in order to lift it up. So this was really exciting. This was around the first or second year of my PhD, when we got the robot to do these kinds of tasks, all with learned neural network policies. And other people used the same kind of algorithm and built upon it, extending it to other tasks and to other robots, to learn things like hitting a puck into a goal, opening a door, and throwing an object to hit a target. And around the same time, people were also using deep reinforcement learning algorithms to play Atari games, to play the game of Go, and to learn how to walk in simulation. So in general, around 2016, 2017 was a very exciting time for reinforcement learning and for deep learning. The catch, though, is that we have a bit of a problem here in general. So it was all very exciting progress. But there was this problem that in each of these cases, when we trained the robot to do a task, like when we trained it to lift the object into the bowl, we didn't train it to use spatulas generally or to lift objects into bowls generally.
But we trained it to lift that particular object with that particular spatula into that bowl. And so if you gave the robot a different spatula or put it in a different environment, the robot wouldn't successfully complete the task. And this is a huge problem because it means that if we actually want to put the robot into real world situations, it hasn't learned something that will actually work in general. And you might say that, OK, maybe we can just give the robot a lot more spatulas and train it with more data. But the tricky part is that when you train these systems, behind the scenes, if you actually look at the learning process, often it looks something like this, where the robot attempts the task, and then it needs to attempt the task again. And for it to attempt the task again, you need to actually put the environment back to where it was, so that it can attempt the task again from that state. In this video, this is my friend Yevgen. And one of the things you probably notice here is that Yevgen is doing more work than the robot is doing. And this doesn't seem like the right way to be going about things. Importantly, it doesn't seem very scalable. It's not practical to collect a ton of data in this fashion. And so this is starting to get into why multi-task learning and meta-learning matter, which is that we're training these systems to do one very narrow thing. And this very narrow thing requires detailed supervision, a lot of extensive human effort, in order to get the system to do that one particular thing. And then if we want it to do something else, we again need a lot of human effort to train from scratch on that new thing. And this isn't just a problem with reinforcement learning in robotics. If you look at problems in speech recognition or object detection, these systems are trained on more diverse data. But they're still learning one task, starting from scratch, with a lot of supervision and engineering for that one task. And so I'd refer to all these systems as specialists. We're training a machine learning system to do one thing. And what in many cases would be more useful is systems that are not trained on a single task but on many different tasks. And so for example, if we look at what people can do: people aren't trained from day one to lift up spatulas, or to use spatulas to lift up objects. They're trained to learn much more broadly about things in the world. And in that sense, I would refer to humans as generalists. And I'm interested in this question of how we can build machine learning systems that are more general. And I guess as maybe one more note on this: if you take a system like AlphaGo, which became champion at the game of Go, this is another example of a specialized system. And it's maybe somewhat analogous to training a baby from day one to try to figure out how to play Go without teaching them lots of other things about the world. And in fact, it turns out that even training a robot to pick up Go pieces and place them into the right configuration is actually still beyond the capabilities of AI systems. And so if you ever watched any of the AlphaGo matches, you'll notice that the AlphaGo player is a human who is just watching the computer screen and moving the pieces for the system. Cool, so that's my perspective in terms of why I'm excited about these algorithms. I guess, any questions on all that before I move on to things beyond robots and general-purpose machine learning? Yeah.
So you might go into this question in the next section. But the question has to do with all of the work that's happening on these large pre-trained models now, especially in NLP, where maybe the models are kind of implicitly learning a lot of tasks internally. So would you say that it's still important to explicitly teach a model how to encode a bunch of these different tasks, or can implicit learning get us to where we need to go? Yeah, so the question is about how all of these large pre-trained models are trained on very broad data sets. They aren't explicitly trained to do multiple tasks, but they're implicitly trained to learn very broad things. And in some ways-- I guess we'll talk about this a little bit later in the course-- there are ways to connect that to multi-task learning. And I view that as an example of something that's more of a generalist rather than something that's learning one very narrow task. So we'll definitely connect to that. And I think that also gets to some of the motivation behind trying to train generalist systems as well, which is that if we can train a pre-trained model on very broad data and have it learn something more general about the world, then if we want it to do something narrow after that, we can use that as initialization. We don't have to start from scratch. We can start from this more general understanding of the world and use that as initialization to learn much more quickly on a new task. And so two of the lectures that we're adding this year will be precisely on training these pre-trained models in a more general way with unsupervised pre-training and then fine-tuning them with a small amount of data on a new task. Yeah. So for a lot of these NLP models, the pre-training tasks are like fill in the blanks [INAUDIBLE] so when you're thinking about robotics, how do you think of the tasks that will lead to fundamental basic building blocks which can generalize later? How do you design tasks that can [INAUDIBLE]? Yeah, so the question was that a lot of general NLP pre-training tasks are things like fill in the blank-- is there an analog in something like robotics? And in general, there are some things that are somewhat analogous. So you can take video data from the robot's experience and have it interpolate frames, like, say, predict this frame. Or you can mask out part of an image and say, basically, fill in this part of the image. And so you can make very direct analogs like that. And approaches like that have shown some success in robotics. Although there are also other aspects of robotics that make it very challenging. For example, in NLP we have all of Wikipedia. And we don't have Wikipedia for robotics. We don't have data of robots tying their shoes, or robots learning how to pour water, just lying on the internet in massive quantities. And that brings up another challenge that is more readily solved in NLP. Yeah. Is there any fundamental difference between multi-task learning and single-task learning-- couldn't I call this collection of tasks a single task? Yeah, I'll get to that at the very end of the lecture. Yeah. Cool. So why should we care about multi-task learning and meta-learning beyond robotics and general-purpose machine learning systems? And specifically, why should we care about deep learning in this context? So I don't think deep learning needs too much motivation these days if you've taken a machine learning class.
But in terms of a couple of slides: historically, the approach to things like computer vision was to hand-design low-level features, try to design mid-level features on top of those, and then train a classifier on top of those mid-level features. And there were many aspects of this kind of pipeline that were designed by hand. The more modern approach to computer vision isn't to try to hand-design low-level features and mid-level features and so forth, but rather to just train a single neural network end to end-- train the parameters end to end to do the entire task. And there are some benefits to the former approach. You get some notion of interpretability, for example. But in general, the second approach here works a lot better, which we'll see on the next slide. And it allows us to handle unstructured inputs-- things like pixels, things like language, sensor readings, really any input that you can imagine-- without having to engineer really good features for that particular domain. So it allows us to perform tasks without a lot of domain knowledge. And as we saw over the years on the ImageNet benchmark-- this is showing performance, or error rate, on the ImageNet benchmark between the years 2011 and 2016. Overall, we see a downward trend. But what's notable here is that this dot right here is AlexNet, which was the first end-to-end approach in the ImageNet competition. And everything after that is also deep learning based approaches. And so we saw this really striking paradigm shift and also a very striking shift in the performance that you can get on these kinds of computer vision tasks. In a completely different domain, in natural language processing, if we take-- this is a paper from, I think, 2016 or 2017-- they're trying to use deep learning for machine translation. Before this paper, Google Translate was using something that wasn't doing end-to-end deep learning. It was called a phrase-based system. That's called PBMT, whereas GNMT stands for Google's neural machine translation, which is the end-to-end approach. And we again see really large improvements, like 60% to 87% improvement, on these different translation tasks. And now systems like Google Translate use exactly these kinds of models when making predictions. And they work-- well, there's still obviously lots of room for improvement in the translations, but they work far better than the previous systems. Cool, so that was some brief motivation for why we might focus on deep learning systems. Now, why might we focus on deep multi-task learning and meta-learning systems? So in deep learning, we've seen that if we have a large and diverse data set and a large model, that leads to good generalization on tasks like the ones I showed on the previous slides. We saw this with ImageNet, with things like transformers, with machine translation. But there are a lot of scenarios where you don't have a large and diverse data set at the outset. There are scenarios like medical imaging, where there are privacy concerns with sharing lots of data, or robotics, where we don't have Wikipedia for robotics, or things like personalized education, or translation for rare languages, where we don't have a large data set already sitting on the internet. And it would be very expensive and costly to try to collect a large data set. And so these are the scenarios where this kind of recipe will start to break down.
And it's impractical to learn from scratch for each of these different circumstances, like for each rare disease, or for each robot, or for each person, or for each language. Now beyond that, there are also scenarios where maybe you have a large data set, but that data set is very skewed. So you have a long tail distribution, where this is showing kind of a histogram of the number of data points for different aspects, or different parts of your distribution, different slices of your distribution. And these different slices could correspond to objects that the system has encountered, or interactions with different people, or the words that it has heard, or driving scenarios, and so on and so forth. And these kinds of data sets don't arise in a lot of machine learning benchmark problems. But they actually come up all the time in real world applications. There are a lot of words, for example, that you hear all the time, and a very, very long tail of words that come up much less frequently. And this long tail of edge cases actually presents a major problem for modern machine learning systems. And I would argue that's why we don't have self-driving cars on the road today-- because there are so many edge cases that come up in self-driving situations. And multi-task learning and meta-learning won't solve this problem in and of themselves. But there are some signs of life that indicate that if you can leverage some priors from the big data and try to translate that to the tail of situations, then you might be able to better handle these kinds of distributions. Cool. Beyond that, what if you want your system to quickly learn something new? This is again a scenario where you don't have a lot of data, because you want to learn very quickly. You want to learn something very quickly about a new person, like a new user, or about a new environment, like a new environment that you've placed your system into. And for this, I'd like to actually give you guys a little test where I want you guys to learn something new. So in particular, for this test, I'm going to give you a training data set. The training data set is the six images on the left. The far left images are all paintings painted by Braque. And the next three columns are paintings painted by Cezanne. And so your goal is to learn a binary classifier between paintings by Braque and paintings by Cezanne. So I'll let you train your classifier a little bit. [LAUGHTER] And now that you've hopefully learned a decent classifier, your goal is to classify this test data point. And so raise your hand if you think this is by Braque, OK. And raise your hand if you think this is by Cezanne, OK. Cool. So most people got it right. So this is indeed by Braque. And I tried to give you a little bit of time to train your classifier, but maybe some of you didn't converge. [LAUGHTER] And you can see that it's by Braque because you can look at some of the styles of the edges here, for example. I picked this example to be one that's a little bit harder, maybe closer to the decision boundary. And yeah, so this is an example of few-shot learning. You took a really tiny training data set with six data points and were able to generalize to a new data point. So how were you able to do that? If you were to train a machine learning system, like a convolutional neural network, on those data points from scratch, it probably wouldn't have gotten the right answer like many of you did.
But the way that you guys were probably able to do that is, well, even though you may not have seen these paintings before, or maybe even paintings by these painters, you've learned how to see. You've learned how to recognize patterns in images, how to recognize paintings, and all of your previous experience allows you to learn new tasks with small amounts of data. You weren't starting from scratch on this problem. And so this is what's called few-shot learning, where your training data set has a few data points. It's small. And few-shot learning is something that you should be able to achieve if you leverage prior experience rather than starting from scratch. Cool. So each of these four things that we went over-- if you want a more general-purpose system, you don't have a large data set, you have a long tail, or you want to learn something new-- all of these are scenarios where ideas from multi-task learning and meta-learning might be useful and where these elements can come into play. Cool. So now, beyond why we should study it, there's a question of why we should study it now. And in particular, if you take some papers from the '90s-- maybe-- I think probably everyone was born by '97. But if you take some papers from the late '90s-- maybe not, is anyone born before '97, or sorry, after '97? Oh, wow. [LAUGHTER] I'm getting a little old here. Cool, well, if you take a paper from before most of you were born, it says things like we can try to train tasks in parallel using a shared representation, or we can try to do multitask inductive transfer, adding extra tasks to a backpropagation network. So they were already doing deep multi-task learning in this paper. You can take a paper from '98 talking about few-shot learning-- so the ability to generalize correctly from a single training example. When faced with new things to learn, humans can usually exploit an enormous amount of training data and experiences that stem from other related learning tasks. Or from even earlier, from 1992, some folks like Samy Bengio and Yoshua Bengio, whom you may have heard of, were talking about the possibility of learning a learning rule to solve new tasks-- so ideas from meta-learning. So a lot of these ideas aren't that new. They've existed for a pretty long time at this point. And yet, even though they've existed for a long time, they're continuing to play a major role in AI systems. I had a question. Does meta-learning also help model generalization when you have a large data set? Yeah, so the question-- it's from Zoom, I guess, from the people who are remote-- is, does meta-learning help generalization even when you have a large data set? So in general, these methods will give you the most bang for your buck if you have a small data set, because that's where leveraging previous experience will be the most useful. If you have a really massive data set, then you will probably do pretty well just training from scratch on that data set. It's possible that some prior knowledge might come in handy if you have some distribution shift. Basically, if you have a large data set, but your test set is actually from a slightly different distribution, then you might be able to learn invariances from your previous data in a way that allows you to do better, even when you have a large data set. But in general, if you have a standard IID problem, and you have a large data set, then things like prior experience will be much less useful. Cool.
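To relate the painting exercise to code, here is one small, hedged sketch of how leveraging prior experience can make few-shot classification work, previewing the metric-learning ideas covered later in the course: reuse a feature extractor pretrained on broad data, and classify a query by comparing it to the average embedding (prototype) of each class's few examples. The pretrained backbone and the tensors below are placeholders rather than the actual paintings from the slides, and this assumes a recent torchvision with the weights API.

    import torch
    import torchvision.models as models

    # A backbone pretrained on large, diverse data stands in for "prior experience".
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # drop the classification head, keep 512-d features
    backbone.eval()

    def embed(images):
        with torch.no_grad():
            return backbone(images)

    # Hypothetical few-shot task: 2 classes (e.g., Braque vs. Cezanne), 3 examples each.
    support_images = torch.randn(6, 3, 224, 224)   # placeholder for the 6 training paintings
    support_labels = torch.tensor([0, 0, 0, 1, 1, 1])
    query_image = torch.randn(1, 3, 224, 224)      # placeholder for the test painting

    support_features = embed(support_images)
    prototypes = torch.stack(
        [support_features[support_labels == c].mean(0) for c in (0, 1)]
    )
    query_feature = embed(query_image)

    # Predict the class whose prototype is closest to the query embedding.
    distances = torch.cdist(query_feature, prototypes)
    prediction = distances.argmin(dim=1)

The design choice here, classifying by nearest class prototype in a learned embedding space, is the same intuition behind the metric-learning approaches that show up later in the course; with no pretrained features, the same procedure on raw pixels would do much worse.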
So now, what are some examples of these kinds of systems in practice? In [AUDIO OUT], folks at DeepMind were training a system to do lots of different kinds of vision and language tasks. And they found that they could actually train a single model that could do something like object recognition, where, in this example, they describe a chinchilla, then they describe a shiba, and then they give it an image of a new animal. And it's able to recognize that that new animal is an image of a flamingo and also describe where the flamingo is found. So it can do object recognition. And the same exact model can also do things like reading. So you can give it a few examples of images of arithmetic problems, like 2 plus 1, 5 plus 6, and so forth. And then if you give it a new image of, in this case, 3 times 6, it's able to both read the numbers and also complete the arithmetic problem. And again, the same model can also do yet another task. In this case, it's trying to count animals. And so here, it's given an image of a few pandas, and it's told that this corresponds to 3 pandas. And at the end, it's given an image of several-- four giraffes. And it's able to count the number of giraffes and identify that they're giraffes. So what's [AUDIO OUT] cool about the system is that it's not just specialized for one thing. It can do lots of different things. And it can do lots of different things in a few-shot way. So you're giving it a few examples of how you want it to perform the task, and it's able to leverage that in order to figure out what you want it to do and ultimately do that task. So this is one modern example that was pretty exciting. One other modern example is a paper that I co-authored in 2021 that I personally was very excited about, where we were using meta-learning in an education application. We were looking at trying to provide feedback to students on work that they did-- open-ended student work in an intro CS course. In particular, the Code in Place course was a really massive online course that was offered by a few folks at Stanford. And at the end of the course, there were 16,000 solutions that students wrote. And giving feedback on and grading 16,000 solutions is a massive undertaking. We were able to get volunteers to give feedback on around 1,000 of the student programs. But if you train on those 1,000 programs from scratch, you're not able to get a very good system, because training from scratch on 1,000 programs-- these are Python programs that we're trying to give feedback on-- doesn't work very well. But with meta-learning, we were able to train on previous data, previous student data of taking exams and the feedback given on those exams, and use that meta-learning system to adapt to this new course with these new problems and ultimately give feedback on the remaining solutions. In particular, what it looked like is you take a student program like the program right here. And there's a syntax error in this program that would make unit tests not useful at all. And actually, there were a few thousand cases where unit tests were helpful, but for most of the solutions they were not very useful. And the system was able to actually generate feedback. So in this case, the feedback is that there is kind of a minor error in getting the input from the user, which could be something like forgetting to convert the user input to a float.
And this system actually worked really well-- it worked well enough to actually deploy in this course to give feedback to real students, which wouldn't have been possible without some of the ideas that we'll cover in this course. So those are two examples, the Flamingo model and the education model. Yeah. Can we compare the learning outcomes for people who got the real feedback [INAUDIBLE]? The question is, can you compare the learning outcomes for the two groups. So we did actually do a blind A/B test with the system. And so 1,000 of them got human feedback, and 15,000 got the feedback from the AI system. And the students agreed with the AI-- with the meta-learning system's feedback-- slightly more. It was like 1% more. I think they agreed with it around 97% of the time, and they agreed with the human feedback around 96% of the time. Of course, they might just be agreeing with it because they like the feedback or whatever. And so we also asked them how useful it was. On a scale of 1 to 5, I think they rated it around an average of 4.6 out of 5. So they found it useful. We weren't able to measure learning outcomes between the human feedback and the feedback from the system because this was towards the end of the course. It was kind of a diagnostic at the end of the course. We also didn't feel like it would make sense to withhold feedback from students to compare no feedback to the meta-learning system. But I think the real win is basically being able to give feedback in scenarios where it would otherwise be very difficult to provide that feedback. Yeah. [INAUDIBLE] common objective, is that possible using multi-task learning? Yeah, so multi-objective learning is-- we'll cover it in the lecture on Wednesday. But basically, you can think of it as a subset of multi-task learning, like a special case. Yeah. Are there any applications for real-time or streaming data? Applications for real-time or streaming data? I could certainly imagine there being a lot of applications where that makes sense, because if you have streaming data, you may want to adapt very quickly to the current circumstance that you're in with a small amount of compute and a small amount of data. So from that standpoint, things like few-shot learning may be very applicable. But there's nothing that comes to mind immediately in terms of a specific application I've seen in the literature. Yeah. [INAUDIBLE] have some examples that adapt the models [INAUDIBLE] like demonstration prompting. [INAUDIBLE] it's basically the same network that has been generalizing to all the tasks, but it's not learning something new-- it's the same network with different inputs that are given to it. So is it the same thing, or how do we think about these two things? Yeah, so the question is, what's the difference between few-shot prompting and few-shot learning. And there is, I think, a very gray line. I don't think this is black and white. We'll talk about this a little bit in some of the future lectures. I think it's unclear. It's fuzzy. Yeah. Does meta-learning always require a model that has previously been trained on some data? Because it feels like meta-learning is adapting this model to a new, smaller set of data, or-- yeah. Yeah, so you're asking, does meta-learning always require some previous data?
Or is meta-learning just essentially adapting a model to your data, if that makes sense? Yeah, so transfer learning and meta-learning are in many cases trying to adapt to new circumstances. Maybe I should-- I'll actually move on a little bit, because we'll give some definitions of what multi-task learning, meta-learning, and transfer learning are, or at least what the problem statements are. And that might answer your question. Awesome. As a few more example applications of very recent use cases of multi-task learning and meta-learning: one from 2019 is looking at machine translation. It turns out that if, instead of translating between just one pair of languages, you translate between lots of different languages, in this case 102 languages, you're able to surpass very strong baselines that are just trained on a pair of languages. So there's a lot of shared structure and a lot of shared information that you can leverage from those other data sets, or from data sets of other languages. And people have also been using multi-task learning systems for multi-objective optimization, where you have multiple competing objectives in a YouTube recommendation system, and thinking about how to optimize these objectives. And we'll actually consider a case study of this paper in the next lecture. So these are a little bit more on the applied side. A little bit more on the research side, there are also-- I mean, there are lots of papers on these topics these days, so I'm just highlighting a few-- but one example is a paper called A Generalist Agent that was training on a really, really wide range of tasks, ranging from dialogue systems to playing Atari games to controlling a quadruped robot in simulation to controlling a real robot arm. And they found that you could actually stuff the data of all these different tasks into a single model and have it do all of these different tasks. And lastly-- I guess I showed this example on one of the earlier slides-- you can also apply this in examples in real robotics, where you want a robot to take experience from previous tasks-- or sorry, from previous objects-- and ultimately perform a task with a new object. So in this case, the robot hadn't seen this Red Bull before or this peach before, or the distractor objects for that matter, and it's able to figure out that it should place the object into the red bowl in this case. Cool. So those are a few modern examples that are quite interesting. Yeah. Is there any study looking into trying to understand those generalist kinds of models, like comparing them to classically trained models, or maybe disentangling them-- are they truly generalist models that are able to [INAUDIBLE] understanding of the tasks performed? Yeah, so the question is, is there work that tries to understand what these multi-task learning systems are learning, and if they're disentangling the tasks in some way? In general, there's no particular paper that comes to mind. And I think that it's also a very challenging question, because we don't have good tools for interpreting neural networks and what they're learning. I think the biggest tool is actually just to observe the model's behavior on new inputs. And so for example, if it's able to generalize to new tasks effectively, then that's an indication that it's not learning the tasks completely separately-- that it's actually learning the shared structure.
Whereas if it's completely unable to generalize to that new task, that's maybe an indication that it's not learning a unified representation of those tasks, or that the new task is just too far out of distribution compared to the previous tasks. Yeah. [INAUDIBLE] why is it considered impressive from a research standpoint? So in this case, this paper is from a few years ago. But it's certainly, in this case, overfitting to the task of placing, I guess. I mean, overfitting is a term that means many, many things oftentimes. But you can think of it as that. I think what's interesting about it is that, at least, this paper was one of the first examples of just looking at a raw video and actually interpreting that raw video to figure out how to do the task. And we've also certainly, since then, seen more interesting and more impressive things being done with few-shot learning. Yeah. I was just wondering about this one-shot imitation learning, because in general, how do you know that the human intended to put the ball into the cup even when the rotation was changed, versus always putting it in the bottom left place, right? Like, how should an AI consider these differences in cups? Yeah, so the question is that it seems like this is a little underdefined. It's unclear from a single example whether the goal was to put the peach exactly right here, or to put it into the red bowl, or maybe there are other ways to interpret the video as well. And the reason it's able to do what, at least to me, aligns with human judgment in this case is by having prior data. And so if you only gave it this one example and learned from scratch, the problem of inferring the intent is actually mathematically underdefined in many different ways. Especially from images, it's especially underdefined, because it could be that you wanted to change the pixels here to be orange, for example, and maybe that doesn't involve moving the peach there. So it's underdefined if you learn from scratch. But when you have previous data and previous experience with other objects, you're able to leverage that previous experience to figure out what exactly was intended here. So in this case, the previous examples involved placing into containers. And in each case, it was trained to generalize to place into the container rather than to place the object in the same position. And instead, if you gave it previous experience that said, if I see a demo like this, then place it in that exact same position, it would instead learn that from the previous experience. But yeah, cool question. OK, and then one other thing that I think is important in the context of why we might study these methods now is that it's important for making methods in deep learning accessible to many different people. I mentioned that deep learning works really well when we have a large data set. And if we take some of the most common data sets that are out there, like ImageNet, or these machine translation data sets, and so forth, they have a lot of data in them. ImageNet has 1.2 million images. This data set of English to French translation has 40 million paired sentences. And the Switchboard data set has 300 hours of labeled data. So this is a lot of data. And so if these are exactly the problems you care about, or the data distributions you care about, you're in great shape. But a lot of problems that we look at don't have this amount of data.
And so for example, if you look at Kaggle's Diabetic Retinopathy Detection data set, this only has 35,000 labeled data points-- sorry, 35,000 labeled images. And this is something where deep learning isn't going to work as well if you train from scratch on it. And likewise, there's a data set on trying to adaptively treat epilepsy, and this has less than an hour of data. And one of the papers that I showed before from the beginning of my PhD, where we were learning the spatula task, this had less than 15 minutes of data. And so this is much, much less data. And in many applications, we don't have tons of data. Or maybe we're looking at a population that has a lot less data. And one reason why I think that things like multi-task learning and meta-learning are important is that if we can extract prior information from other data sets that are larger, we might be able to actually start to better solve tasks that have much less data, which will in turn make this kind of technology accessible to people who don't have the money to collect a huge data set, or don't already have a data set collected for their problem. Yeah. Is there an established way to quantify how useful data from one task would be for learning a new task? Yeah, so the question is, is there a way to quantify how useful data for one task is for another task. And I guess the short answer is that's an open problem. It's a really useful and interesting problem, but the short answer is it's unsolved. But there is some work on it, on trying to relate the similarity between two tasks. And it's actually not really a similarity function. It's more of a directional similarity, like how useful is one thing for another thing. There's also some work just purely on data valuation in general, like how valuable is a data point in the context of a larger data set. And so James Zou, for example, on campus has some work on that as well. Yeah. Well, you talk a lot about [INAUDIBLE] large data set and applying it to the smaller data set. Are there any examples [INAUDIBLE] you take a large number of tasks, each with small amounts of data, and then leverage the common [INAUDIBLE] together rather than just one individually? Yeah, absolutely. So the question is, a lot of the examples we've talked about have a large data set and a small data set-- what if you have lots of small data sets? And absolutely, we've seen examples where these kinds of techniques can be super useful in that scenario. Actually, in your homework, we'll be working with the Omniglot data set, which has about 20 examples of each character, but lots of characters-- around 1,200 characters, or actually more than that. And so that's an example where these kinds of systems can work quite well. And there are other examples as well where we can amortize the cost of learning. Yeah. How helpful is it to use machine learning models to generate data sets? And then use that generated, synthetic data, for example, with robots. I thought it would be [INAUDIBLE] where you can actually get the joints of the person, and maybe that's helpful for robotics. So maybe we could generate a ton of data, because I'm sure we have a lot of information for humans. So is that something that's applicable or common? Yeah, so the question is, can we generate data sets, and is that useful and applicable and common. There's a little bit of work on that.
So Phil Isola, for example, has been doing a little bit of research on that topic. In general, there's possibly kind of a no free lunch thing, where basically, if you're learning to generate data from a data set, then you're not really creating additional information when you train that generative model. And so that might not be more useful than the original data that the generative model was trained on. So from that standpoint, it's a little bit tricky, I think, to get value out of that sort of thing. But if you have domain knowledge that you can put into the generative model, that might help. So yeah, I think it's an interesting problem. And there actually have been a few works that have done kind of interesting stuff along those lines that I could talk about in office hours, like dataset distillation, for example. But in general, it's tricky. Cool. So I've talked about a few different successes and a few different exciting applications of these kinds of systems. But I'd also like to emphasize that there are also lots of open questions and challenges. And we've seen some of these in some of the questions, like how do we determine the usefulness of one data set for another. And I think that also makes it equally exciting to study, because it means that there are open problems that we can solve and that all of you can solve. In the last 15 minutes or so, I'd like to dive into what a task actually is and what multi-task learning is. And we'll do this fairly informally in this lecture, but define things more formally in the next lecture. So informally, we can think of a task, or rather a machine learning task, as something that takes as input a data set and a loss function and tries to produce a model. I think this is a useful, or fairly intuitive, way to think about a machine learning task, because when you want a machine learning system to solve a task, you typically give it a data set and a loss function, and optimize that to get a model. And different tasks can vary in a number of different ways. At a high level, or kind of intuitively, you could have different objects, different people, different objective functions (which was mentioned earlier), different lighting conditions, different words, different languages. So the different tasks that you might throw into a multi-task learning system might be fairly varied. And they could vary along lots of different axes. And the reason why I bring this up is that multi-task learning doesn't just cover what you might think of as different tasks in terms of the English definition of the word task. For example, typically you don't think of different objects as different tasks in the English sense of the word. But if you want a system that can handle lots of different objects, and you train it across lots of different objects, that might still pertain to this more technical definition of a machine learning task. But as I kind of mentioned, there's one really critical assumption that comes up with these kinds of systems. And the bad news about this assumption is that the tasks that you train on need to have some shared structure. If they're completely independent from one another, then you won't get any benefit from training together, or from trying to exploit the shared structure. And if you don't have any shared structure, if they're completely independent from one another, you're better off just using single-task learning.
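To make this informal definition concrete before getting to the good news, here is one possible sketch in code: a task bundles a data set and a loss function, and solving the task means producing a model that does well under that loss. The fields and the little training routine here are illustrative assumptions, not a definition from the course materials; the point is just that different tasks can differ in their data, their targets, or their loss while sharing the same training machinery.

    from dataclasses import dataclass
    from typing import Callable
    import torch

    @dataclass
    class Task:
        # A (hypothetical) machine learning task: a data set plus a loss function.
        inputs: torch.Tensor    # e.g., images, sentences, sensor readings
        targets: torch.Tensor   # e.g., labels, translations, actions
        loss_fn: Callable       # e.g., cross-entropy, squared error

    def solve(task: Task, model: torch.nn.Module, steps: int = 1000, lr: float = 1e-3):
        # "Solving" a task: fit a model to the task's data under the task's loss.
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            loss = task.loss_fn(model(task.inputs), task.targets)
            loss.backward()
            optimizer.step()
        return model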
The good news, though, is that there are many tasks that have shared structure. As one example, if you consider the tasks of unscrewing a jar lid, unscrewing a bottle cap, or even using a pepper grinder-- using a pepper grinder and opening a water bottle might seem like very different tasks, but they all involve a very similar motion. And even if the tasks are seemingly unrelated, the laws of physics underlie real data, so there is already a lot of common structure there-- unless maybe you're on a different planet or something. And I mean, the laws of physics won't change there, but gravity might change, for example. People are all organisms with intentions. The rules of English underlie all English language data. Languages are all developed for similar purposes. So even across languages, there's a lot of shared structure, and so on. So I think that there are actually very few cases where tasks that you come up with are independent in a statistical sense. And so with that, these kinds of methods can pick up on the shared structure and leverage that in order to do better. Cool. Now-- yeah, question. In a model like Gato that was trained by DeepMind, it was able to handle both, let's say, language and images. What is the shared structure in that, and how is the model learning in that case? Yeah, so the question is, in a model like Gato, it was doing tasks that involved both language and images, and what is the shared structure there? I actually think that in that particular case, the Gato paper didn't show significant indications that it was actually learning shared structure between those two things in terms of generalization. And so it's not actually clear to me that it was learning shared structure across images and text. And I could also imagine that it may be easier for the model to learn shared structure if, for example, you gave it images of text rather than tokens of text, because those modalities are very different from one another. It's kind of like-- well, for us humans, we see everything through the same exact embodiment, through our eyes and so forth. And we're never getting one-hot vectors of tokens just passed into our brain. And that's kind of what neural networks get. They don't see things in a unified way. And so yeah, the short answer, though, is that it's unclear if it was actually learning shared structure between the two. Although there is some work that has actually found that you are able to learn a shared space-- at least, a shared embedding space with images and text that is more unified. Lots of questions. In the back. [INAUDIBLE] Yeah, so the question is, how do you quantify the amount of sharedness. It's difficult to quantify given a pair of data sets. But one way that's nice to think about, that we'll talk about in the Bayesian meta-learning lecture, is from the notion of Bayesian graphical models. If you have two random variables, and they're independent from one another, then they don't have any shared structure. Whereas if there is some dependency between them, if there is an edge between them, either a direct or an indirect edge, then they do have some shared structure. Quantifying how much shared structure they have is hard, although, conceptually, I think that thinking about it from that Bayesian standpoint can be useful. Yeah.
So in the case that we don't find any shared structure, would we argue that the model is actually two separate independent models, or [INAUDIBLE] could be trained separately and still do the same thing? Yeah, so if there isn't any shared structure, the model might just basically learn to use half the model for one task and half the model for another task. And I guess there actually aren't that many downsides to that. But there also aren't any upsides to that as well. Yeah. Is shared structure necessary among the tasks that we're training on, or would it also be fine if the task that we're trying to generalize to is related to both of the training tasks independently, but the tasks that we're actually training on don't have any overlap, if that makes sense? Can you repeat that? Yeah, so if [INAUDIBLE] task C, and that relates to both A and B in some way, but [INAUDIBLE] maybe are not that related, will actually training on A and B [INAUDIBLE] I see. So if you have two tasks that are unrelated, and then you want to learn a new task that's related to both of them. So for example, task A is to pick up a fork, task B is to use the fork to skewer something, and then task C is to pick up the fork and skewer something. Yeah, there are certainly instances where things like that come up. And so I would say that, yes, that's a scenario where these kinds of techniques make sense, although some techniques will be more useful than others. Yeah. Since we're talking about modalities, do you suggest that when we use a multi-task model, the tasks should belong to the same modality, like text or image, or can we have a combination of both? Yeah, so the question is, can you have a combination of multiple modalities? And in some sense, the Flamingo model that I showed before is actually already an example of a multi-modal model that takes as input both images and text. And so yeah, you could definitely have tasks with different modalities. And Gato is another example where different tasks do have different data modalities. If you are representing those modalities differently, then it may be harder for the model to find the shared structure. But it's certainly something that's possible with these models. Cool, I'm going to try to move on a little bit. I think I only have a few more slides. And then we can take a few more questions at the end. Cool, so what are some of the problem definitions that we'll cover in this course? So informally, we can think of the multi-task learning problem as learning a set of tasks, and generally trying to learn that set of tasks more quickly or more proficiently than learning them independently. And here, we see the same set of tasks during training and during testing. So we're not trying to handle a new task. In contrast, in the transfer learning problem, we're given data on previous tasks, and our goal is to learn a new task more quickly or more proficiently. And this is also the problem that meta-learning algorithms aim to solve. And so basically, in this course, we'll be looking at any kind of method that tries to solve one or both of these problem statements. Now, one question that came up earlier is, doesn't multi-task learning reduce to single-task learning?
So one thing that you could do is say you have a data set for each task, D_i, and you have a loss function for each task, L_i. Can you basically sum up your loss functions and combine your data sets to create one data set and one loss function, so that you have a single-task learning problem with one data set and one loss function? So are we done? So in some sense, it can reduce to single-task learning. And aggregating the data across tasks and learning a single model is one very viable approach to multi-task learning, as sketched in the short example below. The transfer learning problem, where you want to learn new tasks, is what we'll focus on more in this course than multi-task learning, because it is a little bit more challenging, and that problem doesn't just reduce to single-task learning. But we will have one lecture on multi-task learning on Wednesday. We'll cover other things, like how we tell the model what task we want it to do, or what to do if aggregating the data and training on all of it doesn't work. So yeah, we'll focus more on the second problem statement. But there are also still some challenges that come up in the multi-task problem statement beyond just training a single model with a single loss function. Cool, so that's it. Let's see, we could take a few more questions as a group, but maybe we can also just take questions up front, if people have additional questions. And we'll end there. As a couple of reminders, homework 0 is out and it's due on Monday next week. And if you want to work in a group for your final project, we'd encourage you to start forming project groups.
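And here is the small sketch of that reduction mentioned above: with a shared model, you can aggregate the per-task data and sum the per-task losses into a single objective. The two toy tasks and the shared network below are made-up placeholders, just to show the summed-loss construction rather than any method from the lectures.

    import torch
    import torch.nn as nn

    # Two made-up tasks sharing a body but with different heads and losses.
    shared_body = nn.Sequential(nn.Linear(10, 64), nn.ReLU())
    head_a = nn.Linear(64, 5)     # e.g., a 5-way classification task
    head_b = nn.Linear(64, 1)     # e.g., a regression task
    params = (list(shared_body.parameters())
              + list(head_a.parameters())
              + list(head_b.parameters()))
    optimizer = torch.optim.Adam(params, lr=1e-3)

    # Placeholder per-task data sets D_a and D_b.
    x_a, y_a = torch.randn(32, 10), torch.randint(0, 5, (32,))
    x_b, y_b = torch.randn(32, 10), torch.randn(32, 1)

    for step in range(200):
        optimizer.zero_grad()
        # L(theta) = L_a(theta, D_a) + L_b(theta, D_b): one combined objective.
        loss_a = nn.functional.cross_entropy(head_a(shared_body(x_a)), y_a)
        loss_b = nn.functional.mse_loss(head_b(shared_body(x_b)), y_b)
        (loss_a + loss_b).backward()
        optimizer.step()

In this construction the multi-task problem really does look like single-task learning on a summed loss; the interesting questions, covered in the next lecture, are what to do when a plain sum of losses doesn't train well and how to tell the model which task it is being asked to perform.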
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_2_Asymptotic_analysis_uniform_convergence_Hoeffding_inequality.txt
OK, cool. Let's get started. OK, so it's kind of complicated, right? It's kind of amazing, right? This technology is so advanced. So you can do all of these things together. But I still have to do them one by one. I have 10 action items-- maybe more than 10. I need to also connect to Wi-Fi. That's actually something I have to do. OK, but oh, let's get started. Oh, I need to have my notes. So what we're going to do today is that we are going to continue with the asymptotics from last time a little bit, for about 15 to 20 minutes. This is just to wrap up what we have discussed. And as I said, this first lecture is always kind of a little bit tricky for me to teach, because the tools-- if you want to make it formal, it requires some kind of background. And if you don't want to make it formal, sometimes there is a lot of confusion. So from the second half of this lecture, I think we are going to talk about things that require less background, in some sense, and are more self-contained. OK, so the plan is the asymptotics, and then the so-called uniform convergence. I'll define what it is. And uniform convergence will be the main focus for the first few weeks of the lectures. OK, so let's start by reviewing what we have done last time. So what we had last time was this theorem, where we showed that if you assume consistency, which is something that we basically just assume without much justification. It's not always true. And it also depends on the problem. The consistency basically means that theta hat will converge to theta star. Recall that theta hat is the ERM, the Empirical Risk Minimizer. And theta star is the minimizer of the population risk. So you care about recovering theta star or recovering something as good as theta star. And we also assume a bunch of other things, like, for example, the Hessian is full rank and also some regularity conditions, which I didn't even define exactly. For example, this requires something like some of the variances being finite, so you can apply the theorems. And then, under these assumptions, we have that-- actually, it's challenging for me, because this podium-- it becomes unstable. It's like I feel like I'm writing while I'm on a boat. [LAUGHTER] But it's probably good for me to practice. What is this called? I would be better at some of these sports after we do this. Anyway, OK. So I guess we have discussed that you know the order of the difference between, say, theta hat and theta star. The order is on the order of 1 over square root of n. And formally, you write it like this. You scale by square root of n, and you know that it's on the order of 1. And you also know something about the loss. You know that the excess risk, L theta hat minus L theta star, is on the order of 1 over n. And if you formally write it like this, you scale by n, and then you say it's on the order of a constant. And also, you know that the distribution of theta hat minus theta star-- this is converging to a Gaussian distribution with mean 0 and some covariance. And this covariance is complicated, but let me write it something like this. This is just reviewing what we have written last time. And fourth, we also know the distribution of the excess risk. This is the distribution of a scalar, because the excess risk is a scalar. If you scale it by n, then you know the distribution is converging to the distribution of this random variable. And this random variable S is a Gaussian random variable with mean 0.
And covariance-- something like the above but not exactly. You don't have to remember exactly what the covariance here is, because I don't even remember it if I don't read my notes. There are some intuitions about this, which I'm going to discuss. But generally, this is just something you got from derivations. So last time, we have kind of roughly justified numbers 1 and 2. And today, I'm going to, again, give a relatively heuristic proof for 3 and 4, just very quickly, so that we can wrap this up. So I guess just to very quickly review what we have done last time, the key idea to derive all of this is by doing Taylor expansion. And Taylor expansion-- I think the key equation-- let me just rewrite what we have done last time-- is this. So you look at theta hat. The gradient of the empirical loss at theta hat-- this is guaranteed to be 0, because theta hat is the minimizer of the empirical loss. And you Taylor expand this around theta star. And you get something like this plus higher order terms. And then, you rearrange this and get that theta hat minus theta star is equal to minus the inverse of the empirical Hessian at theta star times the gradient nabla L hat at theta star, plus higher order terms. And then, you say that I'm going to replace all the hats by L, like L hat by L, using some kind of law of large numbers argument. And last time, we have roughly discussed that this is on the order of 1 over square root of n, because you have concentration. This gradient is an average-- it is roughly nabla L at theta star plus something on the order of 1 over square root of n, and nabla L at theta star is 0, so the gradient term is roughly on the order of 1 over square root of n-- and this one, the Hessian term, is converging to a constant. So that's why the whole thing is converging to something on the order of 1 over square root of n. And this time, we are going to make it a little more formal. So we'll get the exact distribution of theta hat minus theta star. I'll make this part really quick just so that if you are not familiar with the background, you don't get confused too much. So the idea is that-- so if you look at what's the distribution of this, if you think about it, this is the product of two random variables. And you roughly know what the distribution of each of the random variables is, right? So this one is going to converge to a constant, which is going to converge to nabla squared L theta star inverse. And this one is going to be a Gaussian distribution if you scale it correctly, right? And basically, what you need to know is, what's the distribution of the product of these two random variables, when you know what happens with each of them? And formally, what you do is you first scale by square root of n so that each of these two random variables is on the order of 1, so that you can reason about them easily. So you scale by square root of n, you get this. And then, you have the inverse. And now, you scale this empirical gradient by square root of n. And also, you get the square root of n. And also, let's fill in the population gradient, which is 0. So this one is 0. I just write it here to make it closer to something you know. And then, this plus higher order terms. This is still higher order terms, even if you multiply by square root of n, because I think there's a typo in the lecture notes somebody pointed out, which is really nice. But still, no matter how you multiply, it's still higher order terms compared to the other terms, right? So now, this one-- let's call it Z.
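For readers following along without the blackboard, here is one way to write out that Taylor-expansion argument in LaTeX; this is an editorial sketch of the heuristic derivation just described (signs and regularity conditions handled loosely), not part of the original board work.

```latex
% Heuristic derivation sketch
0 = \nabla \hat{L}(\hat{\theta})
  \approx \nabla \hat{L}(\theta^\star)
        + \nabla^2 \hat{L}(\theta^\star)\,(\hat{\theta} - \theta^\star)
\;\Longrightarrow\;
\hat{\theta} - \theta^\star
  \approx -\big(\nabla^2 \hat{L}(\theta^\star)\big)^{-1} \nabla \hat{L}(\theta^\star).

% Rescaling by \sqrt{n} and using \nabla L(\theta^\star) = 0:
\sqrt{n}\,(\hat{\theta} - \theta^\star)
  \approx -\big(\nabla^2 \hat{L}(\theta^\star)\big)^{-1}
    \underbrace{\sqrt{n}\,\big(\nabla \hat{L}(\theta^\star) - \nabla L(\theta^\star)\big)}_{=:\,Z},
\qquad
\nabla^2 \hat{L}(\theta^\star) \;\xrightarrow{p}\; \nabla^2 L(\theta^\star),

% and, by the central limit theorem,
Z \;\xrightarrow{d}\; \mathcal{N}\big(0,\; \mathrm{Cov}\,[\nabla \ell(x, y; \theta^\star)]\big).
```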
This Z, by the law of large numbers, or, I think, by the central limit theorem-- Z converges to a Gaussian distribution with some covariance. And what's the covariance? The covariance will be the covariance of nabla l of x, y, theta star. Why? This is just because-- what is nabla L hat at theta star minus this? This is really just the empirical version of the right-hand side, the population gradient. So this is really 1 over n times the sum of nabla l of xi, yi, theta star, minus the expectation of the same thing-- maybe for simplicity, let's just write x, y, theta star, all right? So when you apply the central limit theorem, you know that if you scale this by square root of n, then you get a Gaussian distribution, right? So that's why we know the random variable Z has a Gaussian distribution. And we know this one will converge to a constant as n goes to infinity. And there is a theorem that specifically deals with this. But actually, if you think about it, this makes a lot of sense. So if you want to know the left-hand side, basically, it just becomes the distribution of the right-hand side. It's a constant times a Gaussian distribution. It's this constant times the Gaussian distribution. So basically, we have to figure out, what's the distribution here? So what is the distribution of a constant times Z? So basically, abstractly speaking, the question we're dealing with here is that-- so, a different color for abstraction-- basically, you're asking, what is the distribution of A times Z, if A is a constant and Z is from some Gaussian distribution with covariance sigma? All right. And I'm missing a page. And you know that there is a lemma, which says that in this case, A Z is also a Gaussian distribution with mean 0 and covariance A sigma A transpose. I think this is a homework question-- homework 0 question. I'm not sure whether it's still there. I forgot to double-check. But this is something you can do-- what's a linear transformation of a Gaussian distribution? Still a Gaussian distribution, it's just that the covariance got transformed. And actually, the way to transform the covariance is that you left multiply by the transformation, and you right multiply by the transpose of it, and you get a new covariance. So this is something-- it's not that simple to derive this, but this is something you can either look up from a book, or you can derive it yourself. All right. So with this small lemma, then we know that the distribution of theta hat minus theta star, scaled appropriately, converges to a Gaussian distribution with mean 0. So here, A corresponds to nabla squared L theta star inverse, right? And sigma corresponds to this one. And you just plug in these two choices. Then, what you have is this-- basically, what we intended to prove. We get nabla squared L theta star inverse, times the covariance of nabla l of x, y, theta star, times nabla squared L theta star inverse. OK? That's the covariance we claimed. Any questions so far? I realize that my camera is frozen. I don't know why. Something seems to be wrong. For those people who are on the Zoom meeting, can you see my video? It's frozen. I see. Thanks. Maybe let me turn it off, and then turn it on. OK, so it's working now? OK, cool. And you can see that? You can see everything? OK, thanks. OK, cool. Any questions? Also, if you are in the Zoom meeting, also feel free to just unmute and ask any questions. So at the end of the-- the covariance at the end, is that the Hessian [INAUDIBLE]? This is the inverse.
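Writing out the linear-transformation lemma and the resulting "sandwich" covariance in LaTeX (again, just an editorial transcription of the derivation above):

```latex
% Lemma: linear transformation of a Gaussian
Z \sim \mathcal{N}(0, \Sigma),\;\; A \text{ a constant matrix}
\;\Longrightarrow\;
A Z \sim \mathcal{N}\big(0,\; A \Sigma A^{\top}\big).

% Applying it with A = \nabla^2 L(\theta^\star)^{-1} and
% \Sigma = \mathrm{Cov}[\nabla \ell(x, y; \theta^\star)]:
\sqrt{n}\,(\hat{\theta} - \theta^\star)
  \;\xrightarrow{d}\;
  \mathcal{N}\Big(0,\;
     \nabla^2 L(\theta^\star)^{-1}\,
     \mathrm{Cov}\big[\nabla \ell(x, y; \theta^\star)\big]\,
     \nabla^2 L(\theta^\star)^{-1}\Big),
```

where the transpose on the last factor can be dropped because the Hessian is symmetric, which is exactly the point of the question that follows.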
Yeah, the covariance. Sorry, which term are you asking about? This one? The one next to it. Here? The one to the right, yeah. Yeah, this is the same-- That's the exact same one? Yes, it's exactly the same one. It's supposed to be the same thing transposed, right? But this is a symmetric matrix. So the transpose is the same as the matrix itself, so this is the same inverse. OK, so I guess what I'm going to do is I'm going to skip the derivation for number 4. It's kind of the same thing. It's just that you have to-- because you already know the distribution of theta hat, you should know the distribution of L theta hat. And what you do is you do some Taylor expansion to make it a polynomial of theta hat. And then, you can use what you know about theta hat. All of this is in the lecture notes. I guess I'm going to skip this part. So if we wrote it and it looks like-- for example, like the [INAUDIBLE], is there a reason for that? You mean the covariance seems to like the new-- It's like, instead of the gradient direction module, [INAUDIBLE] about this. Is there a connection between the two? I think there's a connection, but I don't feel like it's-- this Hessian shows up very often in many different cases, right? So there is some connection, but I don't feel like it has to be-- it's not so closely related that it's important enough to know, yeah. Yeah, OK. So I guess I'll skip the proof for number 4. If you're interested, you can look at the proof in the lecture notes. And what I'm going to do is that I'm going to spend another 5 to 10 minutes to talk about a corollary of this theorem, which is in maybe a more typical setting. Like here, this theorem is very general, because it doesn't say anything about the loss function. It doesn't say anything about the model. It works for almost everything as long as you have the consistency. And here, let me instantiate this theorem for the so-called well-specified case, where the loss is the negative log likelihood. And then, we can see all of these covariances become a little bit more intuitive. And things become a little bit easier. So this is the so-called well-specified case. So I guess in addition to theorem 1, let's also assume that-- let's suppose there exists some probabilistic model parameterized by theta such that y given x follows p of y given x, theta. So you assume that y is generated from this probabilistic model, right? So what does it mean? So basically, it means, let's say, suppose there exists a theta star. I'm using the subscript here to differentiate from the theta star defined before, which was the minimizer of the population risk. And actually, they are the same. But for now, treat them as different. So basically, you assume that there exists a theta star such that the yi, the data, conditioned on xi, is generated from this probabilistic model. All right. So assume-- so this is why it's called well-specified. It means that your data is generated from some probabilistic model. And also, in this case, suppose the loss function you use is the negative log likelihood. Right? Before, we didn't really say what the loss function needs to be. It could be anything. And now, let's say the loss function is the negative log likelihood of this probabilistic model. Think of this as, for example, logistic regression, right? Or linear regression with Gaussian noise. So your negative log likelihood could be the cross-entropy loss. Could be the mean squared loss, depending on what probabilistic model you have. All right, so this is your loss function.
And when you do this, then you know a number of things which are nicer, in some sense. So first of all, you know that theta star is equal to theta sub-star, right? So recall that this is the minimizer of the population loss. And this is the ground truth. This is the one that generates our data. And in this case, you can prove that theta star, the minimizer in the infinite-data case, recovers the ground truth-- theta sub-star. So they are exactly the same thing. And you also know a bunch of other things. For example, you know that the gradient-- this is kind of trivial. I'm just writing it here because I need it as an intermediate step in the proof. But if you don't care about the proof, this is just an intermediate step that you know. So you know that the expected gradient over the population at theta star is 0. And also, you know the covariance of the gradient. The covariance of the gradient is the quantity that we care about, right? Because in the previous theorem, the covariance of the gradient shows up in the variance of theta hat minus theta star. So the covariance of the gradient at theta star-- I guess, from now on, we don't distinguish theta star and theta sub-star, because they are the same-- you know the covariance actually happens to be the Hessian. And when the covariance of the gradient happens to be the Hessian, then the covariance of theta hat minus theta star can be simplified. Because this used to be a Gaussian distribution with something like this, right? The covariance of theta hat minus theta star-- it used to be this product of three things, three matrices. But now, what's in the middle is the same as the Hessian. That's what we claimed in number 3. So that means that you can cancel this with this. And you get only one term. So what's left is just the inverse of the Hessian. Maybe I should just use black forever. Yeah, and you also know, if you plug in this, the covariance of the gradient-- you basically plug in 3 into all the statements that you had before-- then, you can also get something like, for example-- well, the important thing is this-- the excess risk. I guess we have claimed that it's on the order of 1 over n. But actually, here, you can be more precise. You know that this is converging to basically 1/2 times a chi-square distribution with degree p. So p is the dimension of theta. So suppose you have p parameters. Then, this is the limiting distribution of the scaled excess risk. And if you take the expectation of this, so that you average over all the randomness, then what you get is the expectation of n times the excess risk, which is equal to the expectation of 1/2 times the chi-square distribution. This is equal to 1/2 times p. By the way, the chi-square distribution-- you don't have to know anything detailed about it. This is basically the distribution of a sum of p standard Gaussians squared. So you know a lot of things about it. You know it's positive, and you know that the chi-square with p degrees of freedom has mean p-- if you need to know more about this, just Wikipedia. It's very easy. We don't need anything deep about it. So the important thing is the last equation. So basically, we know that the excess risk in expectation-- here, the expectation is over the randomness of the data set, right? So the excess risk-- if you don't scale by-- sorry, if you don't scale it by n, then you get-- this is equal to-- I guess I should write "converges to," because it wouldn't be exactly equal-- this is 1/2 times p over n.
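For reference, the well-specified (maximum-likelihood) simplifications just described can be summarized as follows; this is an editorial restatement of the blackboard claims, with the usual regularity conditions left implicit.

```latex
% Loss = negative log-likelihood, data generated by p(y \mid x; \theta^\star):
\ell(x, y; \theta) = -\log p(y \mid x; \theta),
\qquad
\mathbb{E}\big[\nabla \ell(x, y; \theta^\star)\big] = 0.

% Information-matrix identity: gradient covariance equals the Hessian,
\mathrm{Cov}\big[\nabla \ell(x, y; \theta^\star)\big] = \nabla^2 L(\theta^\star),

% so the sandwich collapses and
\sqrt{n}\,(\hat{\theta} - \theta^\star)
  \xrightarrow{d} \mathcal{N}\big(0,\; \nabla^2 L(\theta^\star)^{-1}\big),
\qquad
n\,\big(L(\hat{\theta}) - L(\theta^\star)\big)
  \xrightarrow{d} \tfrac{1}{2}\,\chi^2_p,
\qquad
\mathbb{E}\big[L(\hat{\theta}) - L(\theta^\star)\big] \;\approx\; \frac{p}{2n}.
```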
So basically, you not only get the dependency on n-- you also get the dependency on p, the dimension. So you know the order of the excess risk. Of course, there are higher order terms-- little o of 1 over n. And actually, you also know the variance of the excess risk, which I don't think is super important. The variance is smaller than the mean. OK, so in the lecture notes, I think we have proofs for all of this. But I think I'm not going to discuss the proof. The most important thing, I think, is this one and this one. So the first thing is saying that the shape of theta hat minus theta star, the randomness-- the shape is the same as kind of the inverse of the Hessian. So in those directions where your Hessian is steeper, you have less stochasticity, right? And in those directions where the Hessian is smaller, you have more stochasticity. And the last one is saying that it doesn't matter what the Hessian is. The only thing that matters is the number of parameters. If you care about this kind of asymptotic regime, the only thing that matters is p, the number of parameters. We're going to discuss the limitations of all of these theorems in a moment. But this is what we got from this asymptotic approach. Any questions so far? OK, cool. So I guess if you're interested in more details, you can take a look at the lecture notes. So I guess now, let's move on to uniform convergence. And often, people call this line of research non-asymptotic. So let's first discuss that. This is actually the kind of approach that we're going to take for the rest of the lectures. We are going to care about non-asymptotics instead of asymptotics. So let me define what it is and motivate why we care about it. So recall that when you have asymptotic bounds, just like what I wrote above, you know that this L theta hat minus L theta star-- the final outcome is something like, this is equal to p over 2n plus little o of 1 over n. However, the problem is that here, you are hiding a lot of things in this little o of 1 over n. So you hide all dependencies other than the dependency on n. So what does it mean? It means the little o notation-- you also hide a dependency on p. So if you tell me that in the asymptotic regime you get this bound, what happens is that you could either have p over 2n plus 1 over n squared-- maybe the real rate is this-- or it could also happen that the real rate is something like p over 2n plus p to the 100 over n squared. So both of these two cases would be possible situations if you tell me the bound above, right? I wouldn't have a way to distinguish them, because this term is hidden in the little o notation. Because the little o notation doesn't care about any other dependencies. It only cares about the dependency on n, at least in the context of asymptotics. So this is the problem, because clearly, if your rate is something like the right-hand side, then this is a very bad rate, right? Very bad. By the way, by rate, I mean how this depends on-- I guess maybe let's just call it a bound, right? So suppose your bound is the one on the right-hand side. Then, it's a very bad bound, because it requires n to be bigger than p to the 50 for this bound to be smaller than 1, right? Because you need the second term to be smaller than 1. Then, you need n to be bigger than p to the 50. So just [INAUDIBLE] definition of little o of 1 over n. Does that mean that n times the function goes to 0 as n goes to infinity? Yes, yes. Yeah. Exactly. So OK, I'm going back to this.
So the bound on the right-hand side is going to be very bad. And the bound on the left-hand side-- this one-- is pretty good in some sense, right? But you have no way to distinguish them, because both of these two things would become p over 2n plus little o of 1 over n in the asymptotic sense. So that's the biggest problem. And also, in some sense, when you have other dependencies-- for example, the dimensionality-- the dependency on n is not the only thing that matters. For example, another more extreme situation is: suppose you compare p over square root of n versus p over 2n plus p to the 100 over n squared. Right? Suppose you have these two bounds. And if you use asymptotics, if you write it in the asymptotic way, then you are going to conclude that this is p over 2n plus little o of 1 over n in the asymptotic language. And this one will be something like p over square root of n plus little o of 1 over square root of n. So it sounds like this one is bad, because it has a higher order dependency on n, right? Indeed, when n goes to infinity, the right-hand side is smaller than the left-hand side. But if you think about a more moderate regime of n, then it's not really true. Because for the bound to be less than 1-- so if you want p over square root of n to be less than 1, this means that n is bigger than p squared. But if you want this p over 2n plus p to the 100 over n squared to be less than 1, this means n needs to be at least larger than p to the 50, right? So when n goes to infinity, the left-hand side is worse. It's a worse bound. But in most of the cases, the left-hand side is actually a better bound. So if you want the left-hand side to be a better bound than the right-hand side-- I guess if you solve this, maybe you can even ignore the-- if you solve this, this is roughly saying that if n is smaller than-- I think I did this calculation at some point-- n is smaller than p to the 66, then actually, the bound on the left-hand side is better than the bound on the right-hand side, just because this p to the 100 is too big, right? So basically, the comparison-- basically, if you use this asymptotic language, things become a little weird if you consider dependencies on other parameters. For example, if you have a dependency on the dimension, which for modern machine learning is very high. So this is why I think asymptotics, even though they are very powerful, don't necessarily always apply to modern machine learning, just because the higher order terms can hide dependencies on other quantities, right? For example, a dependency on p. So that's the main issue, basically. OK, so how do we fix this, right? The first thing we need to do is to fix the language, in some sense. We need to not only consider n going to infinity. We have to also consider the other quantities involved. So basically, what non-asymptotic analysis does is-- this is just a term, or a kind of approach-- this is basically saying that you only hide absolute constants in your bound. You have to hide something, because if you have to care about every constant, it's going to be too complicated for theory, right? It's going to be a lot of calculation. But here, we allow ourselves to hide absolute constants. But we cannot hide any other dependencies or anything else. So you are not allowed to hide a dependency on p inside the little o as n goes to infinity. And the absolute constant-- this really means a universal constant, like 3 or 5-- something you can replace by an actual numerical number.
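Writing out that back-of-the-envelope comparison (the exponents 100, 50, and 66 are the illustrative numbers used in the lecture, not anything canonical):

```latex
% Bound A (asymptotically worse, often better in practice):
\frac{p}{\sqrt{n}} \le 1 \iff n \ge p^{2}.

% Bound B (asymptotically better, with a huge hidden term):
\frac{p}{2n} + \frac{p^{100}}{n^{2}} \le 1
   \;\text{ requires roughly }\; n \gtrsim p^{50}.

% Crossover: bound A is smaller than bound B roughly when
\frac{p}{\sqrt{n}} \le \frac{p^{100}}{n^{2}}
   \iff n^{3/2} \le p^{99}
   \iff n \le p^{66}.
```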
And actually, to make everything easier, we are going to introduce this notation-- big O notation. This is actually-- sometimes, this big O notation has a little bit of a different interpretation. So I wouldn't say I'm redefining it, but I'm going to just be clear about what the big O notation means from now on. So from now on, big O notations only hide universal constants. And let me give, actually, a more technical definition, which is actually useful in some cases when you're really doing a lot of theory. I'm not sure whether some of you have this confusion about whether you should use big O or omega-- the big omega. Sometimes, it could be confusing. So let me define what this big O really means. It really means that-- so every occurrence-- at least, this is what it means in this course. It may not always be exactly the same for every paper. But I think people are converging to this interpretation. So every occurrence of big O of x is a placeholder for some function, say f of x, such that for every x, f of x is less than C times x for some absolute constant C bigger than 0. So basically, this is saying that-- maybe more explicitly, it's saying that you can replace O of x by f of x such that the statement is true. So basically, if you see a statement with a lot of O of x, O of something, right, it means that you can replace all of these occurrences of big O notations by something more explicit such that the statement is still true. So it seems to be overkill as a definition of big O, which you're probably already familiar with. But I've seen so many cases where I got confused, where I had to really literally verify whether I satisfy this definition. Anyway, OK. And also, just for notational convenience, sometimes we also write a is less than or similar to b. This is just equivalent to: there exists an absolute constant C greater than 0 such that a is less than C times b. And technically, if you really want to be very solid, this notation should only be used for positive a and b. That's my suggestion, because for negative ones, it just becomes a little bit confusing. So the point here is that-- well, I defined this big O thing, right? It depends on the literature. Sometimes, when people define big O, they have to define some limit. But here in this course, the big O really means there's no limit taking-- you don't have to think about any limit. So can a and b be functions here? Because a and b are just positive numbers, you can compare any such number with any other, right? Right. And a and b could be functions of other, more complex quantities. OK, cool. So these are just some notations. OK, so now, the bound we care about is-- so we are interested in this notation. We are interested in bounds of the form where we bound the excess risk, L theta hat minus L theta star, by something like big O of some function of, say, p and n, where p could be the dimension and n could be the number of data points. Of course, you can replace this by a function of other things. But the point here is that after you write this, there's nothing else hidden in the big O, only a universal constant. And once you have this kind of language, you can compare things in a more proper way. And in the next few lectures, our goal is to basically show how to provide bounds of this kind of form.
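One compact way to write the conventions just described (this is an editorial summary matching how the notation is used above):

```latex
% Non-asymptotic big-O: every occurrence of O(x) is a placeholder for some
% function f(x) such that, for all x > 0,  f(x) \le C\,x  for an absolute
% constant C > 0 (no limits are taken).
L(\hat{\theta}) - L(\theta^\star) \le O\!\big(f(p, n)\big)
\quad\text{means}\quad
\exists\, C > 0 \text{ absolute}:\;\; L(\hat{\theta}) - L(\theta^\star) \le C\, f(p, n).

% Shorthand (for positive quantities a, b):
a \lesssim b \;\iff\; \exists\, C > 0 \text{ absolute}:\; a \le C\, b.
```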
Sometimes, the bound could be more complicated, not only depending on the number of parameters and the number of data points. It could depend on the norm of the parameters and so forth. The point is that we only ever hide universal constants. Any questions? So [INAUDIBLE] the theta is very [INAUDIBLE] for some function [INAUDIBLE] but could that be for all of them? For some, yes. That's very important, because if you replace it with "for all"-- because here, there's also-- no, I think it's "for some." So you literally only need the existence of one function such that, if you replace the big O in your statement by that f of x, the statement is true. So yeah, I think this is actually a very good question, because I got confused by this many times. So maybe let's give an example, right? So you say, the excess risk is less than O of 1 over square root of n. What does this mean? This means that you can replace this. This is your f of n, right? You can replace this by, [INAUDIBLE], say 5 over square root of n, such that this is exactly true. But you don't need to say that for every f. If you say for every f, then it means that if you replace it with 0.1 over square root of n, it still has to be true, right? That's too much, right? You only need the existence of one. But of course, if you have the existence of one f, then there are always other f's, which are bigger, that can also be used. But you only need one f. And also, actually, maybe this is a little bit advanced, but this kind of interpretation also allows you to have big O in your condition, even. For example, this could be a little bit advanced, but you can write: if n is bigger than O of p, then the excess risk is less than 1. I'm not saying this is a correct statement, but this statement would be interpreted as, if you replace this O of p by 2p, then it's going to be correct. Or if you replace this O of p by some constant times p, it's going to be a correct statement. And it's not omega here. It's really big O, which is sometimes confusing. OK, cool. So now, let's move on to the key idea that we are going to have, right? So to bound the excess risk, how do we achieve a bound like this? The key idea is to somehow say that L hat theta is close to L theta, right? In some way, in some sense. I need to specify what I really mean by these two functions being close, right? Are they close at every theta, or are they close at a specific theta? So here is a small claim which tells you what you really need. So what you need is that-- suppose L hat theta star is close to L theta star. Suppose these two loss functions, the empirical and population losses, are close at theta star. And also suppose they are close at theta hat. And here, actually, you only need one-sided closeness. Suppose you have both of those. Then this implies that L theta hat minus L theta star is less than 2 alpha. So basically, you just need to show that these two loss functions-- the empirical loss and population loss-- are close at theta star and at theta hat. Then, you can bound the excess risk by 2 times alpha. And the proof is actually very simple. What you do is-- you know that this is comparing L theta hat with L theta star, right? And your condition involves comparing L versus L hat. So you have to do some rearrangement to link them, right? So what you do is you say, I want to compare these two, and I write this as a sum of three terms. L theta hat minus L hat theta hat. You first compare L theta hat with L hat of theta hat. And then, you have L hat theta hat.
You compare this with L hat theta star. And then, you compare L hat theta star with L theta star. Anyway, I don't know why the video freezes again. Let me restart it. OK, and the reason why this should freeze-- OK. So why do we write these three terms, right? Once you see it, it's kind of obvious, because this one is one of the conditions, right? This one is the second condition. And this one is the first condition. And you also have this one, which compares theta hat and theta star directly. But this is comparing them under L hat. And you know that L hat theta hat minus L hat theta star is less than 0, because theta hat is the minimizer of L hat. So this term is less than 0. And this term is less than alpha. This term is less than alpha. So in total, you get 2 alpha. OK? So basically, this is saying that it suffices to show the two conditions. The first condition is that L hat and L are close at theta star. The second condition is that L hat and L are close at theta hat. So it turns out that the challenges of proving these two inequalities-- the difficulties-- are completely different. So let's say, if this is number 1 and this is number 2, number 1 is very, very easy to prove. And number 2 will require a lot of work, which takes you a few weeks. Maybe not a few weeks, but two weeks. Is there a reason why in the first inequality [INAUDIBLE] value and in the second inequality, it's not? The only reason is that, of course, if you put an absolute value here, it's still true, right? And actually, you can also bound the absolute value if you want. The only reason is that if you don't have the absolute value, showing these conditions are satisfied is a little bit easier, slightly easier. You need one fewer step. That's why in most of the books, you don't have that step. And also, you save a constant, a factor of 2. So actually, this is a very good question. The first time I taught this, I just had absolute values. And then later in the lecture, I had to do additional steps to fix that constant, which makes it a little bit annoying. But fundamentally, you are right. There is no real difference. You don't run into that problem when you show the first inequality? You don't run into that problem in the first inequality, yeah, which I'm going to show just right now. The first inequality is very easy. And I'll tell you why they are different. It sounds like they are very similar, right? So the difference is that-- let me see whether it's ready for me to talk about the difference. Let me not talk about the difference first. Let me first show inequality 1, and see why it's relatively easy. And so to do that-- the goal is to show 1. And the main tool we are going to use is a so-called concentration inequality. And this is, in some sense, a non-asymptotic version of the law of large numbers. So it's trying to prove the same things, but in a different language and in a stronger form. So this is the non-asymptotic version, I guess, of the central limit theorem. And now, you don't have to deal with the limit. You just have a bound that depends on n. And I think probably some of you have heard of this inequality called the Hoeffding inequality. I think this thing probably is going to be taught in 109-- CS109-- or some of the statistics classes. But anyway, you don't have to know it beforehand as a prerequisite. And let me define the inequality. So this is trying to deal with a sum of independent random variables. So let's say x1 up to xn will be independent random variables.
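Here is the three-term decomposition from the claim, written out as an editorial aid; it only uses the two one-sided closeness conditions and the fact that theta hat minimizes the empirical risk.

```latex
% Assume: \hat{L}(\theta^\star) - L(\theta^\star) \le \alpha   (closeness at \theta^\star)
%         L(\hat{\theta}) - \hat{L}(\hat{\theta}) \le \alpha   (closeness at \hat{\theta})
L(\hat{\theta}) - L(\theta^\star)
 = \underbrace{\big(L(\hat{\theta}) - \hat{L}(\hat{\theta})\big)}_{\le\, \alpha}
 + \underbrace{\big(\hat{L}(\hat{\theta}) - \hat{L}(\theta^\star)\big)}_{\le\, 0
      \text{ (since } \hat{\theta} \text{ minimizes } \hat{L})}
 + \underbrace{\big(\hat{L}(\theta^\star) - L(\theta^\star)\big)}_{\le\, \alpha}
 \;\le\; 2\alpha.
```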
And suppose they are bounded. Each of them is bounded between ai and bi, almost surely, for every i. You can think of ai and bi as just constants, maybe 0 and 1. And so we care about the mean. So the mean-- the expectation of 1 over n times the sum of the xi-- is defined to be mu. And so the central question is, how different is the empirical mean from the expectation? All right, so we care about how small this difference is. And this is a random variable, so you have to have a probabilistic statement. So the claim is that the probability that this difference is small is very big. Alternatively, you can say that the probability that this difference is big is very small. They are just the same. So how big is the probability? It's very close to 1. And the difference from 1 is this exponentially small number. And what's in the exponential is something like this. OK, so this is the formal statement. Maybe let me try to interpret it a little bit by instantiating it in a special case. So if you define sigma squared to be 1 over n squared times the sum of (bi minus ai) squared, for i from 1 to n, then sigma squared can be viewed as kind of the variance of 1 over n times the sum of the xi. This is not exactly the variance, right? But it's some upper bound of the variance. Why? Because if you look at the variance of 1 over n times the sum of the xi, you know that the variance is additive over independent variables. So first of all, you get 1 over n squared in front, because the variance scales quadratically. And then, what's left is the sum of the variances of the xi. And this is equal to 1 over n squared times the sum of the expectations of (xi minus expectation of xi) squared. And now, because each of these xi is always between ai and bi, right-- so xi is between ai and bi, and the expectation of xi, as a consequence, is also between ai and bi-- that means that this difference is smaller than bi minus ai, because both of these two quantities are in this interval. So you get (bi minus ai) squared for each of these terms. So that's why the total, the whole thing-- I guess also including the 1 over n squared-- the whole thing is smaller than 1 over n squared times the sum of (bi minus ai) squared, from 1 to n. Right, let me see-- I think I have a typo here. OK, so basically, you can think of each of these (bi minus ai) squared terms as the variance of one term. And then you take the sum of them and divide by n squared. That's kind of the variance of the empirical mean. And suppose you take this view, then you can see what this inequality is saying. It's saying the following. If you take epsilon to be the square root of some constant c times sigma squared times log n-- so this is something like a constant times sigma times square root of log n-- so you take epsilon to be something a little bit bigger than sigma, by a square root log n factor. Then, you plug this epsilon into the Hoeffding inequality, where c is a large constant-- for example, c is larger than 10. And if you plug this epsilon into the Hoeffding inequality, what you get is that-- so the probability that 1 over n times the sum of the xi minus mu-- this is actually the most interesting regime of this inequality, when you plug in epsilon at this level. Typically, when you use it, you always use epsilon at this level, because this is the useful regime. So when you apply it, you get that this is less than O of sigma times square root of log n, because I replaced epsilon with this. And the probability is bigger than 1 minus 2 times an exponential. Now, let's plug in epsilon.
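For reference, here is the standard statement of Hoeffding's inequality being used (stated for the average of bounded independent variables; the exact constants in the exponent vary slightly across textbooks), together with the instantiation described next:

```latex
% X_1, \dots, X_n independent, a_i \le X_i \le b_i almost surely,
% \mu = \mathbb{E}\big[\tfrac{1}{n}\sum_i X_i\big]:
\Pr\!\left[\,\Big|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\Big| \ge \epsilon\,\right]
 \;\le\; 2\exp\!\left(-\frac{2 n^2 \epsilon^2}{\sum_{i=1}^{n} (b_i - a_i)^2}\right)
 \;=\; 2\exp\!\left(-\frac{2\epsilon^2}{\sigma^2}\right),
\qquad
\sigma^2 := \frac{1}{n^2}\sum_{i=1}^{n} (b_i - a_i)^2.

% Taking \epsilon = \sqrt{c\,\sigma^2 \log n} for a large constant c gives
\Pr\!\left[\,\Big|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\Big|
      \le O\!\big(\sigma \sqrt{\log n}\big)\,\right]
  \;\ge\; 1 - 2\, n^{-2c}.
```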
So you get-- maybe let's first not replace epsilon. Let's first rewrite it in terms of sigma, right? So you can see that the right-hand side-- by my definition of sigma squared-- is the same as in the Hoeffding inequality. And then, plugging in epsilon, I get 1 minus 2 times exponential of minus 2c log n. I guess the 2 can also go into the constant, so you get-- right. And now, you choose this constant to be a large constant, right? So recall that you can replace this big O by a large constant, right? So then, you get this to be something like 1 minus-- maybe here, it's easier if I just keep the c explicitly. I've got 2c here. Then you get n to the minus 2c-- 2 times n to the minus 2c. And if you pick this constant c to be something like 10, then you get 1 minus 2 times n to the minus 20, right? So basically, this is saying that with very, very high probability, the difference is smaller than sigma times square root of log n. So in other words, with high probability-- so with probability, let's say, larger than 1 minus n to the minus 10-- you have that the empirical mean is close to the expectation, in the sense that the difference between them is bounded by big O of sigma times square root of log n. So basically, this is saying that if you think of sigma as the variance, the quote unquote "variance," then it's very hard for you to deviate from the mean by something much larger than the "variance," right? So this is the deviation from the mean. And this is the "variance" up to a square root of log n factor. The log factor in this course is not very important. So this is saying you cannot deviate from the mean by a large factor times the "variance." Of course, this "variance" is not a real variance. It's this perceived variance. Actually, we're going to get back to this concept. There's a concept called the variance proxy, which we're going to talk more about. So in some sense, if you draw this, it's kind of like you are saying that this random variable-- suppose you call this x hat, a random variable-- if you look at the distribution of this random variable, it's something like this. And the mean is mu, right? Suppose this is mu. And you look at something that deviates from mu by sigma times square root of log n. And then, you are saying that the mass in this part is extremely small. How small is it? It is smaller than an inverse polynomial of n, right? So the mass here is smaller than n to the minus 2c, or maybe inverse poly [INAUDIBLE]. So you can see that this bound cannot be made much, much smaller. And one of the ways to see it is that if this is really a sigma, really the standard deviation, then your bound cannot be improved much, right? Because for any random variable, you always have some probability there. So the bound cannot be improved much. Of course, this is just intuition, right? Because I need to define what I mean by "not improved much." But intuitively, this bound shouldn't be improvable much, because for any random variable, you always have some mass. There's always some mass within the mean plus or minus the standard deviation, right? So if you really look at the interval defined by the standard deviation, there's always some mass in that, right? There's actual constant mass in that. So you cannot make these intervals much, much smaller and get the same bound, because if you make them too small, then you leave out a lot of mass. So OK, cool.
So now, let's interpret this a little more. So let's say we take a, and we instantiate even more. So let's take a to be on the order of maybe minus 1. It's a negative number. And b is on the order of 1, right? So this is typically the important case, right? So your random variable is between minus a constant and a constant. Then what you have is that the empirical mean minus the expectation is smaller than big O of sigma times square root of log n. This is the same thing I have written. And what is sigma? Sigma is the square root of 1 over n squared times the sum of (bi minus ai) squared. And each of the bi and ai is on the order of 1. So you get 1 over n squared times n inside the square root, because there are n of these terms. So this is 1 over square root of n, right? So sigma is on the order of 1 over square root of n. And that's kind of the standard deviation of your empirical mean. And that's why if you plug in this choice of sigma, you get square root of log n over square root of n. So basically, you cannot deviate by-- and sometimes, people write this as O tilde of 1 over square root of n, just to hide all the log factors. So if you don't care about the log factor, it's basically saying that you cannot deviate by more than 1 over square root of n. It sounds very abstract for the moment. But in the long run, you'll see that this kind of thinking will be used many times. And it's actually useful to just burn this into your head if you really do machine learning theory for life. But you don't have to. But for me, this is something like-- basically, I already burned this into my head, in some sense. Any questions? Oh. OK, so this is a short review. I'm not sure whether the whole-- I think probably CS109 will get into these kinds of details, but this is just kind of a review of the Hoeffding inequality with a little bit of additional interpretation. And now, if you apply the Hoeffding inequality to our case, let's see what we can get for the empirical loss, right? Recall that our goal is to deal with this-- the difference between this and this, right? And this one is 1 over n times the sum of the loss on each of the examples. And this one is really literally the expectation of the loss, right? And so this is a perfect case to use the Hoeffding inequality, because this one corresponds to the xi. But the Hoeffding inequality requires a bound on the random variable. So we just assume that-- in many cases, the loss is indeed bounded. Here, we assume the loss is bounded between 0 and 1. If the loss is not bounded, you need slightly more advanced tools to deal with it. But let's say for now the loss is bounded between 0 and 1. For example, if you do classification, your loss is the 0-1 loss. The loss can only be 0 or 1. So that satisfies this bound for every x, y, and theta, let's say. Then, if you apply the Hoeffding inequality, what you get is-- so this is a lemma, but actually, it's really just an application of the Hoeffding inequality. So for any fixed theta-- so let me see. So L hat theta-- this is basically an average of terms, right, where the i-th term is the loss l of xi, yi, theta. And so you can compute sigma squared, the fake variance that we are thinking about. So the sigma squared we defined was 1 over n squared times the sum of (bi minus ai) squared, from 1 to n. And I guess we have done this-- it's 1 over n squared times n, which is 1 over n. So that means that L hat theta minus L theta, right, is less than O of sigma times square root of log n with high probability, right?
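If it helps to see this concretely, here is a small simulation sketch (the sample size, number of trials, and slack constant are chosen arbitrarily) checking that the empirical mean of bounded variables rarely deviates from its expectation by more than roughly square root of log n over square root of n:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, c = 1000, 10000, 2.0       # sample size, repetitions, slack constant

# X_i uniform on [-1, 1], so a_i = -1, b_i = 1 and the true mean is 0.
x = rng.uniform(-1.0, 1.0, size=(trials, n))
emp_means = x.mean(axis=1)

threshold = np.sqrt(c * np.log(n) / n)     # the ~ sigma * sqrt(log n) scale
frac_exceeding = np.mean(np.abs(emp_means) > threshold)

print(f"threshold ~ {threshold:.4f}")
print(f"fraction of trials exceeding it: {frac_exceeding:.4f}")  # should be tiny
```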
And sigma is 1 over n-- sorry, sigma squared is 1 over n, so this is O of square root of log n over square root of n. And you can also write this as O tilde of 1 over square root of n. So basically, for every fixed theta, the empirical loss and the population loss only differ by 1 over square root of n with high probability. So it sounds pretty good, right? We showed that they are very close. And how close are they? They are close in the sense that the difference is 1 over square root of n, which goes to 0 as n goes to infinity. So it's supposed to be a small number. And there is nothing else hidden here. Of course, you have a log factor in n, but you don't have any factor of, for example, the dimensionality. Any questions? So there is a small issue-- go ahead. [INAUDIBLE] with high probability here is 1 minus 1 over n to some positive? Yep, yep. Exactly. So with high probability-- so technically, I should write that the probability that this happens is larger than 1 minus n to the minus O of 1. OK. And this is actually a good time to practice this big O notation. Basically, this is saying that you can replace-- actually, here, wait. Let me see. I think [AUDIO OUT] a big O of 1 and omega of 1. I think I should use omega of 1. But maybe I'll just say c. I'll say, there exists a constant c greater than 0 such that this is true. Maybe-- yeah, you see, sometimes this is confusing. On the fly, I couldn't figure it out. But this is what we mean. Maybe let's say-- It's 1? Maybe let's just say this is 10. I think this is definitely a correct statement, because there is the O here. You can hide everything in there. So that's what I mean. OK, cool. OK, so this is a correct statement. But there is a small thing-- there is an important thing we should note here. So what do I mean by "for any fixed theta"? What does this really mean, right? I have this qualifier here. So this really means that you need to first fix theta. And then, after you fix theta, you draw xi and yi i.i.d. from this distribution p. Well, why do you have to do this? Because you want to make sure that the losses l of xi, yi, theta are independent for different i's. So if you pick theta first, and then you draw the xi's, then indeed these random variables, which are the losses, are independent. But this doesn't really mean that you can do this for a theta that depends on the xi's, which is actually what I'm going to talk about next. So first of all, you can apply this for theta equal to theta star. That's allowed, because theta star is a universal quantity, right? You know what theta star is. Theta star exists even before you draw the samples. Why? Because theta star is the minimizer of the population risk. The population risk doesn't depend on the samples. It only depends on the distribution. So that's why you can apply this with theta equal to theta star. So that's why we got inequality 1, because we got that L hat theta star minus L theta star is less than O tilde of 1 over square root of n. So now the question is whether you can apply this to theta hat. And the answer is, no, you cannot apply it. And it's not just because of some subtle mathematical rigor issue. It's really very far from being correctly applicable to theta hat. It's not a small mathematical nuance. And the reason is that there's a dependency issue, right?
So as I alluded to before a little bit, the dependency is that you first have theta star, right? Theta star depends on the population distribution. And theta star is something that exists before you begin to draw the samples. And then, you draw the samples. And then, you get theta hat, right? And then, you can compute, for example, L theta hat or L hat theta hat, these kinds of things. But theta hat depends on the samples. So that means that the losses l of xi, yi, theta hat are not independent of each other. So you cannot apply the Hoeffding inequality, because they are not independent random variables. And this is important, because if you could really apply this-- if you could apply this Hoeffding inequality to theta hat, you would always get 1 over square root of n, with no dependency on anything else. Then machine learning would be much, much easier. We wouldn't have to think about sample complexity. It would always be small. So basically, for the next, well, two weeks we are dealing with this: how do we deal with theta hat, right? So the idea to fix this is called uniform convergence. And the key point is that you can only apply Hoeffding to a theta that is predetermined before drawing the data, right? You can apply this to any theta that's predetermined before drawing the data. I guess this might sound, by itself, a little bit vague. So what I really mean is that you want to prove that-- so what we know now is that for every theta, the probability that L hat theta minus-- so for every theta that has nothing to do with our samples, you know this is true for some-- of course, I didn't specify exactly what epsilon and delta are, but this is the form of the theorem we can prove right now. And you can plug in theta equal to theta star. That's fine. But this is not the same as the second statement, which is what I'm going to prove in the next one or two weeks. These are two different statements. The second statement is saying that you first draw the samples, and then, after we draw the samples, for all theta, these two functions are close. Maybe it's useful to draw a figure, right? So there is a function called L theta, right? And here, the horizontal dimension is theta, and the y dimension is L theta. And now, let's look at the empirical loss. So the empirical loss-- so I guess maybe let me give an example where these two statements are different. So let's think about-- there are only a few cases. So this is a toy example, right? So consider the case where L hat theta is this function. So it's the red function with probability 1/3. And it's the orange function with probability 1/3. And maybe let's say it's the green one-- this is a sine, I guess. I didn't have a different color-- the green function, with probability 1/3. And so what you know is that for any fixed theta, if you look at the probability that L hat theta is different from L theta-- let's say they are just different-- so what's the chance that they are different? So this chance is something like 2/3, right? Because if you look at any point, any theta-- for some theta, actually, the three functions are always the same, right? They're always the same. But maybe, for example, if you pick a point here, if you look at this point, then with probability 1/3, L hat theta is this red curve, which is different from L theta. And with the other two possibilities, right, with probability 2/3, L hat is equal to L theta, right?
So basically, for every theta, you have some-- sorry, I should write this is equal to 1/3, right. So basically, for every theta, you have something like this, right? And on the other hand, if you look at a statement like this-- the probability that, for every theta, L hat theta is close to L theta-- then, what is this saying? This is saying that basically these two functions are the same globally. And clearly, in any of these red, orange, and green cases, this probability is 0. Because in each of these three random cases, the two functions are not the same everywhere, right? There are always some thetas where they differ. So that shows that you cannot easily switch the probability and the for-all quantifiers. They are just not interchangeable. I guess you have probably seen that sometimes-- some of you probably would expect this-- this is about the union bound, right? When you do a union bound, there are always these kinds of issues-- whether you can switch the probability with the for-all quantifier-- which we are going to talk about in a moment. So basically-- I hope this is demonstrating that it's more difficult to prove inequality 2. So the take-home point is that it's more difficult to prove inequality 2. What was inequality 2? Inequality 2 bounded the difference between L theta hat and L hat theta hat. And the reason is that theta hat is a function of the data set, and you lose the independence. And so the goal of many of the rest of the lectures is to show that this is indeed bounded, using so-called uniform convergence. And by uniform convergence-- let me just summarize; I hope you've got some intuition here already-- we need to prove something like: the probability that, for all theta, L hat theta is close to L theta-- the difference is less than epsilon-- is larger than 1 minus delta. So we need to prove something like this using some techniques. And you will see that you're going to get much looser bounds when you prove something like this. The epsilon and delta would be different from the epsilon and delta that you can get when the [INAUDIBLE] quantifier is outside the probability. So I guess I will show how to prove this kind of bound in the next two lectures. But just to see that this suffices-- I guess, as expected, by the claim from before, you know that L theta hat minus L theta star is less than the difference L theta star minus L hat theta star, plus the difference L theta hat minus L hat theta hat. And this is less than 2 times the sup over all theta of L theta minus L hat theta, right? So if you can show that for all theta they are similar, then you have a bound for the excess risk. So maybe, in some sense, if you draw the picture here-- basically, what you want to show is that, suppose this is the population risk L theta, you want to show that with high probability, your empirical risk is something like this, which is kind of uniformly close to the population risk. That's kind of the intuition we have. And actually, let's see. So yeah. And actually, in the second half of the course, after week five or week six, we're also going to talk about how this picture is actually not entirely accurate-- in the sense that, indeed, in many cases, the empirical risk is within some epsilon of the population risk, but also, it doesn't actually look that fluctuating.
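The uniform-convergence goal, and why it is enough, can be written compactly as follows (an editorial restatement of the statement the next lectures will aim to prove, with epsilon and delta left unspecified):

```latex
% Uniform convergence (to be established in later lectures):
\Pr\!\left[\,\forall\,\theta:\;
   \big|\hat{L}(\theta) - L(\theta)\big| \le \epsilon \,\right] \;\ge\; 1 - \delta.

% On that event, using \hat{L}(\hat{\theta}) \le \hat{L}(\theta^\star),
L(\hat{\theta}) - L(\theta^\star)
  \;\le\; \big(L(\hat{\theta}) - \hat{L}(\hat{\theta})\big)
        + \big(\hat{L}(\theta^\star) - L(\theta^\star)\big)
  \;\le\; 2\,\sup_{\theta}\,\big|L(\theta) - \hat{L}(\theta)\big|
  \;\le\; 2\epsilon.
```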
So what really happens is something like this: maybe you have a population risk like this-- this is the population one-- and the empirical risk is, first of all, close to the population risk, but also close in terms of the shape and the curvature. So it wouldn't fluctuate that much. It would be something like maybe this. So not only are they close in terms of the value, but also in terms of some other properties-- maybe the curvature, the shape-- they are also somewhat close. And this is useful in certain cases where you especially care about optimization, right? For example, if the empirical risk fluctuates a lot, then it becomes harder to optimize. And since we sometimes care about optimization, you want to show that the empirical risk also has nice properties for computational purposes. OK, I guess that's a perfect stopping time. OK, thanks.
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_Frontiers_and_Open_Challenges_I_2022_I_Lecture_16.txt
A couple of logistics. First, the project poster session is on Wednesday next week. We'll be posting details on Ed very soon. It'll actually be broken up. Because we have so many students in the class, it'll actually be broken up into two different sessions, and you'll be assigned to one of the two sessions. And so, it'll be a bunch of details on that coming soon on Ed. The final project report is due in two weeks on Monday. And then a couple of other logistical items from the high-resolution feedback. First, the high-resolution feedback from before Thanksgiving break was about kind of the timing of homework 3 and homework 4, and so we will revisit that timing for the next offering. And then another thing that has come up a couple of different times was asking for solutions to the homeworks. And we've had a lot of discussion around it, and it's something that I think is always a little bit tricky to do. In particular, the thing that's tricky is that we reuse the assignments over time, over different quarters, and we do that so that we can have a very high standard and a high quality of the assignments. But it also means that if we post the solutions this quarter, then it could be that those solutions stay floating around, and other students use those solutions for future quarters. And we have seen in the past cases where students have actually posted their own solutions online, and students have copied those solutions. And so, we don't want to incentivize that sort of cheating, and so, as a result, we don't post solutions, which is somewhat unfortunate. But if you have questions about mistakes on the homework, either about your grade or anything, really feel free to make a post on Ed or come to office hours, and we're really happy to walk through the solution to the homework. And hopefully, that's just as good as having access to the solution. So my plan for today is really to talk about kind of the most recent trends and research that I find really exciting. And so, this is, I think, a pretty fun lecture because, yeah, I basically will talk about some of the things that I think are really cool that are on the edge of research. This will be including meta learning for adapting to distribution shift, including adapting with unlabeled examples and adapting with kind of local descriptions of an edit that you want to make to a model. And then, I'll also be talking about how we can meta-learn things that are more general than some of the things that we've seen in the past, and this includes trying to meta-learn a generic optimizer that can be used for any different problem, as well as trying to meta-learn symmetries in neural network architectures. And at the very end, I'll talk a bit about what I think are some outstanding open problems and challenges in the field and, yeah, how we might start to move towards addressing those challenges. Cool. So let's first talk a little bit about adapting to distribution shift, and first is why this actually matters at all. I think distribution shift is a really fascinating problem, and I think that it's really an important problem because of kind of the nature of reality. So the current paradigm in machine learning research is to take a data set, then train a model on that data set and then evaluate that model on some held-out data or something like that. But in reality, things are constantly changing in the world. And so, we see stocks, or supply and demand, or you deploy your system in a different part of the world. 
And as a result, the data that the system is often deployed on is actually different from the data that it saw during training. And I think we need algorithms that can handle the fact that the world is changing rather than just being deployed as a static system. And I also think that meta-learning is actually possibly a very useful tool for trying to address this challenge because meta-learning can train systems that can adapt very quickly. And so, we'll see that a little bit in some of the upcoming slides. Now, if this is kind of a current reality that that's faced, you might ask, well, what does industry do? So when actually people use machine learning in practice, how do they cope with the fact that the world is constantly changing? And you could ask Chip Huyen, who works on machine learning systems, and she says that machine learning systems degrade quickly over time because of concept drift when they're in production. And for online machine learning systems, typically, you just want to update them as fast as humanly possible in order to kind of address the fact that they degrade quickly because of concept drift. And so, I think this is actually an interesting thing to know about because it means that the way that we typically develop machine learning systems is under this kind of train test paradigm and actually, that the way they're being used in practice is this continual training setting, which is a bit different from the way that they were originally intended. And so, maybe that means if we want to build better machine learning systems and to actually do research that's relevant to the real world, maybe we should take into account the fact that they need to be updated constantly over time. Cool. And so, fine-tuning is a really reliable and performant approach for trying to update models quickly over time, but it also has some limitations. So first, it requires you to collect labeled data from the new part of the data distribution, and this can be expensive. It can take time. It may not be available. Can also be computationally expensive, and especially as you're trying to train larger and larger models, the amount of compute you'll need to fine-tune that model will, at some point, become somewhat impractical. And, in general, it's a fairly blunt tool. And if you only want to make a very small change to your model because you notice an error or notice one small thing that changed, fine-tuning isn't necessarily the best tool to make a very precise and small change to your model. And so, yeah, what I'll talk about next is how we might be able to leverage meta-learning to actually address some of these weaknesses of fine-tuning. Cool. So first, we'll focus on domain shift, and this is something-- a kind of distribution shift that we saw in the domain adaptation and domain generalization lectures. And then, after we talk about domain shift, we'll talk about a more general kind of distribution shift that's called concept shift. And so, to recap, in domain shift, you have some sort of categorical domain variable, and this could correspond to different users, to different locations, different times of day, and oftentimes, this sort of domain information can be derived from metadata that already exists in your training data set. And then, what domain shift looks like is you assume that your training data is from this distribution right here and your test data is from the same distribution, but where the only thing that has changed is the distribution of this underlying domain variable. 
And this kind of distribution shift is fairly general, and this is the kind of the shift that we'll look at in this first part, but there are also some things that it cannot capture. Now, one approach, I guess-- so we saw some approaches for handling domain shift in my lecture on domain adaptation and Huashi's lecture on domain generalization. One approach that we didn't really talk too much about is called distributionally robust optimization. And the way that this approach works is you try to form some adversarial distribution over your domain variable and then optimize for the worst-case distribution. So, in particular, you can formulate kind of a adversarial optimization, where you are trying to find a model that does well under the worst-case distribution over domains. And if you have a categorical domain variable, this is actually fairly simple to evaluate. You simply pick the domain that the model is doing the worst on in your training data set and then optimize for the performance on that domain, and then iterate that process. And this sort of adversarial optimization can enable a robustness to different domains. It can also be less pessimistic than adversarial robustness, which is another kind of robustness that you might have heard about. But it can often sacrifice the average performance or the empirical performance because it's actually quite pessimistic about the data that it will be seeing or the domain that it will be seeing at test time. And so, what we're going to try to do is actually kind of formulate an alternative paradigm that is a little bit less pessimistic but still allows us to be kind of fairly robust. And the way that we're going to be doing that is instead of optimizing for the worst-case domain, we're going to try to actually optimize for domain performance after having adapted a small amount to each given domain. And so, specifically, what I mean by that is we're going to assume that we have unlabeled data from the test domain. This could be the unlabeled data from a new user, from a new time of day, or a new place. And so, if you want to, for example, recognize handwriting, this is from the Federated extended MNIST data set. This is all from one user. You might have this batch of unlabeled data from that one user. And then, your goal is to take this unlabeled data, adapt your model to that user, and then infer the labels for all the examples in that batch. And so, this is going to be different from other approaches in a few different ways. First, unlike fine-tuning, this only needs unlabeled data in order to adapt the model rather than labeled data. Second, unlike things like domain generalization and domain adaptation, we're not going to be assuming access to this unlabeled data during training time. We're actually just going to adapt the model on the fly at test time. You really-- the main assumption that this is-- I guess the main two assumptions that it's going to be making is that first, you have domain labels in your training data set, and second, at test time, you have a batch of test inputs from the same group or from the same domain available at once rather than just having a single example. This is a little bit different from the standard machine learning setting, where you just assume that you have just one input at a time at test time. So this is the problem setting. 
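As an aside before returning to the adaptive problem setting just outlined, here is a minimal sketch (not from the lecture) of the greedy worst-domain procedure described above: at each step, find the domain the model is currently doing worst on and take a gradient step on that domain's loss. The linear model, random data, and number of domains are toy placeholders.

```python
# Minimal sketch (not from the lecture) of greedy worst-case (group DRO-style) training:
# evaluate every domain, pick the one with the highest loss, and update on it.
# The model, data, and domain structure below are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_domains, n_per, dim, n_classes = 4, 32, 10, 3
domains = [(torch.randn(n_per, dim), torch.randint(0, n_classes, (n_per,)))
           for _ in range(n_domains)]

model = nn.Linear(dim, n_classes)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    # Evaluate the loss on every domain and pick the worst one.
    with torch.no_grad():
        losses = [F.cross_entropy(model(x), y) for x, y in domains]
    worst = max(range(n_domains), key=lambda d: losses[d])

    # Take a gradient step on the worst-case domain only, then repeat.
    x, y = domains[worst]
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```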
And we kind of refer to this problem setting as adaptive risk minimization, in the sense that, instead of just evaluating risk directly on the examples that you're given, you actually have an opportunity to adapt with a small amount of examples. And then there's the question of actually how to optimize for this and how to actually develop a method that can allow you to adapt with just unlabeled data from a new part of the distribution. And so to do this, we could actually train a model for few-shot adaptation to the different domains in the training data set, and this will just use basically the same meta-learning algorithms that you saw in the earlier parts of the course. The key thing that's different, though, is instead of doing few-shot labeled adaptation, we're only given unlabeled data. And so, we want to be able to adapt with these unlabeled examples rather than a few labeled examples. And so there's a few different ways that you can do this. One is you could use the MAML algorithm. But because you can't compute a standard cross-entropy loss function, you can meta-learn a loss function, such that when you adapt with that loss function on the unlabeled data, you do well on each of the domains in the training set. And so what this looks like is you are going to be meta-learning both the initial parameters, as well as the parameters of some loss function, also represented by a neural network. In the inner loop, you use that loss function to update the network from the initial parameters to get a new network, and then optimize this whole thing end-to-end such that you get good performance on each of the training domains. So that's one way to adapt with unlabeled data. Alternatively, you could also use a black box approach and simply pass all of the unlabeled examples into a neural network that produces some context, and use that context to make predictions for your examples. And this sort of black box approach is actually quite straightforward, because the only thing you need to do to modify a black box approach to work without labels is to just remove the labels when you pass the data as input. There's a few different architectures that you could use for this sort of black box approach. One is the architecture that's shown here, where you just kind of predict this context and pass that into the network. Alternatively, this context could actually be computed in a different way. One really simple way to compute this context would be to just have the context correspond to batch norm statistics. And this would mean that you're basically passing all of your examples in as a batch, and instead of using the batch norm statistics from the training data set, you're going to be computing them on the fly at test time using all of your examples. Cool. Any questions on how this works? Yeah. How do we train a loss function and the parameters at the same time? Because is there, like, a loss for the loss function? Yeah, so the question is, how do we train the initialization and the loss function at the same time? One really critical thing is that we're only going to be training the inner loss function, and the outer loss function is going to stay the same. And so if your inner loop update is something like theta minus alpha times the gradient of L, then this loss function L is a neural network parameterized by phi that takes as input the unlabeled examples. I guess, in this case, there's only three unlabeled examples.
This is the inner loop update, and then we're going to be evaluating this in terms of how well these updated parameters do on the labeled examples, which will be x1 to 3, and the corresponding labels. And this outer loss function will correspond to just a standard cross-entropy loss function, and we could optimize this entire thing over both the initial parameters and the loss function parameters. Cool. Any other questions? Cool. So there's a lot of different experiments in the paper, but I'll mention just a couple of them. The first is the Federated Extended MNIST example that I showed on the previous slide. And in this case, the experiments are all trying to adapt to new users that have different handwriting, only with unlabeled data available at test time from those users. The experiments compare to a number of different methods. One of them is just standard ERM. We could also compare to the group robustness approach that I mentioned before. We can compare it to domain adversarial training, which was covered in Huashi's lecture, and also to a really simple upweighting approach that will just upweight the data from each of the users to be a uniform distribution, so that you're sampling data from them equally. Cool. And then, we can look at performance, both in terms of the average test performance on these new users as well as the performance on the worst user. And this sort of worst-case performance is useful because it will give you a notion of fairness. Are you doing really terribly on some users and much better on other users? And it will also give you a notion of robustness, such that if your distribution changes more towards those worst-case users, what will your performance look like? And if we look at the numbers, first, in terms of average accuracy, we see that domain adversarial neural networks actually give you somewhat better performance than empirical risk minimization, but that we can get around a 5% improvement by actually adapting, in this case, with a context variable, in comparison to just trying to train a single neural network. Then, in terms of worst-case accuracy, we also see around a 5% improvement, as well, over the best existing method, in this case, which corresponded to either upweighting or the domain adversarial training. Now, we can actually qualitatively look at what it's doing here. And so here's one example where-- this is an image that the system is given, and if you ask the neural network trained with ERM, it will tell you that this is a 2. And similarly, if you ask kind of the adaptive risk minimization, the meta-learned model that used a context variable, if we give it these two examples and ask it what the label is, it will also say it's a 2. But if you actually give it a little bit even more context, if you give it all of these unlabeled examples, it can correctly figure out that this actually corresponds to a lowercase a. And the reason why it can do that is it can look at other examples, for example, this example, and realize that this particular user often writes their 2 without a loop, and therefore, this must be a lowercase a rather than a 2. And so essentially, what the system is doing is it's adapting to this particular user, adapting the model to the user, such that it can more accurately make predictions for that user. Cool. One other experiment that I'll mention was looking at adapting to different image corruptions.
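Before getting to that corruption experiment, here is a minimal sketch (not the authors' code) of the black-box, context-based variant described above: a context network summarizes a batch of unlabeled inputs from one domain, and a prediction network conditions on that context. The names ContextNet and PredictionNet, the dimensions, and the toy data are all placeholder assumptions.

```python
# A minimal sketch (not the paper's code) of the black-box / context variant of
# adaptive risk minimization: adapt with an unlabeled batch from one training domain,
# evaluate with that domain's labels, and backprop through both networks end-to-end.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextNet(nn.Module):
    """Maps a batch of unlabeled inputs to a single context vector."""
    def __init__(self, in_dim, ctx_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, ctx_dim))
    def forward(self, x):
        # Average over the batch so the context summarizes the whole domain sample.
        return self.net(x).mean(dim=0, keepdim=True)

class PredictionNet(nn.Module):
    """Predicts labels from an input concatenated with the domain context."""
    def __init__(self, in_dim, ctx_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim + ctx_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))
    def forward(self, x, ctx):
        ctx = ctx.expand(x.shape[0], -1)
        return self.net(torch.cat([x, ctx], dim=-1))

def arm_training_step(context_net, pred_net, optimizer, x_domain, y_domain):
    """One meta-training step: adapt with unlabeled x, evaluate with labels."""
    ctx = context_net(x_domain)            # "adaptation" uses only the inputs
    logits = pred_net(x_domain, ctx)       # evaluation uses the labels
    loss = F.cross_entropy(logits, y_domain)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data standing in for a batch from one training domain.
in_dim, ctx_dim, n_classes = 20, 8, 5
context_net, pred_net = ContextNet(in_dim, ctx_dim), PredictionNet(in_dim, ctx_dim, n_classes)
opt = torch.optim.Adam(list(context_net.parameters()) + list(pred_net.parameters()), lr=1e-3)
x = torch.randn(16, in_dim)
y = torch.randint(0, n_classes, (16,))
print(arm_training_step(context_net, pred_net, opt, x, y))
```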
And so this is using the CIFAR-C and TinyImageNet-C data sets, was trained using 56 different corruptions, and then testing its ability to adapt to images that were corrupted with 22 other corruptions that weren't seen during training. And again, here, we see that the domain adversarial neural network approach is one of the best-performing prior approaches, although it actually really is performing somewhat similarly to standard empirical risk minimization. And in contrast, by actually adapting and training for fast adaptation with unlabeled data, we're able to get around a 2% to 3% improvement in average accuracy and a 7% to 9% improvement in worst group accuracy. Yeah. [INAUDIBLE] when you say corruptions, are there still similar kind of corruptions let's say if I test like ImageNet [INAUDIBLE] is it able to handle a very different kind of distribution shift? Yeah, that's a great question. So first, in terms of the corruptions, we construct more corruptions by also considering the severity of the corruption, as well. And so, I think that's how we get to 78 corruptions here. And then, in terms of being able to handle very different corruptions, I think that it may still help somewhat. But I do think that the improvements will be less than the improvements that you get from corruptions that are a little bit more like the one seen during training. And that's because we're training for adaptation performance on a set of corruptions, and if you give it something-- it has to adapt to something that's very different from that, it won't necessarily generalize. Yeah. Does this loopback [INAUDIBLE] on higher-severity corruptions, as well? Was there an experiment [INAUDIBLE] We didn't specifically have any experiment that was looking at how performance does as the severity changes. My guess would be that it does help more when the severity increases because ERM does much better than these when there's no corruption. And I don't think these methods would help when there's no corruption, of course. So my guess is that it would be better with more severe corruptions, but it's also possible that you might see a bit of a fall-off if it's just too corrupted to be able to get any signal. Cool. So the takeaway here is that we can use meta-learning to adapt to distribution shift, and specifically to adapt to different kinds of domains only using unlabeled data. Now, what about other forms of distribution shift? Specifically, next, we'll look at a kind of distribution shift that's called concept drift or concept shift. And this is when basically, kind of the entire P of y given x distribution can change. And actually, I guess in a lot of the examples that we'll be looking at, P of x is not going to be changing. Only P of y given x is going to be changing. And I should note that when P of y given x is changing, you're going to need some sort of supervision in order to adapt to the distribution shift because P of y given x could change in a lot of different kinds of ways. And specifically, in this case, we'll look at handling this kind of distribution shift in the context of language models, although a lot of the methods that we'll be looking at are applicable beyond language models, as well. And as a motivating example, if you take a version of GPT-3 that was accessed earlier this year, and you ask it, what is the largest English-speaking EU nation? 
It will tell you the answer is the United Kingdom, but, of course, the UK is no longer in the EU, and so this answer isn't correct, and the answer should be Ireland. If you ask it who is Algeria's current president, it will also give you an answer that is out of date. And likewise, if you ask it the club team that Lionel Messi plays for, it will give you something that's out of date, as well. And so, there's a question of what should we do in this scenario. What should we do to try to actually update something like GPT-3 in order to allow it to make correct predictions on things about the world that are changing? Unfortunately, it would be really expensive to have to retrain GPT-3 or even fine-tune GPT-3. It would probably be fairly expensive. And so, what we'd like to be able to do is try to figure out how to keep these large models up to date without having to fine-tune or without having to completely retrain the entire model. And so this is where the kind of framework of editing can come in. What would be really nice is if we could take our base model. If you ask it who is the prime minister of the UK, it will tell you the answer is Theresa May, which was, of course, correct before, but that's also like three prime ministers ago now. And what would be really nice is if we could take this example and tell it that the answer should be Rishi Sunak, and pass that to a model editor, and then get an edited model that can give us the correct answer, including a correct answer to rephrasings of the questions or other things that are in scope like, is Rishi Sunak the prime minister of the UK? Furthermore, we don't just want it to give us the correct answer for inputs that are in scope. We also want to be able to ask it things that are out of scope and unrelated, like who does Messi play for, and still have it give us the correct answer. So we don't want to have it affect things that are unrelated to the edit. And so specifically, to try to scope out this problem a little bit more, maybe your edit example is, who is the prime minister of the UK? Our scope is the space of inputs that are related to that input, and that would include rephrasings of the question. And there are also examples that are out of scope, like why is the sky blue? It's also worth mentioning that there are some examples that are in-scope and out-of-scope that are going to be much harder, like where is Rishi Sunak the PM? This is going to be in-scope, but it's going to be a little bit further and more difficult than just a rephrasing of the question. And likewise, there are things that seemingly are somewhat related, but shouldn't be changed when we make an edit to the model. And these are actually really the most challenging cases, and it's important to-- when we actually get to some of the experiments, we're actually going to be trying to focus on evaluating on these examples rather than the easy examples, like why is the sky blue? Yeah. [INAUDIBLE] Yeah, so what about prompts that aren't in the form of the question? I guess-- so, in principle, yes, you could have some of those things be in-scope or out-of-scope, and you don't just have to have this for a question-answering model. You could have it be for just general language models. And if you have a model that can do both question answering and continuations, then you could have those kinds of things be part of this picture, as well. Yeah. What about [INAUDIBLE] Yeah. So that would also be in-scope, as well. 
And you could also imagine, I mean, getting along the lines of other kinds of inputs. You could also have something like a picture of Theresa May and say, is this person the prime minister? And, in principle, that would also be in-scope, as well, if you're considering models that can take that sort of input. Cool. So this is what we'd like to be able to do. And we can formulate this as a meta-learning problem. And so, there was this really cool paper by Sinitsin et al. in 2020 that framed this problem as a meta-learning problem of trying to actually train a neural network that could be edited in very specific ways. Unfortunately-- and we'll talk about the formulation from that paper. Unfortunately, this kind of specific method in the ICLR 2020 paper wasn't particularly scalable to large language models, and so, in terms of the methods that we'll be covering, we'll be covering some more recent work that is much more scalable. But this was the work that introduced the paradigm of doing this in the meta-learning setting. And so, to frame this as a meta-learning problem, we are going to assume that we have the ability to collect a data set that shows us examples of how editing should be done. And in particular, this data set will contain, first, a descriptor of the edit. This could be an input-output pair, like who is the UK PM? And the answer is Rishi Sunak. It will also include an example of something that's out of scope, and this is going to try to enforce the fact that the model shouldn't change on these out-of-scope examples. And it'll also include examples of in-scope input-output pairs. And so, in particular, if it's given an example like the prime minister of the UK is currently who? It'll be told what the new answer to that question should be. So this edit data set will essentially be something that we'll use to teach a model editor how to edit a model. Collecting this edit data set is going to be a nontrivial effort, although, in general, if we collect a general enough edit data set that has examples of lots of different kinds of things that we might want to do to edit models, then we only need to train a single-- we only need to use it once and collect it once in order to train a single model editor for making all sorts of edits to a large model. Cool. And then once we have the data set, there are a few different ways that we could train one of these model editors. The first approach is going to try to actually change the gradients and actually update the weights of the model. And so, what we'll do is we'll take as input the gradient that corresponds to the gradient of trying to fine-tune the model on that one edit example, and we'll try to transform that fine-tuning gradient into a better update for the model. And so, this is kind of captured by this picture, where if you have your pre-trained model, and you want to make an edit to it, then you compute what the gradient would be if you were to fine-tune on that edit and then pass that gradient into this Model Editor that will then give you an updated gradient that you'll actually use to edit the model. And once you apply this kind of revised version of the gradient to the model, the goal will be for it to give you the correct answer for these edits and also, of course, not affect the model output on other examples. And so, really, the main component of this is just training this single Model Editor right here in the middle that's going to take as input a gradient and output a modified version of that gradient. 
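To make the structure of the edit data set concrete, here is a hypothetical sketch of what one record might look like; the field names and example strings are my own, not taken from the paper.

```python
# A hypothetical sketch of one record in the edit data set described above.
# Field names and example strings are illustrative, not the paper's schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EditRecord:
    edit_descriptor: Tuple[str, str]                # the edit itself, e.g. (question, new answer)
    in_scope: List[Tuple[str, str]] = field(default_factory=list)   # inputs whose answers should change
    out_of_scope: List[str] = field(default_factory=list)           # inputs whose behavior must not change

record = EditRecord(
    edit_descriptor=("Who is the UK PM?", "Rishi Sunak"),
    in_scope=[("The prime minister of the UK is currently who?", "Rishi Sunak")],
    out_of_scope=["Why is the sky blue?"],
)
print(record)
```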
And the way that this is trained is using the edit data set. It will be trained to output the correct prediction on the in-scope generalization example, and it will be told that it should maintain the same distribution of outputs, given the out-of-scope example, as the original model. So there's two terms in the loss function. Yeah. [INAUDIBLE] Oh, there's a question. So the question was, isn't this more expensive than fine-tuning because you're still computing the gradients? So yeah, you do need to still compute the gradient. The one big thing is that we're only going to be applying one update to the model. So we're going to be outputting a modified gradient, such that just one step of that gradient, one update, gives you a good model. And so, it will require fewer update steps than fine-tuning. And second, it's also just going to work better than fine-tuning. And so, by actually training it to do this, it will give you something that more effectively makes targeted edits without affecting out-of-scope examples. [INAUDIBLE] So the question is, is there any analysis on what it's doing to the gradient? Unfortunately, it's a little bit difficult to analyze. I would guess that it is probably increasing the scale to some degree, but it's certainly doing more than that. And generally, it's hard to interpret weights of neural networks, and it's also hard to interpret gradients. So yeah, we weren't really sure how to analyze it in particular. One thing that I will mention that we did in this work is that the gradient of a single example is actually a rank 1 matrix. So if you have a weight matrix, and you compute the gradient of that-- so say you have one weight matrix right here, between layer a and layer b-- the gradient of this weight matrix corresponds to an outer product. So, this isn't great notation, but it's the forward activations and the gradient coming backward from b, which is basically something like dL db. So this is a vector, this is a vector, and it's the outer product of those two things. And so, it's actually a rank 1 matrix. And one thing that's cool that this Model Editor called MEND does is, instead of passing in the weight matrices, it passes the two rank 1 components into the network and outputs a rank 1 or a low-rank update to the model. And as a result, the dimensionality of the inputs and outputs of this Model Editor ends up being much smaller. And so, in particular, if ha and hb are the sizes of these two activations, then this weight matrix is ha times hb, whereas the dimensionality of the two rank 1 terms is ha plus hb. And this means that the dimensionality of the input is going to be much, much smaller than if you were to input and output the entire weight matrix. Cool. So this is the first approach to model editing. That's actually to try to update the weights or change the weights of the model directly. Before we go over some of the results, I'm also going to show a second approach to model editing that tries not to update the weights of the model. And the motivation for the second one is that the first approach ends up being somewhat difficult to scale to large numbers of edits, because it's actually fairly difficult to figure out how to change the weights in a way that will make these sorts of targeted edits. And so, the second approach is going to try to take a more semi-parametric approach.
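Before moving on to that second approach, here is a quick numerical check (not from the lecture materials) of the rank-1 gradient fact above, using a single linear layer; the loss and sizes are arbitrary.

```python
# A quick numerical check (not from the lecture materials) of the rank-1 gradient fact
# used by MEND: for a single example, the gradient of a linear layer's weight matrix
# equals the outer product of the gradient at its output and the forward activation.
import torch

torch.manual_seed(0)
ha, hb = 5, 3                          # input and output sizes of the layer
W = torch.randn(hb, ha, requires_grad=True)
x = torch.randn(ha)                    # forward activation entering the layer
b = W @ x                              # layer output (no bias, single example)
loss = (b ** 2).sum()                  # any scalar loss downstream of b
loss.backward()

dL_db = 2 * b                          # gradient of the loss w.r.t. the layer output
rank1 = torch.outer(dL_db, x)          # outer product: shape (hb, ha), rank 1
print(torch.allclose(W.grad, rank1))   # True: dL/dW is the rank-1 outer product
print(ha * hb, "entries in the full gradient vs", ha + hb, "in the two rank-1 factors")
```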
And so, we'll have our base language model, and instead of updating the weights of this language model, it's actually going to be fully frozen. And instead, we're going to try to form a wrapper around that language model such that the wrapped language model gives you the behavior that you want after you've applied the edits. And so, this approach will have a few different components. The first will be a memory of edits that stores all the edits that you want to apply to the model. The second will be a scope classifier that will classify whether or not an edit and an input are within scope. This is actually going to be somewhat similar to something like matching networks or prototypical networks. We're actually going to be comparing the edits that you want to make and the inputs and seeing if they're similar or not. And then, additionally, you can train a counterfactual model that will make predictions if the edit is being applied to the model. And so the way this works at test time is, first, you can populate the edit memory with the edits that you want to make to your model. This is the one slide that I didn't update with the correct PM because the PM is changing really quickly. Then, when we get a new test input, what we're going to do is we're going to first compare that test input to the edits in our memory bank. The scope classifier will tell us that this edit is in-scope. And then, because there is an edit in-scope, we'll then pass the edit as well as the input to the counterfactual model, which will give us the revised prediction for the input. Alternatively, if we get an example that's out of scope, then we'll just pass that directly-- out of scope for all the edits according to the scope classifier, then we'll just pass that directly to the base model. And so, really, there's just two components that we're training here. First is a scope classifier that will tell us whether or not an example and an edit are-- whether an example is in-scope for a given edit. And then the second is the counterfactual model, which will train to make predictions after applying the edit for a given example. Yeah. [INAUDIBLE] Yeah, so one thing that you could do instead of having a counterfactual model is essentially prepend the edit to the prompt of the base model. And one potential advantage that this might have is that it will leverage kind of the power of this really large base model. In practice, we found that that didn't seem to work that well, but perhaps you could fine-tune the base model in order to be good at that sort of prompted behavior. And the counterfactual model seemed to do reasonably well, but it's something that-- I do think that trying to leverage the base model more I think would be kind of an interesting direction for future work. Yeah. [INAUDIBLE] and you need a new model and say that the edit size becomes larger than the model suffice. Yeah, so the question is this scalable, especially as you want to apply a large number of edits, and do you need the counterfactual model to be large? So first, yeah. So this will-- as the number of edits increases, you will still need to store all of those edits in memory. Fortunately, you can apply the scope classifier in parallel to all of those edits, and also in practice. For the edits, we actually only need to store the embeddings of the edits. We can just cache that computation. So there is things that you can do to make it computationally fairly cheap. 
The other thing that you could do is, if you do start to accumulate a very large number of edits, at that point, it may just make sense to fine-tune your model, because at that point, you do need to actually change the model fairly significantly-- and hopefully you don't accumulate edits that fast over time. And then, in terms of the counterfactual model, in practice, we do kind of pretrain it with a BERT style model. And so, it won't be necessarily as large as the base model, but it could still be somewhat large. And with a BERT style pretraining, we found that it could work well even with edit data sets that are far smaller than the data set that the base model was trained on. Yeah. So with this type of computation, if we see something that is within the scope, we just completely ignore the base model, right? Is it possible to create something that's like a container in which, instead of just sending it to a separate counterfactual model, the counterfactual model just edits the output of the base model? Kind of like with the edits. Yeah. So one suggestion that came up before is we could possibly prepend the edit to the base model's prompt. Alternatively, you could kind of make a prediction from the base model and then pass that into the counterfactual model. And that might still be leveraging all the power of this base model, and then it would only be editing the output of this base model. And so, I think that for things like question-answering, that isn't necessarily that important, but I think that if you're in a setting where the task is to write long-form text, then something like that may actually make a lot more sense, because generating long-form text is much harder than answering a question. And also, it may be that you only need to edit a small part of that long-form text. And so, yeah, I think that generally, different directions for leveraging the base model are interesting, and one way to do that possibly may be to take the output of the base model and pass it into the counterfactual model. One other small thing that I'll mention that's kind of a benefit of having the two be decoupled is that, by having them be decoupled, this does mean that you can actually plug in any base model into this architecture. And we found experimentally that even if you train this whole system with one base model, you can kind of apply it to lots of different base models. And that means that even if you then retrain a base model at some point, you could still reuse the editor across multiple different base models. Cool. So in terms of the experiments, I'll highlight just two experiments. The first was on question-answering, just like all the examples we've been looking at so far. And there were two metrics that we looked at in order to evaluate how well the editor was performing. The first was edit success, which corresponds to accuracy on in-scope examples. And then the second was drawdown, which corresponds to how much the accuracy drops on examples that are out of scope for the edit. In order to combine these two things into one metric, we can just measure edit success minus drawdown. And so, the best that you could do is edit success of 1 and a drawdown of 0. And so, then subtracting the two would give you 1. And then, as this score goes down, that means that either your edit success is dropping or your drawdown is increasing. Cool. And then, we're going to evaluate this as we increase the number of edits that we're going to be applying at test time.
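Before the results, here is a minimal sketch (my own simplification, not the paper's code) of the SERAC-style test-time routing described a couple of slides back: a memory of edits, a scope check, and a choice between the counterfactual model and the frozen base model. The scope scorer and the two models below are toy placeholders.

```python
# A minimal sketch (my own simplification, not the paper's code) of SERAC-style routing:
# keep a memory of edits, check whether a new input is in scope for any stored edit,
# and route it either to a counterfactual model (with the matched edit) or to the
# frozen base model. The scope scorer and both models here are toy placeholders.
import re

def words(s: str) -> set:
    return set(re.findall(r"[a-z]+", s.lower()))

def scope_score(edit: str, x: str) -> float:
    # Toy stand-in for the trained scope classifier (word overlap); the real one is a
    # learned network, similar in spirit to matching / prototypical networks.
    a, b = words(edit), words(x)
    return len(a & b) / max(len(a | b), 1)

def base_model(x: str) -> str:
    return f"[frozen base model's answer to: {x}]"

def counterfactual_model(x: str, edit: str) -> str:
    return f"[answer to: {x}, conditioned on edit: {edit}]"

class EditedModel:
    def __init__(self, threshold: float = 0.3):
        self.edit_memory = []
        self.threshold = threshold

    def add_edit(self, edit: str):
        self.edit_memory.append(edit)

    def __call__(self, x: str) -> str:
        if self.edit_memory:
            best = max(self.edit_memory, key=lambda e: scope_score(e, x))
            if scope_score(best, x) >= self.threshold:   # in scope for some edit
                return counterfactual_model(x, best)
        return base_model(x)                             # out of scope: untouched base model

model = EditedModel()
model.add_edit("Who is the UK PM? The answer is Rishi Sunak.")
print(model("Who is the prime minister of the UK?"))   # routed through the counterfactual model
print(model("Why is the sky blue?"))                    # routed to the frozen base model
```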
And we'll be, in this case, editing a T5-Large model on the type of question-answering things that we've been seeing so far. Cool. And so, if we compare the three different editing approaches-- I guess I didn't include fine-tuning on here, although fine-tuning generally kind of does worse than all three of these methods. First, we see that with a single edit, both MEND and SERAC-- both the approach that was editing the weights and the semi-parametric approach-- are able to do fairly well. But as you increase the number of edits, really, these approaches that try to edit the weights of the model are much less successful at doing so. And really, only SERAC is able to achieve a high edit success and a low drawdown as you increase the number of edits. Cool. And then, one other experiment that I'll mention is you don't necessarily have to just edit factual knowledge and so forth. And so, one experiment that we did was editing the sentiment of a model. And so, we trained it to edit the sentiment on a variety of different topics. And then, at test time, if we ask one of Facebook's prior public models, what do you think about vaccines? It gives you a response that is very negative about vaccines. So everything highlighted in red is negative. And then, if you actually try to edit this model with the latter SERAC approach, you can get something that is-- because we're telling it to be more positive-- as a result, you see that it's much more positive about vaccines. And it's not just positive in a superficial way. It also says things like I think they're a good way to prevent infectious diseases. I think it's good for people to be informed about the risk of their health, and so forth. And so this sort of editing approach, I think, is actually fairly general. And as long as you have a data set that can tell it how you want to make edits, then you could apply it in a variety of different ways to these very large models. Cool. So the takeaway of this first part is that we can use meta-learning to enable adaptation and fine-tuning of these large models, either with only unlabeled target data or with a high-level description or a single example of a change that you want to make. And these are two things that are, I think, very difficult to do just with vanilla fine-tuning. Cool. Now, I also want to talk a little bit about meta-learning across more general task distributions. And we're a little bit low on time, so it's possible I might just cover the first thing and skip the second thing. This work is really about trying to push the boundaries of meta-learning and trying to see if we can get something that's much more general than the priors that we've been looking at previously. And this is a paper that came out on trying to train generic optimizers. And it's a really cool paper, and really, the goal of their work was to try to get an optimizer that works well for any problem and any architecture without having to tune the hyperparameters of that optimizer-- so, without having to tune learning rates or momentum or anything else of the optimizer. And we'll get to the experiments at the end, but the short version is they largely achieve this goal, which I think is really exciting. So it has four central components. The first is a neural network architecture that predicts weight updates. The second is an algorithm for training that neural network. The third is a very large and broad set of optimization tasks.
This is really important in order to get the generic part of this. And then, the last is a lot of compute. Cool. So let's step through each of those different components. So the first is the architecture. The architecture is actually somewhat complicated, but once you break it down, it's fairly simple. So first, for every single weight, for every single parameter in the network that you're optimizing, there is a really small neural network. It's really a tiny, fully connected network with two hidden layers that each have four units, and there's one of these fully connected networks for every single parameter in your network. And this is the network that's going to be outputting the parameter update for that weight. Now, where do the weights of this network come from? So there is a separate neural network, that's an LSTM, that is acting over the parameters in the weight tensor. And this is going to be generating the weights of all of the MLPs for each of the parameters. And so, this LSTM-- so for every weight or, I guess, for every tensor, so for every weight matrix, for example, you're going to have an LSTM that acts over those weights. And that LSTM is going to be outputting the weight matrices of these really tiny MLPs for all of the parameters in that tensor. The input features of this LSTM correspond to a bunch of different things. It includes the mean and variance of each of the parameter values in that tensor. A moving average of the gradient and the squared gradient, that's information that Adam uses. Also, things that indicate the fraction of training that's been completed, as well. So the input is all of those different features, the output is the weight matrices of the small MLP for each of the parameters. The last thing is this global aggregation. The LSTM is also going to output a global context. This global context vector is going to be pooled across the weight tensors and then reinput into each LSTM. And so, this will just help the LSTMs basically stay on the same page. Cool. So that's the architecture. How do we actually train this architecture? So the meta objective is the training loss at the end of the optimization for each task. And so, basically, you'll have some initial parameters for the task. You'll then use this architecture to update the weights many times up until, I think, around 200,000 steps. And then, you'll get this final parameter, and the meta loss will be how good is this final parameter at the task that it is trying to do? And the objective will be this final loss, but, of course, averaged over all of the different optimization tasks that it's being meta-trained on. Now, the meta-optimizer is going to be using evolution strategies. This is the approach that Yunho covered in his lecture using full rollouts. They're not going to be doing any sort of truncation. And so this is going to be a pretty expensive optimization. So they'll take this final loss and then update the weights of the LSTM that are used to make all of these different updates. Then, they also found a curriculum to be extremely helpful. And what they did is they first started with optimization problems with a smaller number of training steps and with a smaller problem size and then gradually moved towards longer training times and larger problems. How do you measure problem size? They estimate the problem size. This was just the time required to do one forward pass in the network. And so this is an indication of how big is the network or how complicated is the network. Cool. 
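Before getting to the tasks and compute, here is a drastically simplified sketch of the learned-optimizer idea just described; it is nothing like the real VeLO architecture (no per-tensor LSTM, no global aggregation), just a tiny MLP mapping per-parameter features such as the gradient and its moving averages to an update, with placeholder feature choices and an arbitrary output scale.

```python
# A drastically simplified sketch of the learned-optimizer idea described above --
# not the actual VeLO architecture. A tiny MLP maps per-parameter features
# (gradient, moving averages, training progress) to an update for each parameter.
import torch
import torch.nn as nn

class TinyLearnedOptimizer(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 4):
        super().__init__()
        # Two hidden layers of four units, applied independently to every parameter.
        self.mlp = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def step(self, params, grads, m, v, progress):
        # params / grads / m / v are flat tensors of the same shape; progress is in [0, 1].
        feats = torch.stack([grads, m, v, torch.full_like(grads, progress)], dim=-1)
        update = self.mlp(feats).squeeze(-1)
        return params - 0.01 * update        # 0.01 is an arbitrary output scale

# Toy usage on a quadratic: apply a randomly initialized learned optimizer to ||theta||^2.
# (In the real thing, the optimizer's own weights are meta-trained with evolution strategies.)
opt_net = TinyLearnedOptimizer()
theta = torch.randn(10)
m, v = torch.zeros(10), torch.zeros(10)
for t in range(5):
    grad = 2 * theta                         # gradient of ||theta||^2
    m = 0.9 * m + 0.1 * grad                 # moving average of the gradient
    v = 0.99 * v + 0.01 * grad ** 2          # moving average of the squared gradient
    with torch.no_grad():
        theta = opt_net.step(theta, grad, m, v, progress=t / 5)
```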
And then, lastly, tasks and compute. So they constructed on the order of millions of tasks with lots of different architectures, including MLPs, ConvNets, transformers, RNNs, autoencoders, and also other learned optimizers. For each model family, they varied the training data set, loss function, initialization strategy, and various hyperparameters, like the hidden layer size, the depth, the activation functions, and so forth. And they also did various forms of task augmentation, where they did things like reparametrizing the weight tensors-- for example, dividing the weight tensor by a certain value, having that be the weight tensor, and then having a constant be multiplied after that-- delaying the gradients, changing the floating point precision, and other forms of task augmentation. And then, computationally, as we mentioned on the previous slide, they're doing full rollouts, and they are optimizing this with evolutionary strategies. And so, they estimated that it took around 4,000 TPU months to meta-train the optimizer, which is rather expensive. So let's get to some of the results. Their first evaluation was on a set of 83 canonical tasks, and this is measured in terms of the number of update steps required to get the same performance as a version of Adam where they tuned the learning rate of Adam. And so the red line shows this versatile learned optimizer, VeLO. And first, we can see that on 50% of the tasks, VeLO is more than four times faster than the learning-rate-tuned Adam. And on basically all of the tasks, it is actually the same speed or faster than Adam in terms of the number of update steps. And they also compare, not just to Adam but also to these other optimizers, including optimizers for which they tune the hyperparameters, like OptList and Shampoo, and also prior methods that are hyperparameter-free, for which there didn't need to be any tuning. So this result, in my mind, is really impressive and pretty exciting-- that you could actually, on such a large number of tasks, outperform Adam. What I think is even more interesting and cool is actually looking at how well it's able to optimize things that were out of distribution. So they looked at a lot of different out-of-distribution tasks. Here, they compared to a few different methods, although I generally tend to focus on the comparison to Adam, because I think that's what people use the most, at least in terms of research. And they evaluate it on NeRF training. NeRF is-- I mean, it's not necessarily a different kind of architecture, but it's a very different kind of optimization problem than what it saw during training. And we can see that, despite the fact that this is an out-of-distribution task, or at least seemingly out of distribution, the optimizer is able to optimize faster than Adam, so you can compare the red curve and the blue curve. The blue curve is actually a tuned version of Adam-- this is kind of the best out of 14 different hyperparameter settings of Adam. They also looked at an MLP-Mixer architecture, which is just a different architecture than anything that's seen during training. And we also see improvements over Adam, so red is faster than blue. They also looked at object detection-- so this is this kind of RCNN architecture-- as well as the decision transformer, and we see that it's better than SGD on object detection and kind of similar to Adam on the decision transformer. Cool. Any questions about the results? Yeah.
[INAUDIBLE] For the object detection? Yes. I'm not fully sure. I would guess that they chose to use SGD because that's the default optimizer that's used for this, but I'm not fully sure. Cool. So they also included a number of experiments showing settings it didn't work in, although, in general, I felt like the results were really impressive. The first caveat here is compute. Even though it's faster in terms of the number of updates, there is a 10x computational overhead over Adam. And so, it's more expensive to run the LSTM compared to running Adam. That said, compute is often dominated more by gradient computation than by the optimizer computation, and so this may not be that big of a deal. You may still get wall clock speed-ups. And second, they had some experiments that were looking at the performance with varying batch sizes. And you can see that it generally does well with very large batches, and this means that if you're worried about wall clock time, you can still get actually a much better wall clock time than Adam by just using a larger batch size, because when you have a larger batch size, your compute will be even more dominated by the gradient computation. They also found that the optimizer doesn't do well at optimizing very large models, specifically models that had more than 500 million parameters. And so, if you're in the business of training large language models, this may not be the optimizer for you. They also found that it had worse performance if you required more than 200,000 update steps, because they only trained for up to 200,000 update steps. And lastly, they found that in reinforcement learning settings, specifically policy gradient and evolutionary strategy optimizations, it was much worse than Adam. This isn't too surprising, because it wasn't trained on these kinds of tasks, and the gradients that you get from policy gradient and ES on reinforcement learning problems are going to be very different from the gradients you typically get from gradient descent. I guess maybe the other consideration here is whether it's worth spending the 4,000 TPU months, I think it was, on training this optimizer. In principle, if it kind of speeds up a lot of future experiments by being a single generic optimizer, then I think that there is probably a case for it being worth it, but it's always, I think, worth weighing the computational costs and so forth. Cool. So the takeaway here is that meta-learning can produce generic optimizers. We've seen some results before that have moved towards optimizers that generalize well, but we certainly haven't seen something that's kind of truly generic at this level before. Cool. We're a little bit low on time. I think I'll just briefly mention some of this work so that we still have time to go through some of the open challenges. So the second kind of thing that I think you can meta-learn over, in addition to the optimizer, is the architecture and the symmetries that are built into an architecture. One example of the symmetries built into an architecture is convolutions, which give you 2D translation equivariance. And things like convolutions are great when we know the structure and we know how to build it in. But there are also scenarios where we don't know what the structure should be or we don't know how to build it in.
And so, there's this question of whether we might be able to use meta-learning to recover the equivariant or invariant structure underlying a data set. For example, could you recover convolutions if you're given translationally equivariant data? You might ask if MAML can already do this, and MAML can learn some equivariant initial features, but that equivariance may not be preserved through the gradient update. And so, really, the goal here is to try to decompose the weights into an equivariant structure and the corresponding parameters, such that in the inner loop, you can update only the parameters and try to retain the equivariance and keep that fixed. And so, we can walk through an example of convolutions, where if you look at a 1D convolution, you get something like this, and you can actually represent a 1D convolution as a fully connected layer. And if you do that, it looks like this, where the weights are copied multiple times. And from this perspective, you can think about trying to reparametrize this weight matrix into, first, the underlying filter parameters, a, b, and c, and second, a matrix that indicates how the weights should be shared or how the weights should be tied. And kind of the product of this sharing matrix and the underlying filter parameters will give you a weight vector that you could reshape into the corresponding fully connected weight matrix. And you can kind of intuitively think of the sharing matrix as capturing the symmetries and the vector v as capturing the underlying shared parameters. Now, this was an example for convolutions, but it turns out that this sort of decomposition can directly represent this decoupled equivariant sharing pattern and filter parameters for a wide range of symmetries, specifically for kind of all G-convolutions for a finite group G. There are some symmetries that this can't represent, but we won't have time to get into that here. Cool. And so, once you have this decomposition of the weight matrix, you can basically, in the outer loop, learn the equivariance and possibly the initial parameters, and, in the inner loop, only update the parameters while keeping that sharing matrix fixed. And so, if you do this, you can try to actually see if you can recover things like convolutions from translationally equivariant data. And so, specifically, here, we're going to try to recover 1D convolutions by giving it examples of random functions that have this 1D translation equivariance. And if you do this-- this is showing the mean squared error on random held-out tasks, and this is if you do MAML with a fully-connected network, with a locally-connected network, or with convolutions. And, of course, convolutions do the best, because that reflects the underlying symmetry of the tasks. And if you use this approach with a fully connected network, it's actually able to do just as well as convolutions, because it's able to recover the structure. And so, specifically, if you look at the recovered weight matrix from this sort of approach, where it's meta-learning the sharing matrix, you see that it actually gives you exactly the structure of a 1D convolution. Cool. And then, there's a question of, well, maybe you can also do better than convolutions in scenarios where the structure isn't exactly translation equivariance, and so, you can give it problems that have partial translation symmetry.
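Before the partial-symmetry results, here is a small construction of my own (not the paper's code) showing the sharing-matrix decomposition for the 1D convolution example above; the input length, filter size, and filter values are arbitrary.

```python
# A small sketch (my own construction, not the paper's code) of the decomposition above:
# a 1D convolution written as a fully connected layer whose weight matrix is
# W = reshape(S @ v), where v holds the filter parameters and the binary sharing
# matrix S encodes which entries of W are tied together.
import numpy as np

n_in, k = 6, 3                        # input length and filter size (valid convolution)
n_out = n_in - k + 1
v = np.array([1.0, 2.0, 3.0])         # the underlying filter parameters a, b, c

# Sharing matrix: one row per entry of the (n_out x n_in) weight matrix,
# one column per filter parameter; S[row, j] = 1 if that entry is tied to v[j].
S = np.zeros((n_out * n_in, k))
for i in range(n_out):
    for j in range(k):
        S[i * n_in + (i + j), j] = 1.0

W = (S @ v).reshape(n_out, n_in)      # weight-tied fully connected matrix
print(W)
# Each row is the filter [1, 2, 3] shifted by one position -- exactly a 1D convolution.
# In the meta-learning setup above, S is learned in the outer loop and v in the inner loop.
```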
So here, k is going to correspond to the rank of a locally-connected layer, and so k equals 1 corresponds exactly to convolutions, where k equals 2 and k equals 3 has less symmetry. And here, we see that this approach is able to recover a solution that's much better than convolutions by actually recovering something closer to the true structure in the data. And likewise, you can try to actually give it more symmetries than just translation, like rotation and reflection. Here, we actually started with a convolutional network and tried to learn symmetry on top of that. And you can get, again, something that's better than convolutions by recovering that sort of structure. These are all fairly simple problems, but I think that it's still conceptually pretty cool to think about how you might try to meta-learn these kinds of architectures from data. And maybe if you were able to kind of-- maybe if you did scale up these kinds of approaches with more compute, then we might be able to actually do this on a much larger scale, rather than these simple 1D convolution problems. Cool. So the takeaway is that we saw how meta-learning can produce a generic optimizer, and we saw some preliminary evidence of meta-learning being able to capture equivariances in neural network architectures. Cool. I'd like to finish by talking about some open challenges. So I'll group these into a few different categories. So the first category, I think, of open challenges is trying to address some of the problem assumptions that we often make in meta-learning problems and multi-task learning problems. And the first is generalization. So this actually kind of came up earlier when we were talking about trying to generalize the corruptions that are very different from the corruptions that we saw during training. And one specific instantiation of this is like a long-tailed problem, where you want to try to be able to generalize to the tail of the distribution. In this case, the-- in few-shot learning, in principle, we should be able to kind of meta-train on this part of the distribution and then do few-shot learning on the tail of the distribution. But these few-shot tasks on the tail of the distribution may be from a different distribution than the task that we saw during training. And so, there are some examples of generalization to the tails of distributions in dermatological diseases and also kind of the adaptive risk minimization example we saw before. But it's still, I think, an open challenge to really truly tackle these kinds of long-tailed situations. And I think that there may be perhaps some hints that come from some of the robustness literature and perhaps trying to combine those sorts of ideas with things like meta-learning. Another thing that's-- another challenge that comes up is so far, we've really been looking at meta-training over tasks that are from a single modality. And in contrast, when people leverage previous experience, we have not just one modality of data, but we have lots of modalities, like tactile feedback, and language, and social cues, and we're able to merge all of that prior experience into the knowledge that we have and leverage, for example, cues from language that we've read about when making visual decisions, and so forth. And so, I think it'd be really interesting to try to learn priors across multiple different data modalities, rather than trying to learn a prior that works well for a single modality. And there are some challenges and opportunities here. 
Different modalities often have different dimensionalities or different units, but they can also carry complementary forms of information, and that might be quite useful when trying to actually leverage that prior information. There's some kind of, I think, initial works in this direction, including the Flamingo paper that Eric covered in his lecture. Also, the Gato paper that tried to train a single agent on lots of different modalities. But I think that there's still really a long way to go in terms of trying to capture all of this rich prior information when learning new tasks. And then, lastly, beyond generalization and multimodality, one question that I think has come up quite a bit in this course is actually trying to understand when multi-task learning might help versus when you might just be better off trying to learn a specialized model. And so, there's all sorts of questions around algorithm selection and model selection that I think are still fairly wide open to explore in future work. Beyond building better algorithms, I think that also a big part of research and a big part of research that fuels progress in machine learning AI is better benchmarks. And I think that we need benchmarks that challenge current algorithms to find common structure. And arguably, the kind of learned optimizer work that we just talked about was fueled by a lot of compute, of course, but also, a benchmark that had a much broader set of optimization tasks than was previously considered. And we also want benchmarks that reflect real-world problems that will have an impact on the world. And so, we've seen some steps towards some good benchmarks over the past few years in various domains, although I think that there's still really a lot of progress that can be done here. And I especially think that a lot of benchmarks primarily look at computer vision or NLP domains, and there's all sorts of domains outside of images and text that are extremely useful, impactful, including things like satellite imagery or molecules. And I think we often don't see benchmarks that do a good job of covering those kinds of problems, as well. Cool. And then, lastly, beyond addressing problem assumptions and benchmarks is also just trying to improve the core algorithms. We saw in the learned optimizer work that bi-level optimization can be very expensive, and so perhaps there are ways to actually make that cheaper and approaches, both in terms of reducing computation, reducing memory, that make it possible to do things like that with much less compute. And also, I think that there's always a lot of room to develop a better theoretical understanding of these different kinds of approaches, which may also eventually then kind of fuel advances in the algorithms themselves, as well. And then, beyond these, also, I'm sure that you found challenges in your own homeworks and final project. And so, this list certainly isn't complete. It's just some of the things that I think are particularly interesting directions for future investigation. And then lastly, I'd like to end on a slightly kind of bigger picture for you, which is that I think that still, if you think about some of the biggest milestones, or at least what a lot of people think of as the biggest milestones in AI research, you think of things like IBM Watson, or the AlphaGo system that was able to beat a champion in AlphaGo or in Go. And all of these systems are really specialized for one particular application, or game, or problem. 
And even things like GPT-3 are really, I think, pushing the boundaries in terms of generality, but they're still specialized to this domain of text. And, in contrast, we know that humans are much more general. They don't limit themselves to language. They don't limit themselves to playing the game of Go from day one. And so, I think that the-- yeah, we still have, I think, a long way to go in terms of the field of artificial intelligence, of moving towards things that are more general, and have broader notions of intelligence. And in this course, we've covered, I think, a few different things that are, I think, steps towards building more general systems, like learning multiple tasks in a single model, leveraging previous experience with learning new things, leveraging unlabeled prior data or data from different domains, as well as trying to learn continuously over time, rather than just trying to learn one thing. And so, there's, of course, I think, bits and pieces that are still missing towards getting artificial intelligence that can operate at the same level that humans can, but I think that hopefully, through this course, you're much better equipped at trying to understand what might be missing for building these kinds of systems. Cool. That's it. I'd like to thank all of you for a really awesome quarter, but thank you, everyone, for being so engaged in the lectures and, yeah, for a great quarter.
Stanford CS229M, Lecture 16: Implicit regularization in classification problems
OK. Hi, everyone. Yeah, let's get started. So I guess today, we're going to talk about-- continue to talk about the implicit regularization. So last time we have talked about the implicit regularization of initialization, and today-- this is last lecture. Actually, last week we-- in the last two lectures, we have talked about implicit regularization of initialization, and today we're going to have two parts. So one part is we continue with the implicit regularization, and this is a better characterization in certain cases, as I will describe more. And another is that we're going to talk about classification problem. In all the past few examples, we are talking about a regression problem. And it turns out that for classification problem, the behavior is a little bit different. And instead of converging to some minimum norm solution, you converge to max margin solution, which is, in some sense, similar, but not exactly the same to the regression case. OK? So with this lecture, we're going to conclude the discussion about the implicit regularization of initialization, and then next lecture we're going to talk about the stochasticity. And that will be the last lecture about the implicit regularization. So today we're going to have-- so the first part-- this is number one, number two. So the first part, we're going to talk about a more precise characterization, certain characteristics about the implicit regularization effect of initialization. You can see exactly how the initialization influenced the regularizer. And to have some preparation, for today's lecture we're going to talk about the so-called gradient flow. I was trying to avoid this notion in the past, but I think the spirit has shown up in the past as well. So basically this is gradient descent with infinitesimal-- gradient descent with infinitesimal learning rate. And the reason why this is useful is because in certain cases we have infinitesimal where you can ignore some of the second order effect from the learning rate. And so this is just a mixed analysis. It's much simpler. You don't have to say how small the number is. You don't have to deal with the second order effect because the second order effect is just literally zero. There's no second order effect. And the way to do this is that it's actually also kind of a pretty clean formulation of optimization, even though it's continuous time. So what you do is you say you have a loss function. Let's say Lw is the loss function. So if you do gradient descent, then what you do is you say you take wt plus 1. And now I'm using the parentheses for time because I'm going to use that for the continuous time. So wt plus 1 is equal to wt minus eta times the gradient of the loss at time t. This is what you will do with gradient descent. And if you scale the time by eta, what I mean is that now, currently, we will do gradient descent every time you increase the step counter by 1. So before the time is t, and now the time is t plus 1, right? And now suppose you don't do that. You change the time scale. You say, every update I only advance the time counter by eta instead of by 1. So what I'm going to get is that I get w of t plus eta is equal to wt minus eta nabla L wt. And these two process, effective, are the same. It's just that the time, the unit of time, changed by a factor of eta or 1 over eta. And now if you scale the time, then you can take eta to go to 0. So take eta to go to 0, and then this becomes a differential equation or kind of like a continuous process. 
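As a quick illustration of this limit (a toy quadratic loss, not part of the lecture), gradient descent run for t/eta steps with step size eta approaches the gradient-flow solution as eta shrinks:

```python
import numpy as np

# Sketch: for L(w) = 0.5 * lam * w**2, gradient flow w'(t) = -grad L(w(t))
# has the closed form w(t) = w(0) * exp(-lam * t).  Gradient descent with
# step size eta, run for t/eta steps, converges to it as eta -> 0.

lam, w0, T = 2.0, 1.0, 1.0

def gd_at_time_T(eta):
    w = w0
    for _ in range(int(T / eta)):
        w -= eta * lam * w          # one gradient step on L(w) = 0.5*lam*w^2
    return w

exact = w0 * np.exp(-lam * T)
for eta in [0.1, 0.01, 0.001]:
    print(eta, abs(gd_at_time_T(eta) - exact))   # error shrinks with eta
```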
So you can write this as wt plus dt is equal to wt minus eta gradient L wt. I guess depending on what kind of community you come from, you can also read this wt dot, right? This is a derivative, with respect to-- I guess or I think here you also replace eta by dt. This is how you take the eta to go to 0. And this is effective saying that the gradient-- so that the derivative of w respect to t, which we denote by w dot t is equal to minus gradient L at w at time t. Where w dot t, this is just a derivative of w with respect to the time t. And in some sense, this allows us to ignore the eta squared term because eta squared here becomes dt squared, which is 0 compared to dt. So that's why this is useful. This will be useful for us. In some sense, this is mostly to simplify the equation. All the technical meat, in some sense, are the same. It just makes the analysis cleaner. And for the next two examples, in both of the examples, I'm going to use this gradient flow formulation for gradient descent. OK? So now let's talk about the model we're going to discuss. So the model is a variant of the last lecture, and there are some reasons for changing the model a little bit. I'm going to discuss that, but it's not super important. So the model we're going to do is that you have some quadratically parameterized linear model in some sense. So we have two parts. Let me write it down. So you have some vector w plus, and you take this o dot 2 minus w minus o dot 2, transpose x where we use this notation. xo dot 2 just means xo dot x. And the dot, this dot o, is this entry wise product. So w squared minus w minus squared transpose x. And w and w minus, these are both vectors in rd. And you can write w as the concatenation of w plus and w minus as the parameter. And so basically, this is very similar to what we did last time. So last time we had something like f beta of x, which is beta over dot beta transpose, right? So basically, now you have a negative term in it instead of just positive. And the reason is there are two reasons. So there are two benefits compared to last time. So they are not super important, but I know why you mention it. The one thing is that it's at beta, so this model last time can only represent positive linear combination of x. And now this fwx can represent any linear model. Right? Because before if you take the entry wise product of w and beta and beta, it's always positive and non-negative so you can only have a non-negative linear combination of the coordinates of x, and now it can have negative coordinates. And another benefit is that-- now if you initialize-- if you initialize w plus 0, I times 0 to be equal to w minus at time 0 to be the same thing. Then, as you can see, fw0x is equal to 0. And I guess for every x, it's just because the positive part cancel with the negative part. And I guess you have seen these kind of things before. This is mostly for convenience. So this will make the analysis even more convenient because initialization has zero functionality, and we have-- kind of use this for the NTK, and this will be useful for our analysis. And actually, what we are going to do today is that with this thing, we're going to see that if you change initialization, actually you're going to get different regularization and you can precisely characterize, how does the regularization depends on initialization? And in one of the cases, it will be the NTK. In the other case, it will be similar to what we discussed in the last time. So yeah, maybe I should mention that. 
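A small sketch of this model in code (dimensions are arbitrary): with the symmetric initialization w_plus(0) = w_minus(0) = alpha * 1, the model computes the zero function, since the two squared terms cancel for every x.

```python
import numpy as np

# Quadratically parameterized linear model from the lecture:
#   f_w(x) = (w_plus**2 - w_minus**2) @ x,   w = (w_plus, w_minus).

d, alpha = 5, 0.1
w_plus = alpha * np.ones(d)
w_minus = alpha * np.ones(d)

def f(w_plus, w_minus, x):
    theta = w_plus**2 - w_minus**2   # the effective linear coefficients
    return theta @ x

x = np.random.randn(d)
print(f(w_plus, w_minus, x))         # exactly 0 at the symmetric initialization
```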
And continue with the setup. So the loss function, the L times w will be just the square loss. And we can consider initialization, as we discussed. We'll consider initialization w plus 0 is equal to w minus 0, so that's-- you get the 0 initialization as the functionality of the initial model as 0. And also for simplicity, we choose this to be alpha times L1 vector, and alpha is the thing we're going to change. We're going to see that how does the regularization, implicit regularization effect, depends on the scale of alpha. So when alpha is small, it gives you something. If alpha is big, it gives you something else. And this L1 vector is chosen as before. In the previous lecture, it's chosen somewhat for convenience. You can still do it with other initialization, it's just that the form will be a little bit more complicated. And you can also have this-- we can also have this theta space. So suppose theta corresponds to w plus. So let's say that w is defined to be w plus the power of 2 minus w minus to a power of 2. This is the actual linear function you compute, the linear function of w, right? And we can define-- and we are interested in what kind of linear model we are learning in eventually. So let's say w infinity. Let's define this to be the limit in time infinity. So this is how the gradient flow-- where the gradient flow converges to. And let theta sub alpha infinity, and I think sometimes we just call this-- this is basically w, the model corresponding to w infinity. So this is the coefficient that we care about. We care about when you converge to infinity, what's the corresponding theta you get? So what's the property of this theta? And sometimes we just call it, for simplicity, we just write theta alpha and omit the infinity. But this is where it converges to. OK? And for the sake of the simplicity of the lecture, we assume everything kind of have a limit, so and so forth. So all the regularity conditions are assumed to be met. OK. And also, just to set up some notation, let this x to be the data matrix. And this is the matrix in 2D, and let's say y vec is the label matrix-- label vector. OK. So now, here is the theorem that characterized how does it characterize the implicit regularization. Let me write it down and then interpret. So for any alpha assume that you converge to a feasible solution. So assume that-- to a solution that fits the data. So in a sense that x theta alpha is equal to y vec. So if this is satisfied, it means that we fit the data exactly. And I'm using a purple color for this is because I don't feel that this is necessary have to be assumption. You can prove this, actually. The paper did assume this in the theorem, but I don't think that you have to do it. And actually, I just checked with [INAUDIBLE],, the author, two days ago, and he also thinks that you don't need it. But this is not formally stated in the theorem, so I assume-- but I do strongly believe that you can prove that you can converge to such a feasible solution. You don't necessarily have to assume it. But anyway, let's assume it just so that we are consistent with the paper. By the way, this is the paper by-- I guess I'll probably add the link. Woodworth in 2020 as part of the paper as something like "Rich and Kernel Regime." I don't know. I don't-- I'm forgetting what's the-- "Rich and Kernel Regime in Overparameterized Models. It's a pretty recent paper. It just showed up, like, two years ago. One year ago, actually. So suppose you-- OK. So suppose you have this. 
Then we know theta alpha is not only a feasible solution. It's not only zero order solution, but also it's actually the minimum norm solution or minimum complex solution according to the following complex dimension. That is the minimum complex solution. So you get arg min over theta such that x theta equals to y. So among all the feasible solution, you will try to find the minimum complexity where the complexity's defined by Q alpha. And what is Q alpha? So Q alpha is a function of alpha, so the complex measure does change as you change alpha. And Q alpha is equal to alpha squared times-- the alpha squared doesn't really matter because it's a scalar. I'm going to do an argument times some function of each of the coordinates q of theta i over alpha squared. And what is this little q? This little q is a one dimensional function where this little q is a function that maps r to r. Actually, it maps r to r non-negative, I think. And qz is equal to something that I don't expect you to interpret, but I think we're going to look at special cases which we can interpret. So this arc sinh. I guess this is pronounced as "sunge" in US, or "shine"? "Singe." OK. Anyway, arcSinh, sin hyperbolic, right, so z over 2. I think UK it's called "shine." I don't know why. Yeah. OK. All right. OK. So the first other bit is that even though you didn't minimize this complex machine algorithm, you only ran gradient descent, somehow you find the minimum complexity solution, and the complexity is defined by something like this. And now let's try to interpret-- this is the abstract theorem, but the important thing is that in particular, when alpha goes to infinity-- so if you have a very large transition, then this complex match-up q with theta alpha, this is something like theta i alpha squared? This is something like theta i squared over alpha to the 4. So which means that the Q alpha theta is something like, I guess, 1 over alpha squared times the 2 norm of theta squared. So basically if alpha goes to infinity, then the so-called complex measure, the q alpha, is the L2 norm of theta. And so if alpha goes to 0, then what's the complexity measure? So the complexity measure is what is the regularization effect or complexity measure? This is q theta i over alpha squared. This is roughly something like theta i absolute value over alpha squared times log 1 over alpha squared. I don't expect you to verify this limit because you have to do some kind of Taylor expansion to see this, but this is the thing. So then this means that a q alpha theta is, in some sense, the 1 norm of theta. I guess over alpha squared terms log 1 over alpha squared. But the content doesn't really matter because it doesn't change the order of different theta. So [INAUDIBLE] this the Q theta [INAUDIBLE]?? Oh, sorry. Sorry, sorry, sorry. That's-- yeah. Yeah, that's what I mean. Cool. So basically this is-- or in summary. So in summary, when alpha goes to infinity, then this is minimum L2 norm solution in a theta space of theta and an L4 norm for the W. Right? Because theta is the square of w. And when alpha goes to 0, this is similar to what we discussed the last lecture. So you have the minimum L1 norm of theta, which is the minimum L2 norm of the w. So this regime is what we have seen in the last lecture, and with a very similar model, right? But this characterized the whole regime. And between alpha and 0, you basically have interpolation, some kind of interpolation, between L1 and L2 regularization. So that's why this is more precise than before. 
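For reference, the explicit one-dimensional penalty in the Woodworth et al. (2020) paper is, up to constants, q(z) = 2 - sqrt(4 + z^2) + z * arcsinh(z / 2); treat the exact constants here as an assumption, but the two limits quoted above are easy to check numerically:

```python
import numpy as np

# Hedged sketch of the penalty shape (exact constants may differ from the
# lecture's normalization):
#   small z:  q(z) ~ z**2 / 4          -> L2-type penalty (large alpha)
#   large z:  q(z) ~ |z| * log|z|      -> L1-type penalty (small alpha)

def q(z):
    return 2 - np.sqrt(4 + z**2) + z * np.arcsinh(z / 2)

for z in [1e-3, 1e-2]:
    print(z, q(z) / (z**2 / 4))              # ratio -> 1 as z -> 0

for z in [1e3, 1e6]:
    print(z, q(z) / (z * (np.log(z) - 1)))   # ratio -> 1; leading term |z|*log|z|
```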
So you know how those things interplay. Of course, it's kind of like for any particular alpha, this q function is a little complicated. But it's just the sum-- kind of like sum of some power of theta i in some sense, but the power is kind of between 1 and 2. You can think of like that. But it's not exactly a power, but it's something like that. So alpha [INAUDIBLE] scale [INAUDIBLE]?? Oh, OK. Say again? Alpha is the scale of this? Yes. Yes. Alpha is the scale. This is the only thing that depends on alpha in the algorithm. So we say that the initialization is alpha times L1 vector. And this L1 vector actually can change it as well. I think if you change it to an arbitrary vector for this regime, for this case, you actually don't change anything. So for this limit. So for the other limit, I think you change a little bit because the L2 norm becomes weighted by the initialization, by the particular initialization if it's not L1 vector. If it's L1 vector, the weighting is just all the same for all coordinates. If they are not L1 vector, it's not L1 vector, then you have a different weighting. Yeah. Those can be found in the paper. For simplicity, I didn't show the exact weighting. Right. And so here are some intuitions about this interpolation. And in some sense, just to connect to what we have-- in some sense, this is kind of like-- you can view this as a unification of what we have discussed in the past few lectures. When alpha is small, this is small initialization. So this is basically similar to the previous case, a similar intuition to previous lecture. But there is a small thing. So it's not-- note is that it's not exactly-- so this is only when it goes to a limit. So you have the minimum L2 norm solution. When alpha is not 0, it's not exactly minimum. The regularization effect is not exactly closest, closest solution to initialization. I think the paper shows-- there's some tiny differences, in some sense. So only when alpha goes to 0 you can basically say this is the closest solution to initialization. But generally, this is something we have discussed last time. And when alpha goes to infinity, actually, this is indeed-- this is the NTK regime. And why this is the NTK regime? I'm going to show you. This is kind of similar to what we have discussed before, but let me just do it again. So why this is NTK regime? So let's look at the-- recall that in the NTK regime, we have these so-called two parameters, sigma and the beta. So beta there was the smoothness or the Lipschitzness of the gradient and this is the condition number, right? And we say that-- and recall that we had this discussion that sigma beta over sigma squared mattered. Right. So if this goes to 0, then you are in NTK regime. You can approximate that quadratic. And now let's compute what sigma beta is in this case. So the gradient with respect to the initialization, x-- so w is 0. w0. Let's take this to be alpha L1 vector. w plus and w minus will be both L1 vector. Alpha times L1 vector. And we can compute the gradient initialization x. So there are two set of parameters. So the gradient for the w plus, I think you can-- if you compute, is something like this. And the gradient for the w minus is this to this. Oh, sorry. This is-- I'm sorry. Elementwise product. You can do this easily just by chain rule for every dimension. And this is just equals to-- I guess there's a vector two here, which I oftentimes probably don't look at. And this is 2 alpha-- because these two are just L1 vector, so this is 2 alpha x minus x. 
Sometimes this. Right? And now you can see that sigma and beta both linearly depend on alpha, right? So what is sigma? Sigma is the condition number of-- sigma is the condition number of the gradient matrix, the feature matrix that consists of the gradient for every data point. And the condition number scales linearly in alpha, and beta is the Lipschitzness, which also scales linearly in alpha. It's because alpha is multiplied in front of the gradient. So both of these scale linearly, so that's why beta over sigma squared will converge to 0 as alpha goes to infinity because below you have to go to dependency on alpha. And in the denominator, you have-- in the numerator you have degree-- you have a linear dependency on alpha. So that's why this whole thing goes to 0 as alpha goes to infinity. And also, when alpha goes to infinity, this is the NTK regime for the trivial feature because now the xr field of x is this thing. This feature map is really just that literally-- this, right? This is just literally the trivial feature because the only thing you did is that you flipped the x, which doesn't really make any difference, essentially. So basically you've got the minimum norm solution so you got-- so if you believe in the NTK perspective, you should get the minimum norm solution. So the NTK perspective will also tell you that you get minimum norm solution. Like, according to the features, right? Minimum norm solution-- minimum L2 norm solution using the feature x minus x. And x minus x is, in terms of a feature, is not very different from x itself. So basically you just get essentially the minimum L2 norm solution for the linear model. So that's the same as we discussed-- the same conclusion as we discussed above. Any questions so far? So the question is why NTK gives you the minimum norm solution. I think this is just because we will do the kernel method. So when you use kernel method-- so NTK tells you that you are doing kernel method with certain features, right? So NTK means kernel method. And the feature just turns out to be this trivial feature, and kernel method with the feature just gives you the minimum norm solution. That's what the kernel method does. So because what kernel method does is that you-- that's just what the kernel method do when you don't have enough data. When your feature dimension is bigger than the number of examples in the kernel method, you are learning the minimum norm solution for the features because otherwise you have to define something, right? So the kernel method, everything is L2, so you are minimizing L2 norm. That's implicitly in the kernel method. [INAUDIBLE] Yeah, that doesn't depend on initialization because it's a complex problem. Yeah. And you use a particular algorithm when you do the kernel method. That algorithm gave you the minimum norm solution. OK, cool. OK. Any question? One question? [INAUDIBLE] going to infinity [INAUDIBLE] respective [INAUDIBLE],, like, it could lead you off of the [INAUDIBLE]?? But I guess that goes a little bit back [INAUDIBLE].. We would say that there is [INAUDIBLE].. Yep. So I guess, in some sense, repeating the question and also answering it. So when alpha goes to infinity-- so yes. So your problem will be very ill posed. In some sense, the optimization landscape will be very bad just because your function will be not very smooth. And this part is hidden here because you are using gradient flow so you can-- so you are using infinitesimal small learning rate. 
So that's why it's hidden under the-- it's swept under the rug. And then practically, you also don't necessarily want to use the large learning rate. For one reason is the optimization, and the other reason is that maybe the L2 norm solution is also not good, right? So you also want-- you have an L1 regularization, at least for this particular setting. So that's another reason why you don't want to use very large learning rate. sorry, very large initialization. And another thing is that in practice, people sometimes do use-- this is about the empirical setup. Sometimes you do use large initialization, but people don't use infinitesimal small learning rate. So then you still cannot get into the NTK regime. But that's a good thing because you don't want to go to the NTK regime. So that's why, at the beginning, some people have confusions because at the very first paper by this NTK paper, I think they are claiming the initialization scheme they are studying is actually what people do in practice, and that's kind of true. It's very close to the Kaiming He initialization or the Xavier initialization in terms of the scale. But because they are using very, very small learning rate, so it's actually not really-- the theoretical setup requires very, very small learning rate. But empirically, you don't use those small learning rate, and also the theoretical setup doesn't have the stochasticity. So all of this together, it makes the theoretical setup different from the empirical setting. And that's a good thing because the theoretical setup says that you don't really do anything super different from kernels. Yeah. OK, so now let's discuss the proof of this theorem. So I don't have-- there is a little bit-- this proof is kind of interesting in the sense that the proof is similar to actually the linear regression model similar to the linear regression proof, but not similar to what we discussed last time. Not similar to last lecture. You would probably guess this is similar to last lecture because the last lecture has almost the same model as this one and is only doing a subcase of this when alpha goes to 0. But it turns out that the proof is very similar to the linear regression one, and you have these two steps. The first step is that you find the invariance maintained by the algorithm, by the optimizer. And recall that this invariance was that theta is in the span of xi for the linear regression. This was probably two or three lectures ago when we analyzed the implicit regularization effect of initialization. for linear regression we say that because you need a 0 and you use gradient descent, you always need a span of the data. And here we're going to find a different environment, which is more complicated and even harder to express, but we're going to find the invariance. And then you use the invariance. So step two, in some sense, characterized the-- I guess characterize is a very vague term, but characterized the solution using the invariance. And sometimes you use the invariance as additional information, right, to pin down which solution you converge to. In some sense, the difficulty is that if you-- without any additional thing, you just know that you convert to a zero order solution. You don't know which one it converts to, and the invariance tells you that which one it converts to and the invariance depends on alpha. And there's nothing about population versus empirical. Everything is empirical here. I didn't even define where the data comes from. 
I only tell you that this is the minimum norm solution such that the empirical error is zero. I don't have to care about population at all. So yeah. How does this kind of technique compare with the techniques we discussed last time where you use the fact that the data, the empirical loss, concentrates around the population loss in certain regions and you somehow do some kind of control of the dynamics? I don't know how. It's kind of hard to compare. These are two different approach. There is some good thing about this kind of approach because this doesn't require population. That sounds a good thing. But the bad things about this approach seems to be that it's very hard to find invariance for harder models, for more complex models. You will see the invariance is a little bit kind of magical somehow. But that's that. For more complex models, even the previous approach, the approach we discussed last time, wouldn't work either. So it's hard to say. Anyway, so let's proceed to see how does the proof work. So we need a little bit notation to somehow simplify our x position. So let's say let this x tilde to be the extended data matrix. So you extend the matrix to-- this is to deal with the-- you concatenate x and minus x so that you get a n by 2d matrix. And sometimes this is just to try to write everything in matrix notation so that you don't have to have the minus thing. So we would take wt to be the concatenation of w plus t and w minus t. This is of dimension 2d. And let's take a wt o dot 2. I guess we say that this is the entry wise power of wt. And this means that with this notation, x tilde times wt o dot 2, this is x minus x times w plus t dot 2, w minus t dot 2. And you can verify this is really the same as the-- this is just the way the model computes, right? This is the model output in data points. So I just want to use this so that you have the matrix notation, and now you can compute with a derivative of t, with w dot t. This is the gradient because we are doing gradient flow, so this is equal to the gradient of L at time t. And what's the gradient of L at time t? This is gradient of this loss function, and now can be written as-- I guess the loss function of wt now can be written as x tilde wt o dot 2 minus y vec 2 norm squared. That's because I vectorize everything. So I copy and paste here and then I take the gradient. Taking the gradient, you can use the chain rule. So if you believe v has got a a direction with this is x tilde transpose rt entry wise times wt. Where rt is equal to x tilde times wt o dot 2 minus y vec is the residual vector. So if you are familiar with linear regression, you will realize that this is kind of like-- this is what you got from the-- so this is if it was linear regression. If it is a linear regression, then this term will be our gradient. And now it's not linear regression because you have the quadratic parameterized model. That's why you also have to do chain rule to look at the derivative of the quadratic of wt. That's why this-- this is because it's quadratic. Anyway, so this is one way to think about why this is true. But the formal verification would be just that you look at-- you do chain rules [INAUDIBLE]. [INAUDIBLE] Oh, sorry, sorry. There should be a-- I think there's a 2 here. Wait. Let me see. I think I wanted to make it 2. So that means I think my loss function should have a 1/2. So where's my loss function? The loss function has a 1/2. I guess I also defined a loss function somewhere else before. Here. Right? That sounds good. 
My brain just automatically removed all the constants so it's very hard for me to deal with this. OK. Cool. All right. OK. So now, how do we-- we said that we want to have some invariance for this in some sense, so we want to somehow solve this differential equation. But you cannot really solve it exactly. I'm not an expert on solving differential equations, but I think this is beyond the scope of-- this is something you cannot really have a closed form solution. But interestingly, you can do-- you can get something without solving it, exactly. So we claim that-- so actually, this is a-- in a paper, they claim it's easy to verify this. You can claim that wt satisfies the following. This by this times exponential minus 2x 2 transposed t rs ds. OK. Why this is the case? So first of all, this is not a solution. This is not like-- depending on what you mean by solution. This is not necessarily my definition of closed form solution because rs still is a function of w. But it's going to be something very useful for us. And why this is true? It's actually relatively simple, but so here is the reason. So this is because what you can do-- suppose you have a differential equation something, like u dot t is equal to vt times ut. So I'm trying to abstractify it a little bit so that I can give a clean analysis. So you can see that this is a good abstraction of what we had before because before on the left hand side, you have the derivative w and on the right hand side, you have something times w itself. So this will be u. This will be v, this will be u, and this u dot t. That's my abstraction. And then suppose you have such a thing, then you can always do the following. You can say that u dot t over ut is equal to vt. Right? That's always true. And then the left hand side, this is a magical thing in many cases. This is log of-- of the log of ut. It's by chain rule. I think probably you have seen this in other contexts like policy gradients and other cases, depending on whether you know any of those. But anyway. And then you can integrate both sides. So you can say, if you integrate, you got a log of ut minus log of u0 is equal to the integration of the right hand side like this. And now you remove the log and you get exponentials. Ut over u0 is exponential times integration of this. And now if you map u to w and v to this-- I guess u to a coordinate of w and v to be a coordinate of this x transpose rti, then you can apply this and you get the desired result. And by the way, I think I need to make a remark here that this is entry wise application of exponential. So this is a vector. This is another matrix. So a matrix from a vector. This becomes a vector and you actually take entry wise exponential and you take the entry wise product with w0. OK. Any questions so far? So now, let's see why this is useful. It's a little bit magical, in my opinion. I don't have a-- conceptually I think this is fun, but I think the proof on a proof level is-- somehow there is a little kind of-- either you can call it coincidence or magic. So there turns out that this is all you need to verify this is a good solution. This is the minimizer of the solution. So first of all, we turn this into something about theta. So now we have a characterization for w, and let's turn it into something about theta. So recall that-- and also we simplify this a little bit. Recall that w plus 0 is alpha L1 vector. w minus 0 is alpha times L1 vector. That means that w0 is also L1-- alpha times L1 vector. 
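Before moving on, here is a quick numerical sanity check of the invariance just derived (toy dimensions and small Euler steps standing in for gradient flow; none of the specific sizes come from the lecture):

```python
import numpy as np

# Check that along (approximate) gradient flow on L(w) = 0.5*||Xtil @ w**2 - y||^2,
#   w(t) = w(0) * exp(-2 * Xtil.T @ integral_0^t r(s) ds)   (entrywise),
# where r(s) = Xtil @ w(s)**2 - y and Xtil = [X, -X].

rng = np.random.default_rng(0)
n, d, alpha, eta, steps = 3, 5, 0.5, 1e-4, 20000

X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
Xtil = np.hstack([X, -X])                 # n x 2d extended data matrix
w0 = alpha * np.ones(2 * d)               # symmetric initialization
w, I = w0.copy(), np.zeros(n)             # I accumulates the integral of r(s)

for _ in range(steps):
    r = Xtil @ w**2 - y                   # residual
    w = w - eta * 2 * (Xtil.T @ r) * w    # Euler step on dw/dt = -2 (Xtil^T r) * w
    I = I + eta * r

# Should be small (the discrepancy is first order in eta).
print(np.max(np.abs(w - w0 * np.exp(-2 * Xtil.T @ I))))
```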
This is in 2d dimension because w is a concatenation of w plus and w minus. So that's why this thing, w0, is basically not important. You just have alpha. So in the theta t to the theta at times t is w plus 1 power of 2 minus w minus t to power 2. And this will be-- OK, let's use this formula. Let's use this formula. Let's call this one using one. So the w0 doesn't matter. The only thing it contributes to is alpha, so we get alpha squared. I guess I'm not sure. Let's see. So maybe I'll just do this small characterization here. So x tilde transpose is x transpose minus x transpose. So that's why exponential minus 2x 2 dot transpose some vector v-- v will be this integral. So this will be a vector like-- let's say suppose you take this, the power of 2. Then this will be exponential minus 2x transpose v because this part is from the first part and the exponential minus 2-- times 2x transpose t and then to the power 2. OK. And then this-- and this power 2 will become 4 because this exponential. So we get exponential 4 minus 4x transpose v exponential 4x transpose v. So this small derivation is trying to deal with this, so you know that this thing to the power 2 will be something like this. And then the first power corresponds to-- the first power corresponds to w plus and second power corresponds w minus, so that's why you get something like here. w plus squared will be exponential minus 4x transpose this and minus exponential 4x transpose this. I guess what I'm doing here is just trying to make you believe that this derivation is true, but it should be trivial derivation. There is nothing difficult. OK, so this is the characterization of theta. And you can see that this is this exponential of this minus x minus of the same thing, you can write this more succinctly as the sinh, the sinh for x transpose. This is just a better definition of the sinh. I think sinh uses something like exponential t plus exponential minus t over 2. Something like that. OK. So basically, we have a calculation of theta. Right? And then you know that the theta alpha is equal to theta alpha at theta at infinity. So this is equal to 2 alpha squared minus 4x transpose. 0 to infinity rs ds. OK. So this is something we know that theta alpha, the final point, satisfied. Maybe let's call this equation two. And we also know that x theta alpha is equal to y because we assume, or we can prove-- I guess we discussed this. I think we can prove that you converge to a feasible solution. And I'm claiming that one and two-- so two and three, these two things, turned out to be the optimality condition of the program. Let's call it one. So one is this arg min theta. OK, I guess it's far, far away. So here. Let's call this program program I. So you want to say that theta alpha is the minimizer of this problem one. And it turns out that theta alpha satisfies these two equations, two and three. And these two equations are the optimality condition of that optimization program, program one. And that optimization problem only has one solution because it's convex so that's why this theta alpha is the solution. That's the plan for the next. Right. Sounds good. So by optimizing condition, I really mean KKT condition. So I'm not sure whether all of you are familiar with the KKT condition, so I guess there are two ways to think about this. This is just a small thing about-- background about KKT condition. So these are optimality conditions for constraint optimization problem. 
To be honest, I never really remember exactly what the KKT condition in many cases. So what I am going to show you is one way to think about it, which is probably not exactly the same as what you can read from the book, but it's going to be very similar. So suppose you have these kind of things, optimization programs like this, and q theta is a convex thing. And so first of all, the KKT condition is the following. So it says that q theta is to be equal to x transpose v for some v in dimension. I think this dimension is n. And then x theta needs to equals to y. So this is the KKT condition for this kind of program. And one thing you can do is you can just look up a book and just invoke a theorem from a book which says that this is KKT-- the optimality condition. The way I think about it is the following, if you're interested in it. So the way I remember this is that I remember this, or I derive this every time if I need it, as follows. So I think the insight is that optimality at least means that there is no first order update. There is no first order local improvement. So if you perturb your solution, you shouldn't have a first order improvement locally. If your perturb solution locally a little bit by a infinitesimally small amount, you shouldn't get a bound to your first order improvement. But you also have to satisfy the constraint, so you also-- no further other local improvements satisfying the constraint. Satisfy the constraint also up to first order because you may not be able to. So what does this mean in this case? It means that suppose you consider the perturbation the alpha theta. This is a perturbation. So how do you satisfy the constraint? To satisfy the constraint, you have to say that the perturbation needs to be orthogonal to the lowest span of x because if it's not in the lowest span of x, you perturb it, you may change the x theta and then you change the-- you don't satisfy the constraint anymore. So this is the way to satisfy constraint so that x 0 theta is 0. That's how you make the constraint work. And now let's look at theta plus the other theta, the local perturbation. And so this still satisfies the constraint, and let's see what's the value. So let's see what's the value of q. So q theta plus the other theta. This is equals to, up to the first order, q theta plus the other theta. [AUDIO OUT] We cannot hear you. Maybe let's try this. Can you hear me now? Thanks for letting me know. Yes, sir. Thanks. Is the audio good or not? Is it OK? Yeah, it's OK. OK. So I'm using my laptop's microphone, so maybe let me turn it in some way so that it works better. Yeah. Thanks for letting me know. So maybe I'll rewind a little bit back. I don't know for how long time you have lost me. So I guess maybe I'll just briefly go through the steps that we have discussed. So I guess I was saying that if you have the perturbation, you always satisfy the constraint because the perturbation is the lowest span of x. It's orthogonal to the lowest span of x that you always satisfy the constraint. And we want to understand-- we want to figure out under what condition this perturbation will never improve your function-- will make the function bigger. Because if it makes the function bigger, it means that its point is not optimal. So that's why you look at the Taylor expansion of this q and you found out the first order changes is this term and you want this term to be always non-negative-- nonpositive because if it's positive, then it violates the optimality assumption. 
So it's a necessary condition is that this term is always nonpositive, but this term is very easy to make sign flip because we can use the flip the other theta by whatever sign you want. So that basically means that for every theta in the orthogonal space of lowest span of x, this term has to be just literally 0 because if it's not 0, you can flip the other theta to make it positive. So that's why we are saying that here, for every delta theta that is orthogonal to the lowest span of x, this term is 0. And that really just means that this vector-- so because every delta theta integrals in this subspace is orthogonal to this vector, that means that this vector is in a complementary subspace of the subspace 0 theta. So that's why this vector q theta needs to be in the row span of x so that for every vector delta theta orthogonal to the lowest span, their inner product is zero. So that's why this, it can be written as x transpose times mu because x transpose will be the lowest span. x transpose mu-- x transpose-- I think that's called v. x transpose v is the representation of a vector in the lowest span of x. So that's why this is the-- that's how we develop the KKT condition. The KKT condition was that you have to be in the lowest span. The gradient of q as theta has to be the lowest span of x and also has to be a feasible solution. OK. Cool. So this is some digression about KKT conditions. If you're not familiar with it, then the only important thing is that this is the characterization of the optimal solution i theta. And we can-- now it's just pattern matching, right? So this corresponds to this, obviously. And this one, really, this corresponds to equation two because equation two-- OK. That's what I'm-- OK, It's not trivial yet, but let's see that. So KKT tells you that the gradient of q theta needs to be something like x transpose v and the invariance or the differential equation tells us that-- let me just copy paste it. Let me just rewrite it. So theta alpha is equal to 2 alpha squared sinh. So first of all, let's rewrite this as-- so simplify this and rewrite it as v times v prime, let's say. Because what v is doesn't matter. And then you can-- I guess, let's also work on the Q side of things. The other Q side you can compute this. And sometimes when you derive the Q, we are verifying it, but actually what you have to do is to reverse engineer to do this in other direction. But if you just verify that, suppose you are given a Q for this one to prove it, you can find the derivative of Q will be just arcsinh 1 over 2 alpha squared times theta. This is a derivative Q. It makes sense because a Q is a sum of some function of theta i. And the derivative of Q is a-- each answer is the sum function of theta i. And so then you can see that this, if you plug in the theta alpha here to this thing. So arg sinh r squared theta alpha. This is equal to just the 4x transpose v prime. So basically that's why gradient q theta alpha is equal to minus 4x transpose v prime. And this satisfies the KKT condition. The form doesn't matter because v can be any vector, so that's why q alpha satisfies the KKT condition. Let me grab that. So it's the global mean. It's the global mean. I guess it's the last step. Satisfying the KKT condition means global mean. This requires the convexity of this program. The constraint is linear. It's convex. The objective you can verify still. It's also complex. It's something between L1 norm and L2. Both of them are convex. Any questions? 
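A compact recap of the pattern matching, with the constants as written on the board (treat the exact factors of 2 and 4 as approximate):

```latex
\theta_\alpha = 2\alpha^{2}\,\sinh\!\bigl(-4X^{\top}v'\bigr),
\qquad v' := \int_{0}^{\infty} r(s)\,ds,
\qquad
\bigl[\nabla Q_\alpha(\theta)\bigr]_i
   = q'\!\Bigl(\tfrac{\theta_i}{\alpha^{2}}\Bigr)
   = \operatorname{arcsinh}\!\Bigl(\tfrac{\theta_i}{2\alpha^{2}}\Bigr),
\\[4pt]
\text{so}\quad
\nabla Q_\alpha(\theta_\alpha) = -4X^{\top}v' \in \operatorname{rowspan}(X),
\qquad X\theta_\alpha = y .
```

Together these are exactly the KKT conditions of the convex program min Q_alpha(theta) subject to X theta = y, so theta_alpha is its global minimizer.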
So if there's no questions, I'm going to move on to the next thing, which is about classification problem. Yeah? I see many of you are starting like this. Proof, still. Like, this proof-- I don't know. I don't have-- the plan sounds very intuitive. So how do you prove something is the minimizer of a optimization proof? You have to verify it satisfies the KKT condition, I guess. That's probably more or less the only way to do it if you want to show something as the optimizer of some optimization program. But it's kind of magical why it just happens to satisfy the KKT condition. So of course, there's something that we can choose. We can choose the q to make it satisfy the KKT condition. That's something you can choose. But the magical thing is that other things all match up, like the form, the x transpose times something. All of those things are all matched up, and also you can somewhat-- in some sense, you can always work with each coordinate independently in this special case. See, that's also something that's maybe a little bit special to this special model that we consider. All right. OK, so now let's move on classification problem, and we are looking at separable data as we always do for classification problem. And here we are going to only discuss one result, which says that you could do gradient descent. It converges to a max margin solution. And this is actually-- doesn't require any initialization. It works for any initialization. So the only thing you need is gradient descent and some loss functions, which [INAUDIBLE]. And no regularization, you just compute gradient descent on the loss function. You run for a long time. You're going to converge to the max margin solution. So I'm going to have to, again, start with a setup. So now we have a data set xi yi i from 1 to n and xi ERd. And yi is a binary label, plus 1 minus 1. Question about the-- Sure. [INAUDIBLE] OK. So instead of assuming the bottom to be w squared-- OK. [INAUDIBLE] Mm-hmm. OK. Then to address the proof, there's not guide proof because it breaks down that-- like, when you compute the delta [INAUDIBLE]?? OK. Yeah. So the question is about the previous thing, and the question is about if you don't use w squared, you use w to the k. And this is a very good question, actually. This is exactly what the paper studied in the more technical part. And the short answer is that everything can still go through, but the eventual q would be different. So the form of your q would be something not L1, L2. I think it's something like-- it depends on the power. So I think if the power is P, I don't exactly remember, but I think it's something like 1 over P norm when alpha is close to zero. When alpha is going to infinity, I think everything is still the same. The NTK regime is not sensitive to this. And then technically, why'd everything go through? I think the reason why everything goes through is that, roughly speaking, you are only playing with this single dimensional function in some sense, right? So it won't be sinh anymore, probably. There will be some constant, some other function. But still this x transpose something is still there, I think. It's not changed. And so eventually you just have to add in your different Q to make everything work. And the Q is still-- only depends on the coordinates. You do something on each corner, you take the sum, but Q still had this form. So that's why it's still somewhat doable. OK. Cool. So going back to the classification problem. 
So this is our setup, and here we are only going to do the linear model even though some of this theory still works for nonlinear model with roughly similar technique and similar conclusion. And here we're going to have a loss function. So the loss function will be L hat w is the-- let's say we do the cross entropy loss, the logistic loss. I mean, cross entropy loss. Times hw xi. Where this loss is this logistic loss, which is log of 1 plus exponential minus 1. OK. Cool. And the first thing is that to get some intuition. So first of all, we have multiple global mean if separable data. So this is a premises for any implicit regularization buffer. If you don't have one global mean and you can converge the global mean, there's no implicit regularization buffer. But why is there are multiple global mean? This is just because you can always have an infinite number of separators, pretty much. Unless in a very extreme case you just happen to get stuck at it exactly. So for example, I think it's probably easier to draw something. So suppose you have some data points like this and you have so many different possible separators. As long as you have one, you perturb a little bit, it's still separate. So there's this, the infinite many w such that w transpose xi yi is bigger than 0 for every arc. So you have so many separators, and for every w for-- maybe let's say infinite number of w bar such that where w bar is unit. w bar is unit vector. This statement doesn't really depend on the log, so you can always scale it. So for every w bar-- for any w bar such that this, you can scale it. So if you look at L hat alpha w bar will go to 0 as alpha goes to infinity. So any scaling of this unit separator, if you scale it extremely, then you are going to get a loss close to 0. So basically you have so many directions in it, so you can go to infinity in different directions and still converge to a zero loss. So basically, in some sense, if you are a little sloppy about all of this infinity times w bar are global minimum of this loss function just because the loss function goes to zero at infinity. So the loss function-- maybe I should also draw this. The loss function looks like this. This is the Lt. When t goes to infinity, you get close to 0 because the zero loss. And what's inside-- what's t? t is y times w transpose xi, and this thing will go to infinity as you scale the norm of the constraint. So you have so many directions that you can find. There are so many global minimums. The question is which direction you'll find. If you don't use any-- if you just invoke a theorem about opposition, you know that it will find a solution with error close to zero-- with loss close to zero, but you don't know which direction it is. You still have a bunch of flexibilities there. Many directions can-- if you go to infinity in many directions, you can get the loss back to 0. So that's the question that we're actually going to address. And we're going to say that this actually converge to max margin solution. So let's define-- so the answer is max margin solution. Direction. So I guess let's define, maybe first, the marginalized-- the margin and normalized margin. So I guess we have defined a margin. And this cross many cases where-- in many cases, the margin is the minimum, this. And we also assume-- we always assume linearly separable. So this definition is only defined for cases where it's linearly separable. 
And normalized margin is defined to be-- we normalize this by the norm of w because otherwise you can make the norm arbitrarily big-- arbitrarily small. OK? So max margin solution is defined to be for all w. Which one give you the maximum normalized margin? And let w star be the maximizer. This is the direction of the max margin solution and with unit norm. Because if you only look at this objective, it doesn't depend on the scale because the scale has already normalized all. So we define w start to maximizer of every single nor, OK?. So basically we're going to prove that if you do gradient descent, you're going to go to infinity. But each will go to infinity, but only along the direction of w star. That's the theorem. So gradient flow. I guess here we're talking about gradient flow just because it's convenient, as we discussed. So converges to the direction of max margin solutions in the sense that I think we don't really exactly see the convergence in direction, we only see the convergence in the sines of the value of the margins. I think you really want to do the exact convergence and direction, of course it will be a little more work. So what we say is that the margin of your iterate will converge to the maximum possible margin, gamma bar. Right. So as t goes to infinity. And wt is the iterate at time t. So in the next five minutes, I'm going to discuss a little bit about intuition, why this is working. This intuition against why this is working and how do we improve it, and some kind of a mixture of both of these two. And then in the next lecture, I guess I would prove the thing more rigorously. So why this is going-- why this is working? So the intuition is that-- so I guess I have a few steps here. So step one, this loss function, L hat wt, is going to 0 by standard optimization arguments, which is not covered by this course. But I think you can believe it if your optimization is working-- if your optimization is working, then your loss should go to 0. And second. So this is observation one. And observation two, I guess the loss, this loss function, which we defined to be the logistic loss, right? Some like this. This loss function is actually close to exponential for large C. This is just because you do Taylor expansion. Log of 1 plus x is approximately x. That's why you can get rid of the log at 1. And this is actually an interesting thing. So you call it logistic loss, but actually it's closer to exponential loss. So logistic loss is close to exponential loss. So most of the proof, actually, we are only going to do the logistic loss. I think the proof, actually, I'm just-- sorry. I'm going to do the exponential loss. So in the proof, I'm going to just assume it's exponential. Even though you can still-- the small differences can be dealt with relatively easily. But the third observation is that because of one, the wt has to go-- the norm has to go to infinity. And the reason is that if you just don't go to infinity, you never make the loss close to zero now, right? This is just because if wt is bounded, however, let's say, it's bounded by B. Suppose it's always bounded. Then you can always bounds these y times w transpose xi by something like AB times the norm of xi. You have some bound. So this is bounded. And then your loss, L hat wt, this is bounded by exponential minus d times xi. Something like this. And this is bounded below by zero. And this contradicts with what? Right. So if your norm is always bounded, then your loss is going to be low by some number. 
So in the next five minutes, I'm going to discuss a little bit of the intuition for why this is true, some mixture of why it works and how we prove it, and then in the next lecture I'll prove it more rigorously. So why is this working? I have a few steps here. Step one: the loss L hat of wt goes to 0 by standard optimization arguments, which are not covered in this course, but you can believe that if your optimization is working, then your loss should go to 0. So this is observation one. And observation two: this loss function, which we defined to be the logistic loss, log of 1 plus exp of minus t, is actually close to the exponential loss, exp of minus t, for large t. This is just a Taylor expansion: log of 1 plus x is approximately x for small x, and that's why you can drop the log and the 1. And this is actually an interesting point: we call it the logistic loss, but it's close to the exponential loss. So in the proof I'm actually just going to assume the loss is exponential, even though the small differences from the logistic loss can be dealt with relatively easily. The third observation is that, because of observation one, the norm of wt has to go to infinity. The reason is that if the norm doesn't go to infinity, you can never make the loss close to zero. This is just because, if wt is bounded, let's say its norm is bounded by B, then you can bound yi times wt transpose xi by B times the norm of xi. So these margin terms are bounded, and then your loss L hat of wt is bounded below by something like exp of minus B times the norm of xi, which is a fixed positive number. The number is very close to zero, but the loss is still bounded away from zero by some number, which contradicts the convergence to zero. So now comes the most important part. With all of this preparation, we know the norm goes to infinity, so let's only look at the later regime where the norm of wt is very big. Suppose the norm of w is really big; call it q. Then let's try to simplify the loss function and see what it looks like. Maybe let's drop the t just for simplicity and look at some w whose norm is very big. So L hat of w is the sum of this logistic loss, or exponential loss, we're not distinguishing them for now, so let's say this is roughly equal to the sum over i of exp of minus yi times w transpose xi. And because the loss is very close to zero, it's actually more informative to look at it in log space. So the log of L hat of w is roughly equal to the log of the sum over i of exp of minus yi times w transpose xi. So this is a log-sum-exp; I'm not sure whether this rings a bell for some of you, but this is basically a softmax. And I'm going to claim that this log-sum-exp is close to the max over i of the exponents. Why is this the case? Let's do some abstract derivation; I guess I'm running late. Oh, sorry, I should have one more step first. This is a log-sum-exp, but I also want to use the fact that w has a large norm, so let's pull the norm q out in front: it equals the log of the sum over i of exp of minus q times yi times w bar transpose xi, where w bar is w divided by its norm. And now I claim this is close to the max over i of minus q times yi times w bar transpose xi. Why is this the case? For those who are familiar with it, log-sum-exp is kind of like a soft max. So look at the log of the sum over i of exp of q times ui; I'm abstracting it a little bit: q is very large and the ui are fixed. I claim this is roughly q times the max over i of ui, plus something that doesn't grow with q, something negligible compared to q as q goes to infinity. So when q is very big, this is really doing the max. This is like the temperature in a softmax: if you scale the scores up a lot, the soft max becomes a hard max. And if you want to prove it: the sum of exp of q ui has an easy upper bound, just replace each term by the biggest one, so it's at most n times exp of q times the max of ui, and the log of that is log n plus q times the max of ui. And the log n is small compared to q, because q goes to infinity while log n stays fixed. On the other hand, for a lower bound, you just keep the one term with the max and drop all the other terms: there's no sum anymore, the log cancels the exponential, and you get q times the max of ui. So basically this log-sum-exp is close to the max up to a factor of log n, and that factor is negligible as q goes to infinity, and that justifies this step.
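A quick numerical check of that log-sum-exp claim, with numbers I made up for illustration (not from the lecture): the gap between log of sum_i exp(q * u_i) and q * max_i u_i stays bounded by log n, so it washes out once you divide by q.

import numpy as np

u = np.array([-0.3, 0.1, 0.4, 0.35])  # fixed scores, n = 4
for q in [1.0, 10.0, 100.0, 1000.0]:
    m = (q * u).max()
    lse = m + np.log(np.exp(q * u - m).sum())  # numerically stable log-sum-exp
    print(q, lse, m, lse - m)
# The gap lse - m never exceeds log(4), about 1.39, so (1/q) * lse converges to max(u).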
So once you have this step, what's going on here? If you're minimizing the loss, you're minimizing the log of the loss as well. So minimizing the loss is like minimizing this quantity, the max over i of minus q times yi times w bar transpose xi, and minimizing that is the same as maximizing the min over i of q times yi times w bar transpose xi. It's not like you're switching a min and a max; it's really just the sign. The max over i of these negative quantities is equal to minus the min over i of q yi w bar transpose xi, and minimizing a negative quantity is the same as maximizing the quantity itself. OK? All right. So basically you are maximizing the margin; that's what this is. So under this approximation, as q goes to infinity, you end up maximizing the margin. And next time we are going to make this more formal, with essentially the same intuition, but the proof will be cleaner; it won't be exactly like this, and you won't have to deal with the approximation errors. It's going to be a very clean proof. OK. I think that's all for today. Thanks.
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_Lifelong_Learning_I_2022_I_Lecture_15.txt
So today, we're going to be talking about lifelong learning. And lifelong learning is an area where, actually, I feel like the term means a lot of different things. And so we're going to talk a little bit about how the problem statement isn't super well defined in some ways. We'll also talk about some basic approaches to lifelong learning, whether we can do better than those basic approaches, and revisit the problem statement from the perspective of meta-learning and how we might leverage some ideas from meta-learning in the context of lifelong learning. Cool. So in the course, we've talked about a number of different problem statements, but two that I'll focus on here are multitask learning and meta-learning. In multitask learning, our goal is to solve a set of tasks, and the train tasks and the test tasks are ultimately the same set of tasks; whereas in meta-learning, our goal is to quickly learn a new task after having experience on a set of training tasks. And I think we've seen lots of examples in multitask learning and meta-learning of real-world scenarios that are defined by these problem statements. But there are also a number of scenarios in the real world where the tasks aren't given in a batch, and instead we see the tasks one by one. And in this case, it's a little bit different of a problem setting, in the sense that you aren't just given a large batch of tasks, or a large batch of data from different tasks, right off the bat. Instead, you may want to be able to leverage previous experience in a more sequential fashion. So some examples of this: you might have students that are learning concepts in school. They'll learn those concepts sequentially over time rather than being given all of them in a batch. Also, in that sort of scenario, there might be a form of curriculum, and some of the tasks might get harder and harder, or they might build upon some of the previous tasks. An image classification system that's learning from a stream of images from different users could also fall into this setting, where different users will join the platform and start uploading images at different times. So you could think of receiving data from a sequence of users rather than receiving data from a set of users all up front. Another example is maybe you have a robot that's trying to learn an increasingly large set of skills in different environments. It might continuously run into new things that it needs to learn, or new environments that it needs to be able to learn in, and we don't want to be restricted to the setting where we only have a batch of settings up front. You can also have a virtual assistant that's trying to learn how to help different users with different tasks at different points in time. And so at each point in time, a new user will arrive; they'll have a new task that you might need to perform, and you need to be able to handle that given whatever experience you've had before. And then lastly, you could have maybe a doctor's assistant that's aiding in medical decision-making, and, again, you kind of see cases in a sequential manner rather than necessarily all up front. Now, to talk a little bit about some terminology, I'll refer to what we just discussed as a form of sequential learning.
And there are really a lot of terms in the literature to describe different forms of this kind of problem, including online learning, lifelong learning, continual learning, incremental learning, and the streaming data setting. And unfortunately, there aren't really crisp definitions for each one of these, and oftentimes many of these terms are used somewhat interchangeably as well. So it means that, basically, the waters are a little bit murky; there isn't really a lot of consensus behind what each of these terms necessarily means. The other thing that I'll mention is that this setting is distinct from having sequential data or sequential decision-making. In particular, sequential learning means that we're going to be learning as the data comes in: the data will come in sequence, and we'll be learning from that data in sequence. Whereas sequential data describes scenarios where your data points have a sequential structure, rather than coming in over time. And sequential decision-making means that your system is making multiple decisions in sequence, whereas sequential learning, again, is one where you're learning as data comes in over time. Cool. So because the problem statement of lifelong learning and the definition of lifelong learning isn't really agreed upon, I'd actually like to do something a little bit different in this lecture and give you a little bit of an exercise. In particular, I want you to pick an example setting. It could be a setting that you came up with, or it could be one of the examples that we saw on the previous slide, and discuss the problem statement for that example setting in a small group. And in particular, there are three things that I'd like you to discuss. The first is, how would you set up an experiment to develop and test an algorithm for that setting? The second is, what are the desired or required properties of an algorithm for that particular problem? And the third is, how would you actually go about evaluating such a system? So if it's helpful, I'll give you the example settings for reference here. And then I'd encourage you to split up into small groups, maybe small groups of no more than five people, and then I'll give you maybe about five minutes to discuss this in a group. And then afterwards, we will have one person from each group describe what they came up with in terms of A, B, and C, and the problem setting that they were discussing. And yeah, we'll start to discuss the problem statement. Does that sound good? Any questions? Cool. I can also help you split up into groups if that's helpful. Cool, OK. Go ahead. And it's going to be one problem setting per group. [CHATTER] Are folks good? Or do you need more time? Raise your hand if you need more time. OK, I think you've been outvoted. Cool. OK, let's come back. Yeah, for each person-- for each group-- I think we had about seven different groups. If one person could briefly run through what problem setting you were discussing, the desirable properties, and the evaluation. So do you guys want to start? Yeah, go ahead. So are [INAUDIBLE] limitations [INAUDIBLE] So is that [INAUDIBLE] have a distribution of user [INAUDIBLE] or however distribution [INAUDIBLE] are introduced. And the desirable properties of the [INAUDIBLE] properties. Firstly, the model should achieve good performance or infinitely good performance as the number of images [INAUDIBLE] to infinity.
And secondly, the loss of the model should be, on average, approximately monotonically decreasing as you put in more images to the model. And to evaluate such a system, we just-- Sorry. To clarify, so you're saying good performance as the number of images grows. Is this as the number of users grows or as the number of images grows? [INAUDIBLE] That's the number of introduced rules. If it is for that user, it grows. Got it. And then what was the second thing? You said the loss. The second thing is our average, because we're [INAUDIBLE] expectation over different users, our average, the loss should be approximately monotonically decreasing as you increase the number of images. All right. Is that also per-- as you see more and more users? As you see more images for each user. OK. So is that very similar to good performance? I guess losses down, is lower than, doesn't that mean you have good performance? The first point is actually that the performance can be infinitely grouped as you increase the number of [INAUDIBLE] to infinity. I see. OK. So you'd want to be monotonically increasing in an asymptote to be kind of perfect. Got it. Cool. And then evaluation? [INAUDIBLE] sources from-- you could do [INAUDIBLE] you calculate the expectation, our expectation, of how many images will it take for performance to [INAUDIBLE] a specific threshold, like [INAUDIBLE].. So this is basically like learning efficiency, basically like how many images do you need for a user in order to get a good accuracy for that user? In order to get a specific performance. And the lower [INAUDIBLE],, the higher the performance. Cool. Maybe we can move on to the next group. Yeah, go ahead. OK. So we were doing, E, the doctor's assistant aiding a medical decision-making. Should we-- do we go through all of them, like A, B, and C? You can do only B and C, or if you want to kind of briefly mention A, you could do that. Briefly mention A. We would just-- we would have patient tasks come in, like various patient tasks, like recording breath and how fast you're breathing or whatever. And then that would be the experiment. As more patients come in, we would have more tasks to evaluate. The desirable report properties, we would be increasing and improving the, I would say, similar to improving the performance, as the number of unique task grows and the number of examples for each task also increases. So that would be a desired, or probably required, property of the algorithm. As for evaluating the system, we would evaluate it-- we would have the same recurring, a test that we use at multiple time points. At each time point, the system will have encountered more examples from more patients it has seen. And then, hopefully, the performance of those tasks they've seen before and new tasks increase. Because the task set will have those examples it hasn't seen and tasks it has seen. Cool. And so then, I guess, one thing that's a little bit different here from the other evaluation is you also want to be able to do well on past tasks, as well. Yeah, you want to see if it does well on new examples of past tasks. Cool. Anything else? No, I think we're probably [INAUDIBLE].. All right. Next group. OK. So we chose the example setting C, the robot in different environments. So one of the things we talked about was, actually, the problem of forgetting stuff. So if you constantly transfer the robot into new environments and learn new tasks, it could actually forget what it's learned in the past. 
So that's one of the very desirable and required properties we talked about. And also, we talked a bit about if there is some kind of expressiveness in the model that might limit what it can learn, like the overall number of tasks, in general. So if it learns more and more tasks, it could get worse in the performance for a single task over time, even if it doesn't yet. Also, it's like B and for C, we would kind of keep a database of the tasks we had in the past and basically see the more tasks we learned that we haven't forgotten any old stuff that we learned. Cool. So yeah, this is pretty well covered. The back left group. Sure. So we did the [INAUDIBLE] most individual [INAUDIBLE].. But also, I guess the [INAUDIBLE],, it would be like we can display the different [INAUDIBLE] that we're trying which could possibly affect our algorithm. Because I can represent the control to see what's performing best. And then the desired properties that we want from this task is basically [INAUDIBLE] like it's increasingly, progressively getting a difficult level of [INAUDIBLE].. So those things that we want the student to focus on would be increasing their knowledge, breadth, and depth so they can more [INAUDIBLE],, like the depth of their knowledge increasing and the breadth of their [INAUDIBLE] more difficult, like something that [INAUDIBLE] ends up [INAUDIBLE]. To progressively more difficult. And I guess the way to evaluate would be how the algorithm or the method is performing on tangentially related but kind of derived from the system, so how evaluation works in exams, for example. Like, testing the concept, that's kind of covered in the course, but the agent needs to develop to those patterns, like how is this organically learning and developing concepts to improve the breadth and depth of the subject? Got it. And so your evaluation would be through tests or through-- Yeah, kind of like a continuous evaluation, which is testing. So not all sort of dissimilar to what we're taught. But it's basically just the generalizability of the content. Yeah, that makes sense. One thing that's, I think, interesting to note is that when you do go towards more and more complex tasks, it may be that as you have more and more tasks, your performance may not actually continue to go up. It may actually stay constant or even go down because the tasks are also just getting harder and harder over time. So that, actually, may be a little bit different in comparison to if you have a sequence of tasks that are more IID, or similar in difficulty. Cool. The group in the front. We didn't cover a lot of things that we talked about. Our smarter, or our desirable properties, included good performance over time, not forgetting and not remaking similar mistakes. As to the evaluation, I think we have something slightly different. When we were comparing evaluation in terms of the time state, so is it better than it was? OK. Nice work, [INAUDIBLE]. But the [INAUDIBLE] and performance. So is it getting-- do you care more about is it getting better versus just the absolute performance? Cool. OK. The group there. We talked mainly about the virtual assistant task. And I think what's interesting about that one is different users might have different ideas of what constitutes successful help on a particular query prompt. So the system should be able to learn what that is. It should be able to adapt itself to the different users. 
And maybe it has to-- that's not so obvious, always-- but we talked about how you might generate data for that and then evaluating it by looking at how successfully it's able to help many different users other than just a few. Kind of like performance on, basically, all of the tasks come together, yeah? For all of the people. Yeah. Cool. And then I think we have one more group, which is in the back right. So we [INAUDIBLE] did the robot recording on a [INAUDIBLE] And the setting was that-- so the final goal that we want is for that we wanted to cook a meal. And we're training it by giving it some simple tasks, such as moving things, and making it more and more complex. So a desirable property is that it should be able to use the previous tasks in learning the new task. And so if we give it a more number of simple tasks before we give it the final task, then it should be able to learn the final task faster and better. So the value [INAUDIBLE] we will start, we will compare against just giving one simple task or giving a few different simple tasks and how well it is able to learn with those k tasks, how well it is able to do in the final task with an increasing number of simpler tasks. And the other thing was the same that many people have covered, that it should not forget the past. Cool. That makes sense. So that was actually a little bit different than the other ones in that you maybe only care about the performance on the last task. Perhaps the curriculum is only kind of a means to an end. It's not the-- you don't care about the performance on the intermediate tasks itself. I think that the only thing that is not mentioned over there that we thought of was how fast the agent, assuming that he solved our task before that, he can learn a new task faster, so the rate now for learning a new task. Yeah, so ability to learn a new task quickly. Interesting. Cool. So in general, actually, there's more consensus, I think, across the different examples than I've seen in the past-- generally, looking for good performance on past tasks and new tasks, although in some cases, we didn't really care about the performance on past tasks. We only cared about the performance on future tasks and later tasks. In some cases, you might be learning a single model that can kind of do everything, whereas in other cases, you might care more about adaptability. For example, different people-- the virtual assistant example came up where maybe different people do have different objectives. Someone wants a virtual assistant that has a certain-- I don't know, is very efficient, where someone else cares a lot more about being very thorough, for example. And so adaptability might actually be pretty different than if you were to just try to learn a single model that can do all of the tasks that it receives in sequence. There's a couple of other things that I thought of that I think actually can also sometimes vary between problem statements. I guess one thing that we did see a little bit is the tasks might be IID versus something that's kind of predictable over time versus a curriculum of tasks. You could also-- we didn't see this in any of the examples, but you could also have an adversarial sequence of tasks where, for example, you're trying to detect spam over time, spammers may try to create things that will get through the filter that you're trying to create. And then, in that case, the task that you see over time, or the data points that you see over time, may be more adversarial in nature. 
Another variation that can come up is having discrete task boundaries where you clearly have a new user versus more continuous shifts over time. I don't think we really saw too many examples of continuous shifts over time. But there's lots of examples in practice where, for example, people's sentiment on certain topics might change over time or the seasons might change over time and so forth. There also may be scenarios where there are very clear delineations between tasks and shifts versus not knowing exactly what those delineations are. And that can affect different algorithms. And then there's also some different considerations. So everyone talked about model performance. We may also care a lot about data efficiency, about the ability to learn tasks more efficiently over time. Another thing that actually didn't come up at all is computational resources. You might want to be able to learn a task with fewer computational resources than before. It may also be that as you see data over time, at some point, you may not be able to store all of the data that you've seen over the course of your lifetime. And you need to think about the resources that you have in terms of memory, as well. And in those instances, you don't want to necessarily have algorithms that will require memory that is linear in the number of tasks that you've seen over time. Because that will grow unbounded. And so oftentimes, another evaluation metric might be the number of computational resources that it's using or the amount of memory that it's using, as well. I guess another point about memory, as well, is in the patient example, which I think someone did, you may also not be allowed to store data over the course. User data can be sensitive. And for ethical reasons or legal reasons, you may not be able to store data for more than some period of time. And that can also affect these algorithms pretty dramatically, as well. And this relates to trying not to be able to forget past tasks or past users. And so that relates to privacy. There may also be other metrics that you care about in terms of interpretability, fairness and test time computation and so forth. So I guess part of the point of this exercise was to, I think, shed light on the fact that, depending on the problem setting that you're in, you may have different considerations for what is most important for that problem setting. In some cases, you might care a lot about the performance of certain performance metrics much more than others. And you may also have different constraints or different resources in terms of these algorithms. And as a result, because there is so much variation in the different problems people might care about, it means that some algorithms may be much better suited for some problem settings than others. Cool. Any questions up until this point? OK. [INAUDIBLE] you can always [INAUDIBLE],, even if your results are [INAUDIBLE] like an equation, to solve the equation. If you get the [INAUDIBLE]. So the question is, theoretically, can't you just kind of retrieve the user data even if you didn't store it on disk? Is that what you're asking? [INAUDIBLE] So this relates to-- this depends on the model that you're using. You may have a machine learning model that does have some sort of explicit memory mechanism that is kind of storing information. Or you have a model that has a lot of capacity and it's possible to back out the data from that model. 
And then there's other scenarios where you might train a model and it's actually not possible to back out the data from the model itself. In general, if privacy is a concern, then you want to use a method in which you can't back out the data from the model. So there's a lot of literature in what's called differential privacy that gets into how to train models such that you can't back out the data from the model. And there's also a lot of work that has shown that if you train a model on a sequence of data, it will actually, especially if that sequence of data is kind of changing over time, it will forget and start to do poorly on data from the past. Does that answer your question? Yeah. Cool. Any other questions? Cool. Great. So there's a lot of variety in this kind of lifelong learning problem statement. Now I'd like to talk a little bit about algorithms for solving it. I guess maybe to get a little bit more into the problem statement, there is a fairly simple way to formalize the general online or lifelong learning problem. And it's pretty simple. First, you have kind of time, and you're progressing through time. And at each point in time, you observe an input. This could be an image. It could be a patient record. And your goal is to predict the label at that given point in time. And then after you predict the label, you get to observe the true label. And this process repeats. This is a fairly typical kind of formalization of the problem. This doesn't cover all possible formulations. For example, in some problem statements, you may not be able to observe the label right away. There might be some sort of time delay. Maybe it takes, I don't know, a year to label your data, or maybe a month, or an hour. And so you may have some delay between when you make a prediction for a data point and when you observe the label. It may also be that you don't label all of your data. And so in that case, you may not be able to observe the labels for all of them. But this is a fairly general statement that's also pretty simple. And it makes it such that you can tractably-- you can think about this in theory. And there's been a lot of kind of theoretical work that's looking at this problem statement. Yeah? With this approach, wouldn't it take a long time to build up a large enough data set to get good accuracy? Let's say in the intermediate section, we have a few examples that you've seen [INAUDIBLE] Yeah. So the question is, wouldn't it take a long time to accumulate a data set that has enough data points, especially in deep learning regimes, where we want millions of data points? Yeah, absolutely. And so you wouldn't expect, in this problem setting, to be able to do very well right off the bat, especially if you're looking at image classification examples. And so oftentimes, in practice, you'll warm start this process with an existing data set that can help you initialize your predictor rather than starting completely from scratch. Yeah? Can we sort of think about this as a modified Markov chain or a transition from [INAUDIBLE] or optimizer and the [INAUDIBLE] say, a model convergence to [INAUDIBLE] the model parameters? Yeah, so I do think it's possible to formulate this as a Markov chain, especially if it does converge. Yeah. So there's a couple different variants on this problem setting, as well. So the inputs, you could have an IID setting where x and y are coming from this stationary distribution where the distributions aren't a function of time. 
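As a bare-bones sketch of that observe, predict, then observe-the-label loop, here is a small example (my own illustration; the linear model and the perceptron-style update are placeholder choices, not part of the problem statement itself):

import numpy as np

rng = np.random.default_rng(1)
d = 3
w = np.zeros(d)      # current model parameters
mistakes = 0

def data_stream(T):
    w_star = rng.normal(size=d)
    for _ in range(T):
        x = rng.normal(size=d)
        yield x, np.sign(w_star @ x)  # the label is only revealed after we predict

for t, (x_t, y_t) in enumerate(data_stream(1000), start=1):
    score = w @ x_t
    y_pred = 1.0 if score >= 0 else -1.0   # 1. observe x_t, 2. predict a label
    # 3. observe the true label y_t, then learn from it (perceptron-style step)
    if y_pred != y_t:
        mistakes += 1
        w += y_t * x_t
print("mistake rate:", mistakes / t)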
But you could also have a setting where these distributions are changing over time, as well. There's also what's typically referred to as a streaming setting where you're not allowed to actually store your data points and you only-- you receive a stream. And the reason why this is a setting is that you might be receiving so much data that it's just impossible to store it over time. So for example, if you're streaming video, for example, and you're a very large video platform, you may not have time to actually store all of the frames that you're receiving. So lack of memory is one reason. You could also have lack of computational resources or privacy considerations. You may not be allowed to store the data, or you want to create a service in which you are promising to the user that you don't store their data. One other reason that the streaming setting possibly could be interesting is if you're interested in studying analogs to the brain. We don't really have-- we don't have hard drives. We can't just access a hard drive like a computer does. And so there's some work that tries to understand and study how memory might work in the brain. And so in that sort of setting, oftentimes, you want the model to not have any sort of external storage. It instead needs to somehow store it in its weights. Cool. This setting is true in some cases. And you may also have-- maybe you don't have zero memory, but you have limited memory and you can only store a tiny bit of data. There are also a lot of cases where it is actually quite practical to store a ton of data. Hard drives are getting pretty big these days. And oftentimes, we can actually store data as part of the process. Cool. So the last thing that I'll mention is we talked a little bit about task boundaries. If you are kind of seeing these data points from a sequence of tasks, then instead of just observing xt, you can also observe the task identifier that corresponds to that task. Cool. So what do you actually want from an algorithm that operates in this setting? One quantity that often comes up in the online and lifelong learning literature is this notion of what's called regret. And the way that regret is defined is you're looking at the cumulative loss of the algorithm, of the algorithm's learning over time, minus the cumulative loss of the best learner in hindsight. And so specifically, this can be written with this equation where the regret at a given point of time, corresponding to capital T, will basically just say, OK, what was the loss that your algorithm achieved at each point in time summed over time? So that's pretty simple. That's just the loss that your algorithm is getting over time. And then the second term is looking at, in hindsight, if we were able to pick a single set of parameters for all of those time periods, what's the best parameter vector that you could have done, given all of that data and evaluating the sum of the losses for that particular parameter vector. Yeah? For that, the second point, the parameter vector that you could have done across the board, is it just to launch the latest set of parameters at your latest task, at your current task? Is that the best one because it's seen the most data? Or could it be another one? Yeah, so the question is, does data just correspond to the last parameters, or could it be another one? So one important thing to note is that, in general, you can't evaluate regret in practice. Because we don't necessarily have access to that best possible parameters. 
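Written out in symbols (the notation here is mine, not from the slides), the regret being described is

\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} \ell\big(\theta_t, (x_t, y_t)\big) \;-\; \min_{\theta} \sum_{t=1}^{T} \ell\big(\theta, (x_t, y_t)\big),

where theta_t are the parameters the algorithm actually used at step t, and the second term is the loss of the best single parameter vector chosen in hindsight.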
But it is pretty useful for analysis. That said, if you did want to approximate this at the end of training, you could basically take all of the data that you had seen so far, train a model on all of that data, and that would give you a reasonable proxy for that second term. Yeah? Couldn't this value [INAUDIBLE] be negative, as well, since you're looking for [INAUDIBLE] different data for-- Right. So typically, this value is looked at in the IID setting, where, basically, there exists a single model that can do well on all of the data. But if you were in a non-IID setting, you could actually certainly do better. You could have a negative regret, because you might be able to adapt to each point in time and get a model that's better than just a single model evaluated on all the data. Cool. Now, one thing that's worth also mentioning about this is that this is a quantity that is a function of capital T, and so oftentimes we think about how the regret grows over time. And one thing to note is that it's actually fairly trivial to construct an algorithm that has linear regret. Does anyone have any idea for why that's the case? Yeah? Don't you just have zeros, and then the minimum of the second term will just increase by the same amount every time step? Is that what-- Sorry. Can you repeat that? I didn't quite get it. Maybe I'm misunderstanding-- No. I think-- well, I think you were mostly right. But I didn't fully understand it. If you just set the parameter at each time step to be zero, then the second term of this is just increasing; you add one of the same copy every time step. And then if you assume the tasks over time to be similar in terms of the loss magnitude, you will just have something that grows linearly. Yeah. So basically, if your algorithm just produces a constant set of parameters for theta t, then you can actually get linear regret just with that. And maybe we can walk through an example to see why that might be the case. So we can look at a very simple example, which is maybe you're really into basketball, and you're interested in being able to predict the number of points that a player will score over time. At the first point in time, you don't know much about the player; they're playing their first game. And so maybe you guess that they're going to score 10 points in the game, but then it turns out that they score 30 points in that game. And then at the next point in time, in the next game, you also guess 10, because you just have your constant model, and then they actually score 28 points. And likewise, maybe the third time you see them play, you also guess 10, because you're using just a constant model, and they actually score 32 points. In this case, we could first compute the first term: if you're using an absolute loss, your loss would be 20 for the first time step, plus 18 for the second time step, plus 22 for the third time step. Whereas in hindsight, the best fixed model, the single prediction that works best across all three games, should guess 30, and its losses are 0, 2, and 2. So the second term stays fairly small, whereas the first term is growing by roughly 20 at each time step: the constant model has linear regret. In contrast, you can do much better if, instead of guessing 10 at every time step, you guess the average of the scores that you had seen previously. So at the second time step, you could guess 30, and then at the third time step, the average of 30 and 28 is 29, and so you could guess 29. And in that case, your regret is going to grow sublinearly, because then, instead of being 20, 18, 22, your losses will be 20 at the first time step, then 2 at the second time step, then 3 at the third time step. And so if you were to plot how this grows over time, the first one goes up by about 20 at each time step, so the regret is linear, whereas for the second one, you go up at the first step but then it grows much more slowly at future time steps. Cool. So I guess the takeaway here is that in these online learning settings, you basically want to have sublinear regret, and you want it to grow as slowly as possible over time, because that means that you're actually getting better and better at making predictions on your online learning problem. Cool.
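Here is the arithmetic of that example as a tiny sketch (my own illustration; the scores and the grid search over constants are made up, and the absolute loss is used as in the example):

import numpy as np

scores = np.array([30.0, 28.0, 32.0])

def cumulative_loss(preds):
    return np.abs(np.asarray(preds) - scores).sum()

# Learner 1: always predict 10.
constant_preds = [10.0, 10.0, 10.0]

# Learner 2: predict 10 at first, then the running mean of past scores.
running_mean_preds = [10.0, scores[:1].mean(), scores[:2].mean()]  # 10, 30, 29

# Best fixed prediction in hindsight (grid search over candidate constants).
best_const = min(np.arange(0.0, 60.0, 0.5), key=lambda c: np.abs(c - scores).sum())

for name, preds in [("constant 10", constant_preds), ("running mean", running_mean_preds)]:
    regret = cumulative_loss(preds) - np.abs(best_const - scores).sum()
    print(name, "loss:", cumulative_loss(preds), "regret:", regret)
# The constant learner's regret grows by roughly 20 per game (linear); the
# running-mean learner pays about 20 once and then only a couple of points per game.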
One of the really important concepts in lifelong learning is this notion of positive and negative transfer, and also forward transfer and backward transfer. So we talked a lot about trying to be able to do better on future tasks and also better on past tasks. Positive forward transfer is this notion of being able to do better on future tasks as a result of seeing the previous tasks, and this can be measured in comparison to learning the future tasks from scratch. Positive backward transfer is basically the ability to do well on past tasks. And so if you have positive backward transfer, that means that the current tasks are actually causing you to do better on the previous tasks than if you had only seen the previous tasks and learned them from scratch. Yeah? Is there an example where this happens? Where positive backward transfer happens? Yeah. So one example is maybe for each task, you don't have that much data; maybe you only have, I don't know, 100 data points per task. And maybe once you basically accumulate enough data from all the tasks you've seen so far, you may actually do better on the first task than if you hadn't seen all of that data. I think that's probably one of the most common cases where you might see this. And then you could also, likewise, define negative forward transfer and negative backward transfer. That's where, basically, you're actually getting worse at future tasks or worse at previous tasks as a result of seeing the other tasks. And negative backward transfer is actually quite common, especially if you have a small amount of memory and you basically start forgetting those past tasks. Positive forward transfer is also fairly common because you're essentially kind of pretraining on the previous tasks rather than starting completely from scratch.
Yeah? When negative backward transfer occurs, do you think it's mainly because of the limited memory thing, or it's impossible for there to be a model that can do well on all of the tasks? Or is it just because of something about the training order? Like, if you processed the same data differently, you could get a model that did well on all of them? Right. So the question is, when you see negative backward transfer, is it caused by memory or caused by ordering in which the tasks were seen? Typically, people see it the most when they have very, very little memory and they just kind of train on all the data-- basically train on the data as it comes and don't actually continue to train on the old data. That's really the most common instance of negative backward transfer. That said, you can also have both negative forward and negative backward transfer if your model just doesn't have enough capacity. And so if you're trying to train on a lot of tasks with a really small-capacity model, that can lead to worse performance than if you were to train only on that task from scratch. And in that case, you could actually see negative transfer both in the forward direction and in the backward direction. Yeah? Is this taxonomy really that clean? [INAUDIBLE] tasks causes them to converge backwards to a higher loss? So your first question is, is the taxonomy that clean? I do think that these are pretty cleanly defined. I think that you can kind of write down equations for each of these. Your second question was-- can you repeat your second part of the question? I recall that we discussed previous tasks causing to converge faster or converge to a higher loss. Right, I see. So this is often measured in terms of final performance on the task. But you're right in that there may be-- there's more than one criteria that you care about. You may care about learning efficiency. You may care about final performance. And you may actually get better efficiency but worse final performance. And so it's important-- you can basically measure both of these in terms of performance and in terms of efficiency. It depends on what your performance metric is. And so if your metric is the number of images it takes to get a certain level of performance, then you can measure these in terms of that metric. You could also measure each of these in terms of the final performance, as well. And so you basically just need to define the metric you care about. And then these terms make sense in the context of a certain metric. And actually, if we have time at the end of the lecture, we'll show some experiments where we're measuring both learning efficiency and final performance. Because oftentimes, you do care about both. Cool. So let's get into some basic approaches. And really, there's two very simple approaches, which are probably the things that all of you could come up with if you were to try to approach one of these problems. The first approach is to store all the data you've seen so far and train on it. And so, in particular, up until time t, you have-- at time t, you have all of the data that's basically the union of all the data points that you've seen. Whoops. Maybe I'll use capital T for this. So at time T, the union from t equals 1 to T. And you just train a model that-- this is the data up until that time step-- and then you just train a model that tries to minimize your loss function on that data. And you train this model at every single time step. 
Of course, if you were to train a model from scratch on the data set at every time step, that would be very expensive. And so typically, you'll warm start this model with the model trained at the previous time step, and you won't necessarily train it all the way to convergence. This is often referred to as the follow-the-leader algorithm, and it generally achieves very strong performance. It can be computationally intensive, but if you warm start, that can help with the amount of compute that you need; if you're basically just continuously fine-tuning on the data you've seen so far, that helps. It can also be memory intensive, because you need to store all the data that you've seen so far. For some applications, this is a deal breaker; for some applications, this is completely fine. Cool. And then the second approach is simply, at every time step, to take a gradient step on the data point that you observe. And so what this means is that instead of storing all of this data, at time t you will just update your model by evaluating it on the data that you received at that particular point in time; you just look at the data from that time step and take a gradient step on it. You can also take a couple of gradient steps if you think your model needs it. This is basically just stochastic gradient descent. It's computationally very cheap, and it doesn't require any memory beyond the memory of one data point and your parameters. It is subject to negative backward transfer, because it might forget some of the data points that it saw before; oftentimes, when we run SGD, we need multiple epochs, multiple passes over the data, and here we're not really given multiple passes over the data. This is often referred to as forgetting, sometimes referred to as catastrophic forgetting, although I think that's a little bit dramatic, in some cases. It can also be somewhat slow to learn, as well, especially if you only take one gradient step; and if you take multiple gradient steps, then you might overfit to the current data point that you are seeing. Any questions about these two approaches? Yeah? [INAUDIBLE] forwarding. But like any [INAUDIBLE] in this case, [INAUDIBLE] getting the data in an online fashion? Yeah. So if you're in the IID setting, this is not too bad. If you're in a setting where you're not getting data points IID, then this can be more challenging, because you'll have correlations between neighboring data points, and that can especially cause things like forgetting, where the model just memorizes the correlations that it's seeing right now. For example, if you pass in data that has correlated labels, it might just memorize that label and not even consider outputting labels that it saw in the past. Cool. So these are the really simple approaches. They're actually not that bad in practice, and that's why things like SGD are very common. There's also a question of whether we can do better. Before we talk about whether we can do better, we can look at just applying one of these very simple algorithms in a robotics scenario. And we talked a little bit about warm starting. And so in this case, we're going to warm start it on a pretty large data set of robot grasping. It corresponded to 580,000 grasp attempts collected across seven robots. This model was able to achieve an 86% grasp success rate on objects that look like this.
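Before digging further into the robot results, here is a minimal sketch contrasting the two simple baselines above on a toy regression stream (my own illustration; the problem, learning rates, and number of passes are all made-up choices): follow the leader keeps everything and retrains with warm starting, while online SGD takes one step per incoming point and stores nothing.

import numpy as np

rng = np.random.default_rng(0)
d, T, lr = 5, 200, 0.05
w_true = rng.normal(size=d)

def grad(w, x, y):
    return 2 * (w @ x - y) * x   # squared-error gradient for a single point

w_ftl, w_sgd, seen = np.zeros(d), np.zeros(d), []
for t in range(T):
    x = rng.normal(size=d)
    y = w_true @ x + 0.1 * rng.normal()
    seen.append((x, y))

    # Follow the leader: a few warm-started passes over everything stored so far.
    for _ in range(5):
        for xs, ys in seen:
            w_ftl -= lr * grad(w_ftl, xs, ys) / len(seen)

    # Online SGD: one step on the current point only, nothing stored.
    w_sgd -= lr * grad(w_sgd, x, y)

print("FTL error:", np.linalg.norm(w_ftl - w_true))
print("SGD error:", np.linalg.norm(w_sgd - w_true))
# FTL typically ends up closer to w_true here, at the cost of storing and
# revisiting all of the past data at every time step.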
But then, if you gave it objects that were a little bit different, like these glass bottles, the performance dropped significantly. And so the way that this will work is it will be very similar to the first algorithm that we saw where we want to continuously fine tune on the data. And it's going to use a Q learning algorithm to do this. It's not too important, the specifics of the RL approach, but basically, it takes the initial data, it kind of trains an initial model on that data, and then uses that to initialize a Q-function or initialize the neural network model. And it adapts on a mix of the original data and the data in the new scenario, like with the glass bottles. And even just fine tuning does quite well in this scenario. So if you just do pretraining and then fine tuning to one scenario, you could increase the success rate from 32% to 63% in this harsh lighting condition, from 49% to 66% with transparent bottles, and so on and so forth for these different scenarios. But this is just fine tuning. What about a lifelong learning setting? So we can basically take this same sort of approach and apply it in this sequential learning scenario where for each new condition that the robot encounters, it will basically initialize the neural network with the parameters from the previous time step, or the previous task. And then it will be fine tuned on a mix of the data from the current scenario and the original pretraining data set. And if you do this, you can get performance that also looks quite good. And so it's able to continuously fine tune on each new scenario that it experiences and is able to get much better performance than if you just took the model trained on the first data set. One thing that you might notice is that these numbers are actually monotonically increasing. I think that that happened by chance. There is no reason to necessarily expect it to get better and better as you continuously fine tune. It may be that, for example, the last task is maybe a little bit easier. Or it could be that it just benefited from having more gradient steps, for some reason, or something like that. Now, this is a scenario where we're keeping around data from the first-- the initial data set. And so, as a result, I would expect the models here to be able to still transfer well to the original environment. That said, if you didn't assume that you had all that memory, then it probably would still forget in that scenario. And so what I'd like to talk about in this next section is whether we can design an algorithm that uses only a very small amount of memory in order to avoid that sort of negative backward transfer. So the case study here will be this idea of having an episodic memory. And we're going to assume that we can store a very small amount of data per task in memory. And then, when making updates for new tasks, we're going to try to ensure that those updates don't unlearn the previous tasks. So the first step is easy. The second step is a little bit more difficult. That's kind of the meat of the algorithm. And so we're going to assume that we're learning some predictor that takes as input the example and the task ID and predicts the label. And then we'll assume that we have some memory for each task. This memory might have as few as five examples, for example. And then at each time step, we're going to minimize the loss of our predictor on the data from that time step for that task. 
But the key part is that we're going to add this constraint, which is that we don't want the new predictor that we're optimizing over to be worse on task k than the predictor from the previous time step. And we're going to try to apply this constraint for all of our previous tasks, not just our current task. And so the idea here is that we want to find a new predictor that does well on the new task, or the new data point, such that the loss on the previous tasks doesn't get worse. And this constraint will use the memory that we had stored for the previous tasks. And so this is going to make the assumption that that memory is at least somewhat representative. And we could imagine possibly overfitting to that memory and just memorizing those few examples, although, in practice, if you're making a small update to the model, you would expect that if it doesn't make the predictions on some of the data points worse, it probably shouldn't make other things a lot worse, either. Now, there's a question of how we actually implement this constraint in practice. And since what we want is to not make those losses worse, we can add a kind of local linearity assumption, and basically, instead of placing a constraint on the loss function, we can place a constraint on the gradients. And so what this looks like is we'll try to ensure that the gradient that we're applying to our predictor points in the same direction as, or is orthogonal to, the gradient for each previous task. And if the gradients are pointing in the same direction or are orthogonal, then, assuming local linearity, that means that the loss function for the previous task won't increase, won't get worse. So if they're orthogonal, we would expect the previous task's loss to stay the same; and if they're actually pointing in the same direction, or at least somewhat in the same direction, then you may actually get positive backward transfer, where you might actually improve on the previous tasks. Cool. So we're basically going to try to improve on the current time step while also ensuring that the gradients are pointing in the same direction as the gradients for the previous tasks. It's worth noting that it's possible that this constraint leads to an infeasible constrained optimization, because if the gradients for your current task and your previous tasks are pointing in opposite directions, it may not be possible to improve on the current task while also going in the same direction as all these other past tasks. And this may become increasingly infeasible as you increase the number of past tasks, because that's going to increase the number of constraints on your optimization. Yeah? [INAUDIBLE] if the number of your data points is smaller than the number of parameters in your model, [INAUDIBLE] become infeasible for every single time step? If a number of data points is greater than the number of parameters in your model, would it become infeasible? It really depends on what the data points are, certainly. In general, yes, although in this case, we're actually looking at this averaged across the data points within a task. And so it's really if the number of tasks is above the number of parameters. And in practice, we usually have a very large number of parameters, and the number of tasks is far less than the number of parameters. Yeah?
[INAUDIBLE] doesn't necessarily [INAUDIBLE] the constraints from [INAUDIBLE] on-- the main constraint that we are [INAUDIBLE] doesn't guarantee that [INAUDIBLE]?? You're asking, if we don't have local linearity, then this doesn't imply this? Yeah. In practice, this doesn't exactly guarantee the [INAUDIBLE] constraint variable in that [INAUDIBLE]. So if you are locally linear, then this, I believe, should imply this right here. Basically, the linearity is important because, basically, if your gradient points in a certain direction, you want to ensure that the-- what's a good example of this? Say your loss function looks like this, which is like pretty heavily non-linear. Then if your gradient right here is saying that it's OK to basically go right, because then you'll have a smaller loss function, and you take too big of a step, then you'll actually end up-- if you go right and you take too big of a step, you might actually end up here. And so that's a problem. But if it is actually locally linear, and so your step size is smaller than that, then that means that you will actually ensure that it goes down or, at least, that it stays the same if the equality holds for the inequality. Cool. And you can formulate this as a quadratic program and solve it once you evaluate the gradients. Cool. So they evaluated this approach on a few different experiments. They looked at three different lifelong learning problems. The one is a sequence of MNIST tasks, so digit classification, where each task has a different permutation of the pixels in the image. The second task is different rotations of MNIST digits. And the third task is a CIFAR-100 image classification task where each task is introducing five new image classes. BWT is short for Backward Transfer. FWT is short for Forward Transfer. And their total memory size that they assume that they had available was 5,012 examples. Cool. So in the left plot, we can see that-- the method that we just discussed is called GEM, or Gradient Episodic Memory-- we see that the average accuracy is similar to or higher than training a single model, training independent neural networks for each task from scratch, and some of these other prior methods. We can also see that some of these prior methods have negative backward transfer, meaning that they forget the previous tasks, whereas this approach is actually able to get a small amount of positive transfer. And then we also see that there's very little forward transfer. And then the right plot is actually evaluating the accuracy on task one after you train on each additional task. And so we see that this approach is able to maintain a pretty high performance on task one, whereas these other methods, the performance starts to drop as you see more and more tasks. We can similarly look at plots for the second problem, which is these different MNIST rotation tasks. And again, we see a somewhat similar trend. One thing that's different here is we actually see a lot more forward transfer. And that's probably expected because it's probably pretty difficult to-- there isn't a lot shared between different permutations of the pixels, whereas if you're rotating these images, there's a lot more that might be shared across the different tasks. And then for CIFAR-100, it's also somewhat of a similar story. It's a little bit kind of noisier in terms of the performance. But it's also able to get less negative backward transfer and a higher accuracy than some of these other methods. 
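As a concrete (and deliberately simplified) picture of the gradient constraint described a couple of paragraphs back, here is a small sketch. It is my own illustration: it uses a single averaged constraint over the memory, which is closer in spirit to the later A-GEM variant than to GEM's full quadratic program, and the toy losses and names are made up.

import numpy as np

def constrained_update(w, grad_current, memory_grads, lr=0.1):
    # Take a step on the current task, projected so it does not (to first order)
    # increase the loss on the stored memory examples.
    g = grad_current
    g_ref = np.mean(memory_grads, axis=0)   # average gradient over the memory
    if g @ g_ref < 0:
        # Violates the constraint <g, g_ref> >= 0: remove the component of g
        # that points against g_ref (projection onto the allowed half-space).
        g = g - (g @ g_ref) / (g_ref @ g_ref) * g_ref
    return w - lr * g

# Toy usage: two quadratic "task losses" with different optima.
w = np.array([2.0, -1.0])
grad_task_new = 2 * (w - np.array([0.0, 0.0]))    # gradient of ||w - (0, 0)||^2
grad_task_old = 2 * (w - np.array([1.0, -1.0]))   # gradient of ||w - (1, -1)||^2
w = constrained_update(w, grad_task_new, [grad_task_old])
print(w)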
The last thing that I'll point out here is if we take a step back and think about these problems, these MNIST problems and so forth, in practice, we can store all of MNIST on a hard drive, or even in RAM, in most cases. And so these tasks are certainly somewhat synthetic. And so one general thing to keep in mind when working on lifelong learning is trying to make sure that the experimental domains that you're looking at are ones that are reflective of real-world problems that you ultimately care about. Cool. And then one other thing that I'll mention that I think is kind of cool-- we don't have time to talk about it in this lecture-- but you could actually use meta-learning to acquire a learning procedure, an online learning procedure, that can avoid forgetting. And so there's a couple of works that have looked into this topic and fairly successfully developed update rules that don't have negative backward transfer, that is, that don't forget. And so if you're interested in learning more about that, you can take a look at these references. Great. And then in the last six minutes, I'd like to talk about a slightly different variation on the online learning formulation. So so far, we looked at a formulation where we're basically evaluated on a sequence of data points as we receive it. And this problem setting can make a lot of sense in certain scenarios, especially if we have a stream of data. But if you do actually have different tasks, this formulation may not necessarily make full sense from the standpoint of the evaluation. Because when you see a new task, you're actually going to be also evaluating its zero-shot performance. And it may actually be very difficult to perform well zero-shot on a completely new task. And in some cases, kind of more realistically, you might be given a small amount of data for each new task that you're looking at. And so the picture might look something more like this, where you are actually kind of learning each task and then being evaluated on that task rather than being evaluated on that task right from the start. And what we might hope for in a setting like this would be something where we're learning the first task pretty slowly. And as we see more and more tasks, we're able to learn more and more quickly over time. Basically, the thing that differs in terms of the evaluation is instead of measuring performance on every single data point that you see, you can evaluate the performance only after seeing a small amount of data for each new task. And this is really primarily a difference in the evaluation rather than in the stream of data. And so, in particular, what this looks like is for each task that you see over time, you observe a small data set for that task. You use some update procedure, like gradient descent or something else, to produce parameters for that task. And then you observe a data point. You're asked to make a prediction on that data point. And then you observe the label. And so these last three steps are identical to the standard online learning setting. The thing that's different is you're given this initial period to try to actually learn the task with a small amount of data.
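To make this protocol concrete, here is a tiny runnable sketch on a made-up stream of 1-D regression tasks. The task distribution and the "meta-learner" here (just refitting a shared initialization on all pooled data, then taking a few gradient steps on each new support set) are purely illustrative assumptions; the point is the loop structure: adapt on a small support set first, then get evaluated.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Toy task: 1-D linear regression with a random slope.
    slope = rng.uniform(-2, 2)
    xs = rng.uniform(-1, 1, size=6)
    ys = slope * xs + 0.05 * rng.normal(size=6)
    return xs[:5], ys[:5], xs[5], ys[5]         # support set, then one query point

def adapt(w_init, xs, ys, lr=0.5, steps=10):
    # A few gradient steps on the support set (the "update procedure").
    w = w_init
    for _ in range(steps):
        w -= lr * np.mean((w * xs - ys) * xs)
    return w

buffer_x, buffer_y = [], []
w_meta = 0.0                                     # meta-learned initialization
total_loss = 0.0

for t in range(100):
    xs, ys, xq, yq = make_task()
    w_t = adapt(w_meta, xs, ys)                  # learn the task from its small data set
    total_loss += (w_t * xq - yq) ** 2           # only evaluated *after* adapting

    # "Follow the meta-leader": keep everything seen so far and update the
    # meta-parameters on it, warm-starting from their current value.
    buffer_x.append(xs)
    buffer_y.append(ys)
    w_meta = adapt(w_meta, np.concatenate(buffer_x), np.concatenate(buffer_y), steps=5)

print("average post-adaptation loss:", total_loss / 100)
```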
And in this setting, you can actually create the analog of regret from the online learning setting in this online meta-learning scenario, where it's exactly the same as before, except instead of looking at the loss of the predictor, the predictor first gets to apply this update procedure-- which could be one step of gradient descent, which could be something else-- applied to the training data set before it's actually evaluated on each task. And again, the goal here would be to try to get sublinear regret rather than linear regret. Cool. And you could think of this as the loss of the update rule you actually used over the course of learning minus the loss of the best update rule in hindsight. Cool. We can apply meta-learning algorithms to this kind of setting. And you can basically take the same follow the leader algorithm. And instead of follow the leader, you could have something like follow the meta-leader, where you store all the data that you've seen so far, you meta-train on that data that you've seen so far, and then you apply the update procedure that you've meta-learned to the current task, and repeat that process. And you can basically-- also similar to follow the leader-- you can warm start your metaparameters with the metaparameters from the previous time step. If the tasks you're seeing in sequence are non-stationary, then it can be useful to use optimization-based meta-learners for this. Because you would still expect the update procedure to do well on tasks that are possibly out of distribution. So that's also something to keep in mind in this setting. We're running out of time, so I think I'll probably more or less skip this. But kind of the takeaway here is that if you measure the learning efficiency and the learning proficiency-- so how fast it's learning and the error or the performance it has as you increase the task index-- you see that, in general, these algorithms are all able to decrease the number of examples they need and decrease the error as they see more and more tasks. But if you use a meta-learning algorithm versus some of these other algorithms, it's actually able to learn more efficiently over time and also do better and better over time. And this suggests that some of the meta-learning things that we've learned in this course are actually quite well suited for this sort of online learning setting. Cool. So the takeaways from the lecture are, first, there's lots of different flavors of lifelong learning. Unfortunately, a lot of the work out there puts them under this same name. And so that means that if you look up a paper on lifelong learning, it might end up being a very different problem setting than a different paper that studies lifelong learning. Defining the problem statement is often one of the hardest parts of this. And so hopefully, the exercise at the beginning got you thinking a little bit about how you might define problem statements for different problems. And lastly, you can sort of view meta-learning as one slice of the lifelong learning problem where you have some previous experience, and your goal is to very quickly learn something new at the current time step. And it's also a pretty open area of research.
AI_LLM_Stanford_CS229
Stanford_CS229_I_Societal_impact_of_ML_Guest_lecture_by_Prof_James_Zou_I_2022_I_Lecture_18.txt
So I'll be telling you about some of the applications of machine learning, especially in health care settings. So I'm an assistant professor here at Stanford. My name is James Zou, and a lot of my group works on actually developing and deploying AI systems for biomedical and for health care applications. So feel free to stop me if you have questions at any point. So what I want to do today is to first give you a few examples, right, a few case studies, of what kind of AI systems we are using and deploying in health care settings and also talk about what are some of the challenges in actually building AI systems for health care applications. So the first example I want to give is actually based on this paper that we published a couple of years ago. It's on a computer vision system that we developed for assessing heart conditions, right. So the idea here is that there are these ultrasound videos. So if you go to the Stanford Hospital, right, or most of the hospitals, they will take a lot of these ultrasound videos, which look at the human heart. And we developed a system to basically read these ultrasound videos and, based on these videos, to assess the different cardiac conditions of the patient. And the system is also now being deployed-- we developed the system, published it, and then we spent much of the last two years trying to actually deploy this at different parts of Stanford. For example, that's a setup that we have using this at the emergency department here at Stanford. OK, so how does this work, right? So these are cardiac ultrasound videos, also called echocardiograms; an example of this is shown on the left here, right? So if you think about the heart as like a power pump, the standard way to estimate how much power the heart is generating is by looking at these ultrasound videos. So there are actually millions of these videos that are being collected every year around the US. And the current workflow is that the cardiologist or the physician will actually look at these videos, and they will try to identify the frame of the video where this chamber of the heart is actually the most expanded, where it's the largest, and also try to identify the frame where the chamber is the smallest, right? And by looking at how much the volume of the heart changes going from the largest to the smallest, they can get an estimate of how much power the heart is producing, if you think of the heart as like a pump. So as you can imagine, that process can be quite labor intensive because it's quite manual, right, because they have to go through the entire video by hand. And then once they find the frame, they have to actually trace out the boundaries of the chamber to figure out the volume of the chamber of the heart. And all of those steps currently are done basically with manual annotations. So this is where we thought these machine learning applications, especially computer vision, can be very useful, right? So we developed a system called EchoNet, right? And what EchoNet does is to basically mimic this clinical workflow, right? So it takes as input the same kind of cardiac ultrasound videos like the one we see here, right? So it produces a real-time segmentation of the relevant chamber of the heart, which is the one that's shown in blue, right?
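Just to pin down the quantity being described here: the fraction of blood pumped out per beat is the relative change in chamber volume between its most expanded (end-diastolic) and most contracted (end-systolic) frames. A one-line version, with made-up volumes:

```python
def ejection_fraction(end_diastolic_volume, end_systolic_volume):
    # Fraction of the blood in the chamber that gets pumped out on each beat.
    return (end_diastolic_volume - end_systolic_volume) / end_diastolic_volume

print(ejection_fraction(120.0, 50.0))   # roughly 0.58, i.e. about 58%
```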
And in addition to doing this real-time segmentation of the heart chamber, it also produces a real-time assessment of how much power the heart is producing, which is technically called the ejection fraction. So that's what's shown here. So by doing this, it really simplifies and automates these different manual parts of the clinical workflow. So I won't go into too much detail of the algorithm, but here's a high-level overview of what it's doing, right? So it's basically taking videos as input, and there are basically two components of the algorithm: the top arm and then the bottom arm. So the top arm is basically looking through these videos using a spatiotemporal convolutional network, right? Because it's a video, we have both the spatial information-- for any given frame, how big the chamber is-- and also the temporal, dynamical information, right? So this is sort of a modification of the standard kinds of ConvNets that you've typically seen in ImageNet-type applications, by adding an additional dimension to capture the temporal dynamics. OK? OK, so it's like a three-dimensional convolution in that sense, right? (A toy code sketch of this kind of 3-D convolution is shown after this exchange.) So it's going through that at the top to extract the features. At the bottom, it's basically doing this real-time segmentation, right? So it's basically producing a segmentation of this chamber of the heart that was colored in blue, right? And these two arms come together at the end to make an actual real-time assessment of how much power, or the ejection fraction, of the heart for every beat, because, once you have the segmentation for every beat, you can actually then assess the power. So after it does this assessment, there's a final classification layer where it's actually trying to predict all sorts of relevant and clinically interesting cardiac phenotypes. So there's the probability of heart failure, which you can predict. You can also assess the ejection fraction, which, again, is basically how much power the heart is producing. So it turns out that we can also use the same layer to predict all sorts of other functions, like liver or kidney function, because it turns out that, once you know how the heart is doing, you can actually learn a lot about the rest of the body. Yeah, question. You choose the [INAUDIBLE]. So I guess like when [INAUDIBLE] he wants me to do with sequence, you do something like the [INAUDIBLE]. Here, you explicitly pass in three dimensions, at least sometimes. How do you decide [INAUDIBLE]? Yeah, so it's a great question. So here, if you look at these videos, right, the heart is actually pretty repetitive, right? Roughly about once a second, the heart will expand and then contract. So there's actually a lot of repetitive spatial information, which actually makes it quite well suited for these kinds of more convolutional architectures, which are looking for these spatial patterns or temporal patterns. And here, in particular, we're basically looking at a time scale of about once every second, right? So we get maybe around, I think-- and then for every second, we make one assessment of the ejection fraction and also an assessment of the heart condition for every individual beat, which is about once a second. And then what happens in the end is that you have, for every beat of the heart, an assessment of, OK, how much power is the heart producing and how likely is the heart to have different diseases.
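As promised, here is a toy sketch of what a spatiotemporal (3-D) convolutional backbone looks like in code, assuming PyTorch. It is only a stand-in to show the extra time dimension; it is not the actual EchoNet architecture, which also has the segmentation branch described above.

```python
import torch
import torch.nn as nn

class TinyVideoNet(nn.Module):
    def __init__(self, num_outputs=1):
        super().__init__()
        self.features = nn.Sequential(
            # Kernels now span (time, height, width) instead of just (H, W).
            nn.Conv3d(1, 16, kernel_size=(3, 7, 7), stride=(1, 2, 2), padding=(1, 3, 3)),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),             # pool over time and space
        )
        self.head = nn.Linear(32, num_outputs)   # e.g. a single regression target

    def forward(self, clip):                     # clip: (batch, 1, frames, H, W)
        h = self.features(clip).flatten(1)
        return self.head(h)

clip = torch.randn(2, 1, 32, 112, 112)           # 2 clips of 32 grayscale frames
print(TinyVideoNet()(clip).shape)                # -> torch.Size([2, 1])
```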
The video itself actually could have multiple seconds. You can actually capture many beats. So at the end, we actually do an aggregation across all of those beats to say holistically for the patient then what is the status of the patient. Cool. Other questions? Great. So this is the system now that's actually used and developed here using data from Stanford. And we also test the system, both at Stanford and at other hospitals. So one of the places we tested is actually in a hospital in Los Angeles, the Cedars-Sinai. And we just took the algorithm without any modification and then just shipped it to Cedars, right, and then just see how did it do, right? And we're actually quite happy to see that, even without any modification on a different hospital where they had different ways of collecting these images, collecting videos and data, the algorithm actually had very similar performance as it did at Stanford. So the AUC is quite high. It's about 0.96, right. OK, so that's the first example. Any questions about that? OK, so the second example I want to briefly mention, right, is an application of AI more for telemedicine or telehealth applications. So what is telehealth, right? So in normal settings, where usually if you have some sickness or if the patient has some illness, they'll actually come in to visit the doctors in person. But recently, the last few years, there's been an explosion of the need for visiting and having patient doctor interactions, not in person, but through digital formats, right, especially without having the patients needing to leave their homes. And so you can imagine that, really, the need for these kind of telehealth or telemedicine applications have really expanded, especially during the pandemic, right? So for example, just at Stanford and Stanford hospitals, just over the last two years, there's actually something like a 50-fold increase in the number of these digital or televisits compared to about two years ago. So one of the-- so telemedicine could potentially be really transformative for health care, right? If you can imagine not having to actually leave your home and drive an hour to come to Stanford, right? It's much easier to see doctors and also make appointments. One big challenge, right, with telemedicine in general is the idea that can you actually get sufficient information without having the doctors seeing the patient in person, right? And in particular, oftentimes, a lot of the information that the doctor gets is by sort of visual interactions with the patient. If I actually see you face to face, then you can often examine them quite closely. And that's difficult to do when you're doing this on Zoom or some other video visits. And that's really one of the big challenges of telehealth is in getting high-quality images, right, from the patient to the doctors. And for example, at Stanford there are actually a large number of visits that are wasted. So patient will set up a visit with a doctor, but the doctor is not able to get sufficiently good quality images from the patient. So then you can't really make an informed recommendation or informed diagnosis, right? And then they have to reschedule and we turn that back into in-person visits. So it's actually a large number of hours both by the patient and physicians that's wasted because of this lack of quality images. 
So what we want to do is to basically see, can we actually use machine learning, especially computer vision, to improve the quality, right, of these images specifically for these telehealth-type applications, right, because it turns out that people are very good at taking photos for their Instagram and for their Facebook. But maybe they're not so good at taking photos that are clinical quality, i.e., informative for this clinical decision-making process. So the idea is, can we actually use machine learning to guide people and help people to take more clinically informative images? So that's the motivation behind this algorithm that we developed. It's called TrueImage, which has also been commercialized through Stanford. And the motivation is quite intuitive, similar to how online check deposit works. So the idea is that we want something that's very simple that could be run on people's phones. And then it would automatically tell people, is the image that you're taking, maybe of your skin lesion, sufficiently high quality for your dermatologist or for your clinical visit? If it's not good quality, then the algorithm actually provides real-time feedback, guidance to the patient on how to improve the quality. Maybe it's just that you want to zoom in a bit more, or you want to move closer to the window to get better lighting. So it provides this real-time guidance until they get a sufficiently high-quality image. So yeah, the TrueImage algorithm is designed to be run on the patient-facing side, right? So the patient could be taking a photo of their skin, and then they want to use that to send it to their dermatologist for their televisit, right? And the algorithm basically would decide if the photo is sufficiently good quality. If it's sufficiently good quality, then that's fine. The image goes through as normal. If it's not good quality, right, the algorithm would decide how to give a recommendation to the patient: how do you actually improve the quality of your photo? In this case, maybe it assesses, OK, you need to move to brighter lighting, right? And the patient would retake that, and if it's good enough, then it's passed through by the algorithm. Is the setup and application clear to people? OK, so a little bit more about under the-- oh, gosh, maybe I'll just jump into how well does this work? So we actually conducted a prospective study of the algorithm here at Stanford. A prospective study means it's actually a real-time study where we recruit the patients that use these tools, so it's almost like a clinical trial. So this is done in the dermatology clinics that Stanford operates. And we tested this on about 100 patients, right? And it was actually quite effective. So by using this algorithm, patients were able to basically filter out about 80% of the poor-quality photos, i.e., photos that they would otherwise have sent to the clinician but that would have been useless to the clinician because they are not sufficiently high quality to actually make a meaningful, informed diagnosis. It's also nice that this is actually very fast. So on average, it takes less than a minute for a patient to generate a high-quality image using the TrueImage algorithm. And this is an example of the kinds of improvements that you see here. So maybe this is an initial image that someone would actually take and send to their doctors for these telehealth applications, or for these telehealth visits. (A toy sketch of the kind of low-level quality checks involved is shown below.)
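Here is that toy version of such quality checks: a blur heuristic (variance of the Laplacian) and a simple brightness check. The thresholds and messages are my own placeholder assumptions; the real TrueImage pipeline first segments out the skin region and is trained against dermatologist annotations rather than hand-set cutoffs.

```python
import numpy as np
from scipy.ndimage import laplace

def quality_feedback(gray_image, blur_thresh=50.0, dark_thresh=60.0, bright_thresh=200.0):
    """gray_image: 2-D numpy array of pixel intensities in [0, 255]."""
    issues = []

    # Low variance of the Laplacian is a common heuristic for blur.
    if laplace(gray_image.astype(float)).var() < blur_thresh:
        issues.append("Image looks blurry -- hold the camera steady or refocus.")

    mean_intensity = gray_image.mean()
    if mean_intensity < dark_thresh:
        issues.append("Image is too dark -- move closer to a window or turn on a light.")
    elif mean_intensity > bright_thresh:
        issues.append("Image is overexposed -- reduce direct lighting or glare.")

    return issues or ["Image quality looks OK."]

# Example on a synthetic dark, featureless image: both checks fire.
print(quality_feedback(np.full((256, 256), 30.0)))
```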
And that TrueImage algorithm would identify that this image has the following issues with the blur and the lighting. It makes a recommendation. And then after using the algorithm, right, the patient actually have a much better image that now would facilitate their televisit. So this is actually being now-- it was tested at Stanford in dermatology settings, and it's also being integrated now into the Stanford medical records. Cool. Any questions about this before we move on? Yes. Beyond the dermatology, I know that dermatology is like, probably, a field that is pretty-- this is pretty useful because you can make a judgment just based on [INAUDIBLE].. But beyond that, are there any other readily convertible fields in medicine? I feel like oftentimes you need-- your doctor is looking at your throat or something that's less applicable? Yeah, so it's a good question. So I think dermatology is probably the most immediate application of this, right? There are also a few others of more like primary care settings, where often the doctors actually get more information just by inspecting what the patients look like and how they behave. So this is where it could also be useful. For other settings, where it actually requires taking a biopsy of the patient, for more pathology or cancer diagnosis beyond skin cancer, then the patient will still have to come in to the visit. Yes. Is it appropriate any sort of domain knowledge into identifying what you're actually trying to take an image of? Yeah, yes, it's a good question. So what the algorithm does is it actually does incorporate quite a bit of domain knowledge. So one of the developers on our team for doing this-- she's actually one of my postdoc-- was a dermatologist. So she sees patients one day a week, and we actually did our initial piloting of this in her dermatology clinic. So for example, the kinds of domain knowledge that comes in will be, first, we actually take the image. Then we first segment out just the skin part of the image because if you take an image, there could be all sorts of background, maybe our furniture and chairs. And we don't really care about the quality of those backgrounds. So if you segment out to the relevant human skin. And we identify-- after we segment out the human skin, right, then we also try to map onto what we think are the likely issues with the image, if there are issues. So oftentimes, the things that come in would be like is the image-- does it have enough lighting, right? So the kinds of lighting that's required for a dermatologist is actually quite different from the lighting that's from the standard photos. So that's actually one place where often people make mistakes, right? And another place where the expert knowledge is very useful is in terms of how much zoom is needed. So sometimes, if people zoom in too much, that's actually not so good because then they lose the context of the surrounding-- if you just zoom in only onto your lesion, then you lose the context of the neighboring parts of the skin. Or if it's too zoomed out, you also don't have enough information, right? So there's an optimal zoom, which we get basically actually by-- so the way that we train two images is actually by having dermatologists generating annotations about what is the optimal zoom from a database of images. Cool. Yeah, good questions. So another example that's very quickly mentioned is we've also been developing these algorithms for using machine learning to improve clinical trials. 
Whereas, the clinical trials are the most expensive part of medicine. Each trial could actually cost hundreds of millions of dollars to run, and really the bottleneck of this entire biomedical translation process. So one place where we found where machine learning can be very useful is in helping these clinical trials or helping the drug companies to decide what are a good set of patients to enroll in a given clinical trial because you want the patients to be that they're diverse. So they really cover diverse populations. And also that the drugs are likely to be safe and effective for that set of patients, right? So this is basically a tool that we developed called Trial Pathfinder and for helping to guide the designs of the clinical trials, specifically the designs of which cohorts of patients are eligible to participate in the clinical trial. And this is being piloted now by some of our collaborators and partners at Roche Genentech, which is the largest biopharma company. And if you're interested, the more details are described in this paper. Good. So now that we've talked about a few examples, right, of where machine learning can be used in health care settings and where I think it's having a substantial impact, I would also like to discuss some of the challenges and opportunities that arise when we actually think about deploying machine learning in practice. In 229 we talk a lot about actually how to build these algorithms, right? And there's also a lot of interesting challenges after we build them. Think about how do we actually deploy and use them in practice. So just to set the stage, I want to give you a concrete example, which is like a little detective story. So here's the story, right? I mentioned these dermatology AI applications. So dermatology is actually one area where there's been the most intense interest and investment in developing AI algorithms, precisely because the data there is relatively easy to collect. And oftentimes, these algorithms will work as follows, where you take your photo maybe of a lesion. You can take your phone and take that photo. And then behind the scene here there will be some sort of-- often a convolutional neural network that looks through this photo and try to classify is this likely to be cancer or not. So in this case, it actually predicts that it's likely to be melanoma, so it's skin cancer. And the recommendation is that the patient, the user, should go visit the dermatologist as soon as possible. OK, so the reason why this is useful is that there are actually several millions of people every year who have skin cancer but are not diagnosed until it's too late. And with skin cancer, if you diagnose it early on, then it's actually very treatable. But if it's too late, then it's deadly, right? And potentially, many of these people they actually could have made earlier diagnosis because many of them have access to be able to take these photos. So that's the reason why there's a lot of interest now both by academic groups and also by commercial companies, like Google, in pushing out this kind of AI for dermatology applications. So of course-- yes, go ahead. One question, what's the target going to do [INAUDIBLE]?? This is like the ordinary patient, or it's actually the doctor because I mean if there's something growing on your skin, [INAUDIBLE] it's dangerous to actually show and not do something [INAUDIBLE].. So what is your target base? How do you take care of the region? Yes, good question. 
So there are algorithms that people are putting out that are consumer-facing. There are also algorithms that are more clinician-facing. So most of these ones here are actually more consumer-facing. Oh. And the reason is that, to actually make an appointment to see a dermatologist could be like a three-month or six-month wait time, yeah. Whereas maybe people don't want to make a visit every time they see something, because they don't know if it's likely to be serious or not. So it's basically for those kinds of applications that a lot of these consumer-facing algorithms are being put out. OK, thank you. Yes. Just one question. What's the [INAUDIBLE]? Is that overall accuracy or just focusing-- because we don't want people that actually has the cancer signal, but because of that has like [INAUDIBLE]?? Yeah. So that will be-- what if the absence of [INAUDIBLE]?? Is that possible that is it is a cancer [INAUDIBLE] maybe [INAUDIBLE]? Yes, yes, it's a good question. So here I'm showing you the AUC of these algorithms from their original publications and papers. And in addition to AUC, people also care quite a bit about the sensitivity and specificity, which I'll also mention in a bit. And in particular, I think the sensitivity is probably the important thing here. Sensitivity here means that, if the patient actually has skin cancer, how often would the algorithm say that they actually do have skin cancer? So that's actually really the important part. If a patient doesn't have skin cancer and the algorithm says you have skin cancer, that's not great. But it's actually not too bad, because maybe they'll get it checked, and they'll say it's OK. But if you actually miss the diagnosis, then that can be potentially more damaging to their health. OK, so given that there's a lot of interest in these algorithms, certainly we're interested in thinking about how to potentially use them and deploy them here at Stanford, right? So we actually took three of these state-of-the-art dermatology AI models. They're all solving this task of, given a photo, predict whether it's malignant or benign, skin cancer or healthy. And we tried them out here at Stanford. So if you look at the original algorithms, they have very good performance. So the AUC is very high. However, when we tested them at Stanford on real Stanford patients, the performance certainly dropped off quite a bit, right? It's much worse. The AUC dropped from around 0.93 to about 0.6. So that's the setting of this little detective story. So what we want to understand is, why did this happen? Why did these algorithms perform so poorly on Stanford patients? Because we really need to understand that if we really want to be able to use this in a responsible way in practice. And just to be clear here, these are actually just images from real Stanford patients, right? There's no adversarial perturbations or attacks done on top of these images. So before I tell you what we found with this, would people actually like to guess? What do you think are some potential reasons why the algorithms' performance dropped off so much when they're applied on the real patients? Any ideas? [INAUDIBLE]? So when it was on the in-patients it was 0.93 in [INAUDIBLE] and the Stanford version, it was 0.6. But is that Stanford patients or the real patients [INAUDIBLE] that was taken [INAUDIBLE]. So 0.93 was the performance of these models on their original test data. So the original test data also came from real patients that these companies or these groups have collected.
But 0.6 is the performance of these algorithms when you apply them to the Stanford patients. OK, is there a timing difference? Was it best to be [INAUDIBLE] taken [INAUDIBLE] people in [INAUDIBLE] different time period? So that could be one possible factor. I guess there are some differences, like temporal differences in data sets. It turns out-- yeah, go ahead. I was just going to guess there may be a difference in the age distribution. If that happened, Stanford patients will be more [INAUDIBLE] college students in [INAUDIBLE] maybe more concentrated there [INAUDIBLE] that you have in [INAUDIBLE]. So that's also a good idea. Maybe there's some age differences between the original test patients and the Stanford test patients. Yes. I was just going to say [INAUDIBLE] patients who are in California. People get more sun, so they [INAUDIBLE] the types of skin [INAUDIBLE] have different [INAUDIBLE] people, so someone might [INAUDIBLE] in California versus [INAUDIBLE].. OK, so that's also a good idea. So maybe there are some changes in the location which drive different distributions of diseases, skin diseases that are more common here. These are good ideas. Yeah, any other suggestions? Yeah? I don't know whether that's the case, but maybe the quality of the image would matter? So that's also a good idea. So maybe there are some differences in what kind of cameras were used, or in the quality of the images, across the original test data and the data here. So these are all good ideas, but there's actually a couple of really big factors that haven't been talked about yet. People want to say more? Or assuming that they have their training processes and separate their train, validate, test sets, since they're not doing anything weird, like, actually seeing test data, is that a factor? So maybe there's some question about whether the models are being overfit or not to the original test data. OK, yeah, so these are all excellent suggestions. So we actually did sort of a systematic analysis, an audit, to figure out what happened here. And the goal of this audit, mathematically, is that we really want to explain this drop-off in performance. So we see this drop-off from 0.93 to 0.6. We want to understand what are the factors that statistically explain this difference in the model's behavior. So it actually turns out one of the biggest single factors is actually label mistakes, right? So what does that mean? So if you look at the original test data, the data that was used to evaluate these original algorithms in their initial publications, it turned out that the original test data had a lot of mistakes in their annotations. So what happened is that the test data were generated by having these dermatology images, and then they would have dermatologists visually look at the image and say, is this benign or is this malignant, because that's relatively easy to collect. However, even having experienced dermatologists visually looking at images can also lead to a lot of mistakes, right? The actual ground truth comes from taking a biopsy of the patient and then doing a pathology test to say whether this patient has skin cancer or not. So the ground truth from the biopsy is basically what the labels we have here at Stanford are based on. But the labels from the original test data actually had a lot of noise in them because they were just coming from these visual inspections. And this actually explains quite a bit of the drop-off in the model's performance.
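One way to see how much label mistakes alone can matter: take a fixed, reasonably good scoring model and measure its AUC against clean labels versus labels where some fraction have been flipped. The score distributions and the 20% flip rate below are made up purely to illustrate the effect; they are not numbers from the actual audit.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20000
y_true = rng.integers(0, 2, size=n)                  # biopsy-style ground truth
scores = rng.normal(loc=2.0 * y_true, scale=1.0)     # a reasonably good model's scores

print("AUC against clean labels:", round(roc_auc_score(y_true, scores), 3))

# Now suppose 20% of the test annotations are wrong (e.g. visual labels
# that disagree with biopsy). The model hasn't changed, but measured AUC drops.
flip = rng.random(n) < 0.20
y_noisy = np.where(flip, 1 - y_true, y_true)
print("AUC against noisy labels:", round(roc_auc_score(y_noisy, scores), 3))
```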
And this is maybe not the first thing that comes to mind. If you think about it-- somebody built a test data set and evaluated on it-- oftentimes, in machine learning, we just assume that the test data should be pretty clean, should be good. But in practice, the quality of the labels themselves in the test data can often be highly variable. It depends on the real-world application, right? So the first question we should always ask is, how good is the quality of the test labels? How good is the quality of the test data? So a second big factor, which people mentioned, is that there is actually a distribution shift in the different types of diseases, right? So the original test data all had relatively common skin diseases, again, because those are relatively easy to collect. The data we have here at Stanford had both common and also less common diseases, because all sorts of people come to Stanford to get treatment. And the algorithms perform worse on the less common diseases. And because of this distribution shift, it also explains some of the drop-off in the models' performance. So the third factor is that it actually turned out that these algorithms had significantly worse performance when applied to images from darker-skinned patients. So specifically, if you look at the actual sensitivity of the model, which, as we said, is what we really care about-- if a patient has skin cancer, how likely is it to find that skin cancer-- the sensitivity is actually much lower when the algorithms are applied to images that come from dark-skinned patients. And when we dug deeper into this, it turns out that the original training and test data sets had very few, and in some cases, zero images that come from darker-skinned individuals. And now, I think the overall takeaway here is that, oftentimes, when we look at the application of machine learning in the real world, in practice, it's very difficult to interpret the performance of the model. So if someone tells you their AUC, it's almost meaningless unless you really know the context of what data was used to evaluate that model. So here I talked about the dermatology settings, but we actually did similar kinds of audits of all of the medical AI algorithms that are approved by the FDA. So as of last year, there were over 100 medical AI systems that were approved by the FDA so that they can be used on patients. So each symbol here corresponds to one of these algorithms. And here I'm just stratifying them by which body part they apply to. So some of them apply to the chest or to the heart, et cetera. So there's a bunch of interesting findings we have from auditing these algorithms. I just want to highlight maybe a couple of the salient points just for today. So the most interesting thing is to just look at the color. So I colored each of these algorithms blue if the algorithm actually had reported evaluation performance across multiple locations, maybe from multiple hospitals. Otherwise, it's colored gray. So it's already quite clear that for most of these algorithms, over 90 out of the 130, we couldn't find evaluation performance across multiple locations. We only see how it works at one site. OK, in addition to that, only 4 of these 130 devices were tested using a prospective study. By prospective, I mean more like a human-in-the-loop study. So they have the algorithm, and they tested how it works in a real setting with maybe doctors or with patients. So the remaining ones were tested on retrospective data.
And that means that somebody collected a benchmark data set beforehand. And then they applied the algorithm to that benchmark data set. So the retrospective benchmark data set could actually have come from the same hospital where the algorithm was being trained or developed, right? So as we saw from the previous example in the dermatology setting, so if you only have data from one location that's collected retrospectively, that can potentially mask substantial limitations or biases in these models. OK, any questions? Yeah. Pretty surprising. What is the process that the AI [INAUDIBLE]?? That, it seems like this is pretty important [INAUDIBLE].. Yeah, it's a good question. This is actually something that we are working together with the FDA on, right? So the FDA has a quite rigorous process for evaluating drugs. For example, for the COVID vaccine to be approved by the FDA, they have to run a very large-scale, randomized clinical trial to show that the drugs are safe and effective. The evaluation standards for medical AI algorithms by the FDA is actually quite different compared to drugs. So for example, these algorithms, they do not have to go through a prospective clinical trial. It doesn't have to be randomized clinical trial. So that's why many of them were tested on these just retrospectively collected benchmark data sets. So one of the interesting challenge going forward is to figure out what's the proper way to evaluate and to monitor these algorithms in practice? OK, so now given that these challenges that we saw here, so I just want to quickly go through a few lessons that we've learned or recommendations that we have for how do we improve the reliability or trustworthiness of these AIs, especially as they're being deployed in health care or biomedical context, now based on some of our experiences from both building, deploying, and also evaluating these models. So I'll say a little bit, a couple slides about each of these points. So the first one is that I think, as we saw, that there needs to be a much greater amount of transparency about what data is actually used to benchmark or test each of these algorithms, right? For example, just to give you a concrete visualization of this, we actually did a survey analysis of what are all the different types of data that are used to benchmark dermatology AI models? So each square here corresponds to one of these dermatology AI models, where people have published a paper on this. And so the squares are the models for the papers. So the circles are the data sets. So the size of the circle corresponds to how big that data set is. And there are two colors, right? So the red circles are basically the private data sets, which means that these are data sets that someone-- could be a company or academic group-- has curated. But they're private so that nobody really has access to it. And then the blue ones, circles, correspond to the public data sets. These are the openly available benchmark data sets. So for example, one here is a relatively large public data set. It's often used by many of these algorithms for benchmarking. And what's quite interesting is that there's actually a large number of these algorithms, maybe about half of them, that were mostly tested or entirely tested on relatively small, actually, private data sets. Basically, all of this was on the top right. 
All right, and those are the ones where it's actually could be potentially problematic because then it's very hard for people to audit and to understand what's going on in the data and what's going on in these algorithms. So we've been very keen to try to release much more publicly available data sets. So I mentioned that we built this cardiology video algorithm. And actually, as a part of that paper, we created and released, I think, one of maybe the largest publicly available medical video data sets. So it has basically all of the videos that we trained and tested on. So we released all of those. There's over 10,000 videos along with the patient's information and annotations. So this is all publicly available so that people can use this for additional research. And I think this is still maybe one of the largest public data set of medical videos. So in addition to understanding what data goes into developing the models, so we're also very interested in thinking about more quantitative ways to understand how do different types of data contribute to the performance or to biases of the models? So what does that mean? So from a machine learning or statistical perspective, oftentimes, you have your training data. Here you have these different-- each sample could be one of these sources of training data, data from a particular hospital, right? And then you have your favorite learning algorithm. So we're model agnostic. Or it could be a deep learning model or could be XGBoost, random forest. And you have whatever performance metric that you care about in deployment. It could be accuracy or some sort of loss, F1 score. And let's say if your model actually gets 80% accuracy, so ideally, what I want to do is to be able to partition that 80% accuracy back to my individual data sources of my training set, right? So I want to say that, oh, how much each of the data points for each data sources contribute to the model's performance? And the reason why that's useful is that, if the model actually makes mistakes in deployment or if it exhibits biases in deployment, as we saw, then we also want to be able to say very quantitatively what specific training image or training source actually are responsible for introducing those biases or mistakes in the model's behavior. So if we can actually do this end to end from the data to the model and then going back to the data, so then this will make the whole system much more accountable and more transparent. OK, so in a bunch of works with my students, we have actually developed approaches for doing this, for exactly trying to do this data evaluation. It's based on this ideas we're calling data Shapley scores. So the idea here is that we're able to compute a score for each training point. It could be a training image. And the score would actually quantify how much does that image contribute to the model's behavior either positively or negatively in deployment? So for example, if we use our dermatology as our running example, so the training set could be quite heterogeneous. It could be quite noisy, as we saw. And the models trained on this could have relatively poor performance when it's deployed in clinical settings. So the data Shapley score that we proposed actually would just be like a number, a score for each of my training image. The score could be negative if that image is somehow not informative or contains some misannotations or introduce some sort of outliers or bias to the model. So the model, if it's trained on that image, actually does worse. 
And the positive scores indicate that these are the images, the training points, that are informative, such that when the model is trained on those images, it actually learns and does better in deployment. So they actually capture some informative signal. So the Shapley scores can then be computed relatively efficiently for individual data instances. And this is actually quite useful also for improving the model's reliability, because one thing that we can do now is to weight my training data by the Shapley scores. So a simple idea, after we compute the Shapley scores-- a simple experiment that we can do is to just take the original model and just retrain the model on the same data set, except now I'm weighting each data point by its Shapley score. So this has the effect of encouraging the model to pay more attention to data points that have high Shapley scores, which are, again, the data points that we believe are more informative or have better annotations. And doing this actually can substantially improve how well the model works in deployment settings. And the benefit of this approach is that it's quite data efficient, because we still have the same data set. We didn't have to go out and collect a new data set. We have the same data set and actually the same model architecture. The only difference now is that the same model architecture is now being trained on a weighted version of the data rather than with the vanilla kind of training. OK, so those are two, I think, quite complementary ideas. One is that we want to be much more transparent about where the data is coming from. And with the data Shapley scores, this helps us to understand how the different types of data really quantitatively contribute to the model's behavior. And by understanding that contribution, this also gives us ways to quickly improve the model's performance. So the third lesson we learned, actually quite important, is that it's actually really useful to try to really understand why the model makes specific mistakes. Because, as a general message, if we want to ensure that the AI systems that we deploy are safe or responsible, it's actually really the mistakes that are the most revealing. By looking at the mistakes, we can actually understand what are the potential blind spots or the weaknesses or the limitations of the model. So we developed a bunch of algorithms that try to basically provide more high-level natural language explanations for why the model makes specific mistakes, as a way to teach us about blind spots of these machine learning models. So here's one example. So say you actually put in this image, where the true label is zebra. Some of these algorithms will make a mistake and predict this to be a crocodile rather than a zebra. And in this case, we'd like to understand why that happened. And so the explanation we provide using this tool that we call conceptual explanation of mistakes-- it actually automatically generates a reason for why the model made the mistake in this context. So in this case, it's because there's actually too much water in the image. So in other words, if the same image had less water and maybe more field, then the model could have gotten the correct prediction of zebra rather than crocodile. So you can view this conceptual-- this mistake explainer as like a wrapper around different computer vision AI systems.
So it takes one of these AI systems and looks at the mistakes the models make. And then it provides this high-level natural language explanation for why did the model make that mistake on that input. So this is quite useful because then we can apply our AI explainer, this mistake explainer to also try to explain why did some of these dermatology models make mistakes on these different users, on these different patient images. So here are actually four example inputs where the original dermatology AI classifier made wrong predictions. So the correct diagnosis, the correct label is written on top, and what model's predicted is written on the second line. And in each of these examples our mistake explainer automatically provides our reasoning of why the model actually made that mistake. So for example, in the first example, so we learned that the reason why this model actually made the wrong prediction here is actually directly because of the skin tone. So it's because of the darkness of the skin tone. So in other words, if the skin actually had been lighter, all else being equal, then the model would have actually gotten the correct prediction. In the second example, the explainer learned that it's really because of the blur in the image that led the model to make that mistake. And the same image had been sharper, the explainer learned that then the model would have actually gotten the correct prediction, which is on top. So in the third example it's because of the zoom. So it turns out that it's actually too zoomed out. And that's really the reason why the classifier made those mistakes. And the fourth example is because there's too much hair in the image. And just by actually understanding why the models made these mistakes in each of these specific instances, this actually gives us quite a lot of insights into potential limitations and blindspots of the model. Here, we really learned that potentially it doesn't really work well on dark skins. It really needs to have pretty crisp images. It can't be blurry, and there's a certain level of zoom that it needs in order to make these diagnosis. And also, if there's too much hair in the image, then the models doesn't really work well. And these insights are actually pretty actionable, right? For example, you can then take these insights and then as a guideline to help the users to improve their image quality. So maybe you actually tell the human users like we did with TrueImage, you want to zoom in more, or you want to take sharper images. It could also give us insights on if we want to collect additional data, what additional data should we collect in order to improve the model's behavior across these different-- and improve their weaknesses. Maybe we want to collect more diverse images across different skin tones. We also want to collect more training data with more hair in it. So the last point, what also ties all of this together, is that we really recommend that we need to have much more human loop analysis and testing of these AI models because if you think about how machine learning is often developed, I think it's often optimizing for the wrong objectives because most of the time you have a data set that's fixed, a static data set. An algorithm is optimized with SGD try to optimize for its performance on that static data set. But that's actually not what you really care about. 
What you really care about is actually how well the algorithm works in the real-world application, which oftentimes is not really in isolation but with some team of human users. Especially in the health care setting, most of these algorithms are not just working in isolation. There's often some clinician who takes the recommendation from the algorithm and makes the final decisions. So in the ideal setting, what you would really like to do is maybe use SGD to optimize the model's performance directly for its final usage, rather than for its accuracy on the static benchmark data set. But that's actually challenging to do. So to address this challenge, we developed this platform called Gradio, which actually makes it very easy to collect real-time feedback from users at all sorts of different stages of model development. So basically, with one line of Python code, we can create a wrapper around any machine learning model. And this wrapper also creates an interactive user interface, which can then be shared, via a URL, with any user. So if they open up that URL, then they can just interact with the model in real time in their browser without having to download any data or having to download or write any code. And by doing this, it makes it very easy even for non-computer scientists to be able to interactively engage with the model, right, to test it out and provide feedback of the sort that we discussed before. OK. So Gradio was recently acquired by Hugging Face, but it's still public. And it's also being used by basically all of the larger major tech companies and many thousands of machine learning teams. It's also what we use to power some of our own deployments here at Stanford. So just to summarize, I think these are the four main key lessons that we learned from our experiences in applying, building, deploying, and auditing these models. And I talked a lot here about applications in health care settings. But I think many of these lessons and applications also apply more broadly in other domains where machine learning is being used. And all of the papers and the algorithms I mentioned are available on my website. And here, again, are the different references. And I also want to thank the students that led all of these works. So maybe we'll pause here and then see if people have any more questions. [INAUDIBLE] in like data capture, how does that work? You like [INAUDIBLE]. Yeah, yeah, good. So the high-level idea is that we want to estimate the impact of individual data points. And we do this by basically adding and removing this data point across different contexts. So in each context, I basically have a different subset of my data, and I say, OK, what's the impact of adding this particular data point to that subset? If I add this point, does that improve my model's performance, after adding it compared to before adding it? And we do this across a lot of different scenarios. Each scenario corresponds to a different random subset of my training data. And the reason why we do this across many scenarios is to really capture potentially different interactions between different data points. And then we finally aggregate across these different scenarios to get one single score, a data Shapley score, for each individual training point.
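Here is a brute-force Monte Carlo sketch of the procedure just described: average each point's marginal contribution to validation accuracy over many random orderings of the training data. Real data Shapley implementations add truncation and other approximations precisely to avoid this many retrainings; the logistic regression model and tiny synthetic data set below are placeholder assumptions just to keep the example runnable.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def performance(train_idx, X, y, X_val, y_val):
    # Validation accuracy of a model trained on the given subset of points.
    if len(train_idx) < 2 or len(set(y[train_idx])) < 2:
        return 0.5                                    # not enough data to fit a model
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    return model.score(X_val, y_val)

def monte_carlo_data_shapley(X, y, X_val, y_val, num_permutations=20, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    values = np.zeros(n)
    empty = np.array([], dtype=int)
    for _ in range(num_permutations):
        order = rng.permutation(n)
        prev_score = performance(empty, X, y, X_val, y_val)
        for k in range(n):
            score = performance(order[: k + 1], X, y, X_val, y_val)
            values[order[k]] += score - prev_score    # marginal contribution of this point
            prev_score = score
    return values / num_permutations

X, y = make_classification(n_samples=60, n_features=5, random_state=0)
shapley = monte_carlo_data_shapley(X[:40], y[:40], X[40:], y[40:])
print("most helpful points:", np.argsort(shapley)[-3:])
print("most harmful points:", np.argsort(shapley)[:3])
```

This is exactly the "add the point in many different contexts and see whether performance improves" idea from the answer above; the efficiency tricks mentioned next replace the inner retraining loop.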
In essence, you have to retrain the model multiple times with different subsets of the data and then evaluate them on your test to see how they impact the [INAUDIBLE].. Yeah, so in principle, if you want to do this exactly, then you need to retrain the model many times. So we actually came up with a bunch of different more efficient algorithms that enables us to estimate the Shapley scores without having to retrain the models. So for example, in some of the applications we can actually come up with analytic mathematical formulas to either exactly or approximately compute the Shapley values without having to retrain the models. Yes. hi. Is this related to the cooperative inquiry concept of Shapley value? That's right. Yeah, so the original ideas for these kind of Shapley values actually came from economics from game theory, where they're the people are interested in ideas of how to allocate credits among different users or among different participants in the game. So imagine if all of us, if we do a course project and we get some bonus. And how do you split that bonus among each of the individual participants so that people don't complain? The people are happy with their bonus. So it's developed in that context of more like game theory credit allocation. And we extended that idea to the data. So now, instead of having individual workers or participants, now everybody brings their own data sets. So data is what works together to train the machine learning model. Basically, the performance now is basically how well is the bonus. It's how well does the model perform? So then we can see, so how do we allocate or attribute the performance of model across individual data sources or individual data sets. Cool. Any other questions? Yeah. It's a cool explanation of mistakes that the labels that it tells you, are those preprovided by the team? Or are those generated actually by the model? Yeah, it's a good question. So what we have is basically we have a library of concepts, like what we call it a concept bank. So basically, we can look at all the sorts of common visual concepts, like the concept of water or concept of stripes or the concept of zoom and color, right? So those are all big concepts. And we create a pretty large library of hundreds of these concepts. Then that's basically the input into the explainer, as the explainer try to see, so what of these concepts would actually be able to explain the model's mistakes? In cases where the concept that we provide are not complete but maybe there's some texture information that actually leads to a mistake but it's not in our concept bank, we have some additional ways to try to automatically learn concepts in a more unsupervised way directly from the data. But most of the concepts we use are actually from these large concept banks that we can just learn ahead of time. Okay. Great. Yeah, then we can wrap up here.
AI_LLM_Stanford_CS229
Stanford_CS330_I_Unsupervised_Pretraining_for_Fewshot_Learning_l_2022_I_Lecture_8.txt
I'm going to be revisiting some unsupervised pre-training stuff today. This is reconstruction-based methods as opposed to the contrastive methods that we looked at on Monday. All right, so the plan for today, we're going to do a little recap on what Chelsea talked about on Monday. So we're going to talk about the unsupervised learning problem setup and just go over contrastive learning briefly. And then the meat of the lecture is going to be about reconstruction-based unsupervised pre-training. So the main difference here is that we're not learning by comparing multiple examples. We're sort of learning one example at a time by just trying to compute a representation and then reconstruct the example. And we're going to talk about a couple of different instantiations of this idea. Some of them, you've probably heard of before, some of these big, fancy, hopefully not scary models. And a lot of this content is going to be stuff that homework three is on, so it behooves you to pay attention. And you're the people who came, so you're already ahead of the curve. And if you pay attention, you'll be even farther ahead of the curve. So hopefully, by the end of the lecture, you'll have some familiarity with some of these now pretty widely used methods for unsupervised pre-training and some of the ways that we can then fine-tune these models once we've pre-trained them. And ideally, be ready to breeze through homework three. All right, so just to start this recap, so as Chelsea said on Monday, we're doing unsupervised learning. We have some large unlabeled data set. Ideally, a diverse unlabeled data set. We do this first gray arrow which is unsupervised pre-training, and this gives us our pre-trained model. And then once we have our pre-trained model, we usually use some often small data set for fine-tuning to specialize this model to a particular task. And the goal is to get sort of a good predictive model for this particular task. And after that, we talked quite a bit about contrastive learning on Monday. So the basic idea here is that examples that are similar in some sense should have similar representations. And so this could be examples that have the same class label. This could be examples that just come from nearby regions of an image if we don't have class labels. It could be augmented versions of the same example by flipping or cropping or adding noise. Or if we have sort of temporally structured data like videos, we can take video frames that come near each other in time to get sort of positive pairs. Another thing that we talked about is it's not good enough to just take things that are similar and push their representations together because then we end up with this degeneracy where everything gets the same representation. So we actually need to contrast things, too, and that's how we get this wonderful name, contrastive learning. And sort of the most straightforward implementation of this is this triplet loss. So here we have a triplet because we have kind of our anchor example, and then we have a positive example which we say is sort of a similar one, and then the negative example which is different. We push together the similar ones, and we pull apart the negative ones. And then you can generalize this to have more than one negative. So this is the sort of N-way classification loss which is used in the SimCLR paper that you might remember, which is also known as the NT-Xent or Normalized Temperature-scaled Cross Entropy Loss, which is really an awesome name. 
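Just to pin down what that SimCLR-style loss looks like, here is a minimal PyTorch-flavored sketch for a single anchor; the function name, shapes, and temperature value are illustrative assumptions, not the exact SimCLR implementation.

import torch
import torch.nn.functional as F

def nt_xent_loss(anchor, positive, negatives, temperature=0.1):
    # anchor, positive: (d,) embeddings; negatives: (N, d) embeddings
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_sim = (anchor * positive).sum() / temperature   # similarity to the positive example
    neg_sim = negatives @ anchor / temperature          # similarity to each negative example

    # The positive similarity appears in both the numerator and the denominator of the
    # softmax, so this is just cross entropy with the positive treated as the correct class.
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))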
One minor correction from Monday is there were some questions about whether or not the similarity score for the positive pair shows up in the numerator and the denominator. And it does, so in the SimCLR loss we have the similarity score for the positive pair in both the numerator and the denominator. And the reason that's useful is since we have this positive score in the denominator and the numerator, we know that this fraction is going to be between 0 and 1. And so we can sort of interpret this as a classification probability. And so really, we can kind of just think about this contrastive objective as like training a classifier with our representation. So it makes it a little easier to think about the loss and what it's doing. All right, how do we feel? Yeah, I love the enthusiasm. Keep bringing the energy, folks. OK, so now let's talk about some reconstruction-based unsupervised objectives. So again, the main difference between these methods and the contrastive methods is that we're not comparing and contrasting multiple examples. We're sort of doing this one example at a time, which makes things a little easier. And the simple intuition for these methods is that a good representation of our data is one that lets us reconstruct it. So if we have some image of this lovely little dog, we might pass it through our encoder to get a representation. And ideally, if we have a good representation, we should be able to decode this representation back into our original input. And ideally, with a decoder that's not absolutely massive. It should be sort of a reasonable size decoder. And sort of the bonus of this type of setup is that we don't have to worry about things like sampling negatives or using large batch sizes because again, this objective is purely defined on just a single training example. And so we don't have this sort of more complicated relationship between multiple examples in a batch to worry about. OK, so that gives you a little bit of a flavor of what we're talking about. We can make it a little more concrete. So we have this intuition that a good representation is what lets us reconstruct the input. So how do we actually train the thing? Well, we really just define some loss function that tells us how close our reconstruction is to the original input. So for images, we might use just L2 or squared L2 distance. And then we train this whole thing end to end. So we train the encoder and the decoder end to end. And hopefully, by the end, our encoder gives us representations that are useful. So that's kind of a neat story, but I'm curious if people have any creeping suspicions on what can go wrong in this situation. If you've ever trained a VAE, you might know the answer. Everything-- it is included under everything. Yes? So [INAUDIBLE] memorize the input? Boom. You saw my slides, didn't you? Very impressive. So yeah. So I mean, one way to think about this is according to this loss function we've written down, would the identity function be a good encoder or decoder? That's not actually totally rhetorical. I'm curious if it would be. So the answer is yes, it would be because if you have the identity function as an encoder, r is just x. And if the decoder is the identity function, x hat is x, so our loss is 0. And that's a great encoder and decoder for this loss, but it's not really useful. So the question is, how do we fix this? And there are a lot of different ways to fix this, but we're going to focus on one common one that people use.
So if r is basically unconstrained, if it's just as high dimensional as our input, then there's no reason the encoder can't just copy the input into the representation, and then the decoder just copies that representation to be its prediction. And this is not really useful. We're not learning anything meaningful. And so the key idea that people use to kind of get around this issue is adding some sort of a bottleneck on our representation. So the most common, maybe most intuitive type is just make it lower dimensional. So we might have an image that's like 100 by 100 pixels, and our representation might only be 64 dimensions. So intuitively, it's much harder to just copy the whole image into this low-dimensional space. And so the hope is that these latent dimensions in our representation will actually be forced to represent some high-level concepts that are more efficient in storing information than just memorizing one pixel at a time. OK, so let's say we train this model now. So we have this encoder that compresses our input into some more compact, in some sense, representation. How do we actually do the few-shot learning part? Well, we'll throw away our decoder, and we can just initialize a prediction head on top of this representation. So really, what we've just learned is a feature extractor of our data. And now we'll just treat this as a normal machine learning problem. We can fit some sort of prediction head to these representations instead of the raw images. This could be an MLP, or it could just be like an SVM or whatever. And we'll train it to make a classification score. So here if we're doing like five-way classification, we just have a classifier now. And the simplest recipe here is we'll freeze the encoder and just fine-tune this prediction head or just fit the prediction head. OK, so this is the basic autoencoder or bottleneck autoencoder setup, which is kind of the-- you can maybe think of like the simplest thing that kind of works. And the pros here are that it is very simple. It's very general. You can kind of do this for pretty much any sort of data because all you need to do is pick this loss, this distance function, which you can usually come up with some sort of distance function for most data modalities. You also have this advantage compared to contrastive learning that we don't need to select these positive and negative pairs. And this is often the sort of most difficult part of the contrastive learning setup. And so it's really nice to get rid of that. But the major downside here, in particular, is that we have to design this bottlenecking mechanism. And it turns out this is really hard to do well. So you can use this sort of low-dimensional bottleneck. In practice, this doesn't really give you that good of representations even if you make it relatively low-dimensional. And you might wonder, well, why is that? And a lot of this is known mostly just empirically by training them. But what you end up with is still sort of representations that mostly memorize the images and don't actually encode any sort of high-level or interpretable concepts that are likely to generalize well beyond just reconstructing this exact image. And you really do kind of end up with representations that just encode the specific details needed to minimize the reconstruction loss but not really anything that is easily adaptable to new tasks. And so what ends up happening a lot of the time is r is more like a hash of your input than a sort of conceptual summary.
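Here is a minimal PyTorch-style sketch of that whole recipe — pre-train a bottlenecked autoencoder with a reconstruction loss, then freeze the encoder and fit a small head. The layer sizes and the stand-in data loader are just illustrative assumptions.

import torch
import torch.nn as nn

# Encoder squeezes 100x100 images into a 64-dimensional bottleneck; decoder maps back.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(100 * 100, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 100 * 100))
recon_loss = nn.MSELoss()  # squared L2 distance between input and reconstruction

unlabeled_loader = [torch.rand(32, 1, 100, 100) for _ in range(10)]  # stand-in for real unlabeled data
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for x in unlabeled_loader:
    x_flat = x.view(x.size(0), -1)
    loss = recon_loss(decoder(encoder(x)), x_flat)  # train encoder and decoder end to end
    opt.zero_grad(); loss.backward(); opt.step()

# Few-shot step: throw away the decoder, freeze the encoder, and fit a small prediction head.
for p in encoder.parameters():
    p.requires_grad = False
head = nn.Linear(64, 5)  # e.g. a 5-way classifier fit on the frozen representations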
And if you think about-- for example, using the gzip representation of an image as its representation for fine-tuning, that's not going to be a very informative representation even though it's going to be smaller than the original image. And so just compressing is not-- at least in this sense of dimensionality is not good enough to actually force our encoder to squeeze out sort of useful reusable features. So how do we encourage the encoder to extract these high-level reusable features? And one strategy that people use is just other types of bottlenecks. So we can bottleneck our representation in many ways. We can use information bottlenecks that essentially add noise to the representation. We can use sparsity bottlenecks that constrain the number of dimensions that can be nonzero. We can also use capacity bottlenecks to force the decoder to be relatively weak so that the representation needs to be sort of easy to decode. But the strategy that actually is most common in practice is to worry less about designing a bottleneck and just make the task a little bit more difficult to force the encoder to learn something a little less trivial. And that's going to take us to our next section on masked autoencoders, which is sort of a class of models that encompasses a lot of the large pre-trained models that we have these days that we use in practice. How are we doing so far? Any questions? Is everyone having fun, though? Yes, very much. [INAUDIBLE] Does it mean you can somehow extract the information about the samples that went into the autoencoder? Just by taking a look at the encoding-- Yeah. So in a sense, you can. So it depends on the exact model and the capacity and the training data. But in the same way that you can-- hash functions are generally difficult to invert. But if they're imperfect, you can invert them. A lot of times, yeah, you can not only just decode a particular representation back into its image, but you can kind of fiddle with these models in a way that lets you extract multiple examples from the training distribution, which can pose sort of a privacy risk. Does that-- great. Yes. So from your perspective for few-shot performance of these models, since the model is kind of just memorizing details from the original image. If you just [INAUDIBLE] Yeah. So I mean, if you have-- I guess the question is, if the representations aren't that great, can we get around that by just making our predictor higher capacity? And we definitely can if we have a lot of data that we're able to fine-tune on. But the idea is in the best-case setting, we're going to learn representations that let us perform well on our downstream tasks with really simple predictors because the simple predictors are the ones that we can fit with only a little bit of data that are very difficult to sort of overfit with. And so if we need to do this sort of thing, that sort of means like our objectives are not working very well because ultimately, it just means we're not going to be able to do few-shot learning as well because we'll need a lot of data to fine-tune. Yeah? [INAUDIBLE] go back to [INAUDIBLE] image [INAUDIBLE] that kind of thing. Yeah. So I mean, you could imagine sort of like a cross-encoder, where instead of reconstructing x, you reconstruct something like x-positive. Yeah, so I don't-- there's not a specific paper I'm thinking of that's extensively evaluated this sort of thing. But sort of colloquially, you can do that.
I'm not sure how it directly compares to stuff like BERT or yeah, any of these models here, but in a way that sort of is what masked autoencoders do. In a sense, they do come up with these positive pairs and then-- just hang on, all right? But if you're still wondering that at the end, come grab me. All right. So we want to go beyond this bottlenecked approach to the autoencoders because designing a good bottleneck that actually forces our encoder to give us good features is really quite tricky. And so the common solution people use now is called masked autoencoders. And the issue that this sort of gets around is like our regular autoencoders are really trying to predict x from like x. And this is sort of a degenerate task. And so we add this bottleneck to try to avoid our degenerate solutions, but maybe the task is just too easy. And the easiest way to solve the issue is not to design a really fancy or really perfect bottleneck but just make the task a little bit harder. And so that's sort of the approach that we're taking with masked autoencoders, where we no longer predict x from x. Instead, we just mask out a little bit of the input, and we try to reconstruct the other part of the input. And so in this sense, going back to your question earlier, we sort of are doing this in that now x and y are not the same. They're sort of like this positive pair that's generated by literally just like partitioning the input into pieces. And this does work really well, so you're just like five years too late. But otherwise, that would have been like a brilliant research idea. So that's sort of the gist of masked autoencoders. And even though this is a relatively small departure from the autoencoders we already saw, it's a really, really powerful framework for representation learning that is pretty widely applicable. OK, so let's go over kind of the general recipe for how masked autoencoders get trained in practice. So again, just like when we're training a regular autoencoder, we need to pick some sort of distance function that tells us how good our reconstructions are, given the original input. So this part is the same. And then for a training batch of examples, for each example, what we're going to do first is we're going to sample a mask using what we're going to define, which is a masking function. So we're going to take that example, and we're going to generate x tilde, which is sort of the masked-out version of that example, and then y, which is like usually just going to be everything else. So typically, x tilde and y are just going to be disjoint subregions of the input. So in the image example, it's literally like partitioning the image. And one part is x tilde. The other part is y. And then we're going to make a prediction with our model, f theta, that gets not x but x tilde as input. And then we'll compute our loss. So our loss is going to be between y, which is the sort of held-out part of the input, and y hat, which is what our model thought the held-out part of our input was. In some cases, the target might just be all of x, not just the held-out part. But this is a relatively minor detail that doesn't make a huge difference in practice. And so these highlighted parts are basically our design choices or our control knobs. And two of them are pretty much the same as regular autoencoders, the model and the loss function. It's really just this masking function that we are now able to define, which gives us sort of one more knob to turn.
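To pin down that recipe, here is a minimal sketch of one training step; mask_fn, model, distance, and optimizer stand in for the design choices just described, so treat this as pseudocode-flavored PyTorch rather than any particular paper's implementation.

def masked_autoencoder_step(batch, mask_fn, model, distance, optimizer):
    total_loss = 0.0
    for x in batch:
        x_tilde, y = mask_fn(x)    # x_tilde: the masked-out input, y: the held-out part
        y_hat = model(x_tilde)     # the model only ever sees the corrupted/partial input
        total_loss = total_loss + distance(y_hat, y)
    total_loss = total_loss / len(batch)
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return float(total_loss)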
And so again, as an example, in the case of an image, we'll split it up into two subregions. And our model might be a convnet or a transformer, which we'll talk about a little later. And our distance function might just be L2 distance. In the case of language or one-dimensional sequences, we might just mask out a couple of the words or the tokens in our sequence. And that's sort of our noisy input. And our targets are just the words that were there or the original data there. Here, often, and pretty much all the time now, the model that we use is a transformer, such as BERT, which, again, we'll talk about soon. And the loss function or the distance function here is usually KL divergence or cross entropy between the model's predicted distribution of what it thinks the missing words were and what they actually were. OK, so to talk about a couple of specific instantiations of this framework, probably the most famous one is BERT, which maybe you've heard about, which is this large transformer model that's pre-trained on a huge amount of text that's really good for fine-tuning on new, usually classification, tasks. And basically, BERT gets as input two sentences, but that's sort of a detail of BERT training. It computes representations of the sentences, and then it predicts basically the missing words in both of those sentences. And it just does this task for many, many TPU hours, and then you get out a good model at the end. So let's look at this in a little more detail. So again, we might have this text input. And here, again, as sort of just a quirk of BERT, we use two sentences instead of just one for training, but it's not a huge-- it's not hugely important. And so we apply our masking function, which gives us our masked-out version of that input. And I've just annotated the time steps in the sequence here, which hopefully will make it a little easier to see what's going on. And so I've randomly sampled three of the time steps to mask out. And I'm going to define my targets, which are just the original words there. And then we stick this entire sequence with the mask tokens through BERT. And that gives us a prediction for every time step in the sequence of what the word is likely to be, a probability distribution over all words of what it thinks the word there is. That probability distribution isn't very interesting, except for the time steps where we masked out the word. Because for all the other time steps, we know what the word is. So we just compute our loss on those masked-out time steps. And this gives us basically three distributions that are actually needed for training, which is the model's distribution for time step two, for time step six, and time step nine. And our loss function here again is going to be KL divergence. And this is going to work out to be the cross entropy between the sort of one-hot distribution that is our actual labels here — so basically, a 1 on Biden and a 0 for everything else — and the distribution that our model predicted for each of those time steps. So what that works out to be is the negative log likelihood of Biden on this distribution for time step two, the negative log likelihood of president for this distribution at time step six, and the negative log likelihood of was at this time step nine. And we just sum those, or really average, often in practice. And this is our loss for training. And then we just optimize this end to end, and we are happy.
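Concretely, computing that loss only on the masked positions usually looks something like this minimal sketch, assuming the model returns per-position logits over the vocabulary; the -100 ignore_index is the standard trick for skipping the unmasked positions.

import torch
import torch.nn.functional as F

def masked_lm_loss(model, masked_ids, original_ids, masked_positions):
    # masked_ids:       (B, T) token IDs after masking/corruption
    # original_ids:     (B, T) the original token IDs (the targets)
    # masked_positions: (B, T) boolean, True where a position was masked or corrupted
    logits = model(masked_ids)           # (B, T, vocab_size) predicted distributions
    targets = original_ids.clone()
    targets[~masked_positions] = -100    # positions we did not mask contribute no loss
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        targets.view(-1),
        ignore_index=-100,               # cross entropy only over the masked positions
    )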
Just a couple of details about the way that BERT does masking, just for completeness: BERT masks out 15% of the time steps at random, just uniformly. And of these 15%, 80% of them get replaced by literally the mask token that I've used here. And the other 20% actually get replaced with just a random word instead of a mask token. I'm curious if you just think for a moment why we might want to use a mixture of mask tokens and random tokens instead of just replacing it with a mask token all the time, if you think about how we're ultimately going to use this model once we've pre-trained it. Yeah? You don't want to constrain your model to only a sentence, is that you've seen [INAUDIBLE] new sequences? Yeah. So you definitely don't want to sort of overfit to the types of sentences that you see during training, but it's sort of a specific type of sentence we're only seeing during-- that we would only be seeing during training that we want to avoid. And that's what I'm trying to get at. So we would be able to correct [INAUDIBLE]? Right. So in other words, we also want to have good representations of words that are not mask tokens, right? So in a sense, during training, if we replace a word with a random word and we ask our model to tell us what the original word was there, now our model, in addition to just filling in mask tokens, is also basically solving this task of for every word in the sequence, assume the word that you see here is not what was actually there in the original text. What do you think the word would have been? And so now we have a model that not only gives us good representations of words that are masked out but every word in the sentence. And that's important because at fine-tuning time we're going to be getting sentences that have no mask tokens, and then we're just going to be shoving like real data into the model. And we don't want to be overfit, like you were saying, to only the data we see in training, which always has mask tokens in it. So that's sort of the motivation for why we sometimes actually just mix in a random word. Yeah? So when predicting the identity of one masked token, it cannot see the identity of the other mask tokens? No. Yeah, and that's for computational reasons. I mean, you could. You could only mask one token at a time, but then you're sort of using only a very small amount of your sequence for training at every time step. And you have to make your batch very big. And the thing that makes that difficult is now you basically need to tune this percentage of the input that you're masking, right? If you don't mask enough, then you're not really like learning very quickly because you're only predicting one word at a time. But if you mask too much, now the task is too hard because you can only see a couple of words. And a lot of blood has been spilled trying to figure out the right way to sort of do that masking. And as a perfect segue to this final point: it is possible that we can do better than just picking random time steps. And two common ways people do this are they mask out longer spans of text at a time, so not just like a single word here and a single word there, and also do what we call Salient Span Masking. So looking for kind of information-dense parts of the sequence or the image and masking those out instead of just like masking out random words like was or on or something. And in certain settings, these can make a huge difference in how good your pre-trained model is. We have three questions.
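As a concrete sketch of that masking recipe — pick roughly 15% of positions, replace most of them with the mask token and the rest with a random word — something like the following would do; mask_id and vocab_size are assumed inputs, and the exact fractions are just the knobs described above.

import torch

def bert_style_masking(token_ids, vocab_size, mask_id, pick_prob=0.15, mask_frac=0.8):
    # token_ids: (B, T) original token IDs. Returns corrupted IDs plus a boolean map of
    # which positions were picked (those are the ones we compute the loss on).
    picked = torch.rand(token_ids.shape) < pick_prob
    corrupted = token_ids.clone()

    use_mask_token = torch.rand(token_ids.shape) < mask_frac
    corrupted[picked & use_mask_token] = mask_id            # most picked positions -> mask token
    random_words = torch.randint(0, vocab_size, token_ids.shape)
    replace_with_random = picked & ~use_mask_token
    corrupted[replace_with_random] = random_words[replace_with_random]  # the rest -> random word
    return corrupted, picked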
I think I saw you and then you and then you. Will there be work on looking at how similar the random tokens are versus-- how similar the random tokens are to the ground truth tokens and how that might affect BERT training outcomes? You would want it to be-- you can make it more difficult by making it [INAUDIBLE] semantically similar but not exactly the same. Yeah, that's a good question. I'm not aware of actually ablating the criterion for how you pick a random token. As far as I know, BERT just samples uniformly from the entire vocabulary. But it'll be interesting to see-- if you pick synonyms, basically, is that harder? Yeah, it's a good question. Someone back here. I'm sorry, [INAUDIBLE] to figure out what the random word was. What the original word was. Yeah. Yes. If you could repeat the question for the recording. Oh, yeah, yeah, yeah, sure. Sure. So the last question was just, what do you actually do when you put in a random word? What are you predicting? And the idea is you replace-- like, for position nine, you replace was with like apple. And then the model has to predict what word originally was there. When you say we're selecting for information-dense spans, is it like, what you're saying is we're looking for something like high entropy, or how do you define that? Yeah, that's a good question. So the question was, when I say information-dense spans, what does that actually mean? There are a lot of heuristics people use, but oftentimes, people will just take another model that has been trained to recognize essentially like dates and names and just pick out all the dates and names, which often end up being high-entropy portions of the text, right? If you say my name is blank, it's going to be an extremely high-entropy distribution because it could just be anything. So essentially that. But it's difficult to actually compute the entropy itself, so we use this proxy which is classifying the type of word it is. Yeah. Why don't we use a random mask for all of this, instead of the masking, just use a random token, maybe? Mm-hmm. That's a good question. I guess I would suspect that using random tokens is a little bit more damaging than mask tokens because for a mask token, the model sort of knows it can ignore that time step. There's no information there. Whereas if you replace everything with a random token, the model sort of now has to solve two tasks. First, it has to figure out which are the time steps that have real tokens and which are the ones that don't have real tokens. And then once it has figured that out, then it has to try to figure out what was the original token for those time steps where they are sort of determined to be false. So I don't know. That's just my one potential guess. But it seems like it could just be harder. But it's a good question. Yeah. [INAUDIBLE] the tokens? So for BERT, we just compute the loss on the masked out tokens because-- So then, what happens to the random tokens? Oh, no, yeah. I'm sorry, including the random tokens. So yeah, the loss is just computed on tokens that are perturbed, so masked or replaced with a random token. But the tokens that are not changed, we don't compute a loss for. So what if BERT tries to change the tokens which weren't supposed to be changed? We don't compute a loss for those tokens at all, so it doesn't matter. But intuitively, what should happen is, since we are replacing some of the tokens with random words, BERT sort of should be acting as if every word might have been replaced with a random word.
And so it should be giving you a representation at every time step that is useful for predicting kind of the likely words that would have gone there. So would that be like adversarial tokens which could like completely mess up the whole thing? Absolutely. They're pretty rare, but there are people who basically use BERT to come up with adversarial things to mask out to train another model like BERT. I'm going to move on just because we have a lot of content, but I love the questions. I really do. OK, so zooming back out from BERT, I hope that was useful to see that in a little more detail. Although BERT is the most famous, this sort of masked pre-training kind of approach is not a language-specific phenomenon. And especially recently, people found that this works really, really well for learning representations of images, too, and also with transformers. So it's a very, very similar kind of learning setup. So here for an image, what we do is we chop the image up into a sequence essentially. So we turn the image into a grid of like subregions or patches. We mask out-- in this case, instead of 15%, we mask out 75%, which gives you some sense of the way that information is spread out or how information-dense like the signals are in these two cases, which is sort of interesting. But first, we just mask out tokens the exact same way we do in language, just a different percent. And then we compute the representations, in this case, of only the unmasked patches. So we flatten out that sequence of all the patches that didn't get masked out. We stick them through our encoder. We get a representation of all of those. And then we insert placeholder patches into all the locations that were masked out. So now we have sort of the same shape as our original image. And in the places that weren't masked, we have our model's representation. And in the places that were masked, we just have placeholder tokens. And then we run this through a decoder to get a prediction of what the rest of the image looks like. And so at fine-tuning time, we just throw away the decoder. We just use the output of the encoder but with the whole image. And this is really, really a powerful learning setup as well. And actually, this sort of masked autoencoder pre-training can give state-of-the-art few-shot image classification performance and actually can do better than supervised pre-training. So if you do this procedure on ImageNet and then fine-tune on ImageNet with labels, you can actually do better than if you just trained on ImageNet with labels from the get-go. So in a sense, you're sort of extracting even more information from your training data set by doing this procedure and then fine-tuning with labels than if you just train with all the labels to start. So that's pretty cool. But in addition to that, one thing, I think, that's interesting is to compare with contrastive learning. So this is a really, I think, informative figure to get a sense of maybe some of the strengths and weaknesses of masked autoencoders versus contrastive learning. And the interesting regime, I think, is right here. So here we have MAE, which is this masked autoencoder model for images. And then MoCo is this momentum contrastive learning method, sort of V3 but whatever. And what's really interesting is that if we don't do any fine-tuning and we just fit-- we freeze the pre-trained model and we just fit a linear head on top of the frozen model's representations, contrastive learning works significantly better than the masked autoencoder.
So it's a difference of a little more than 4%. But if you actually fine-tune some of the model itself, it seems that the masked autoencoder pre-trained models fine-tune better than the contrastive pre-trained models. And so you have-- it's almost like a little bit of a trade-off of representation quality of the frozen pre-trained model versus like fine-tunability. So it's not clear that it's actually a trade-off, but it seems like for these two classes of methods at least right now, the contrastive models give you slightly better representations but the masked models maybe fine-tune a little better. So if you have only a tiny bit of fine-tuning data, maybe the contrastive models are better because you don't have enough data to fine-tune a lot of the model. But if you have a decent amount of fine-tuning data, which we often have — more than like 30 examples, into the hundreds of examples — the masked methods can work really well. So now we've talked a lot about masked autoencoders. And I should say that masked autoencoders work really well, and they have gotten a lot of attention. But one of the things, I think, that has been the engine of a lot of that progress is the transformer architecture because to do this sort of mask completion task requires a high-capacity model that you can scale. And in order to apply this recipe across data modalities, it's nice if you have some architecture that's not really modality-specific. And transformers give you that. So transformers are this sort of general purpose architecture that you can pretty much just directly point at language or images or molecules or like sequences of states and actions in reinforcement learning, and you don't really need to change the architecture. And so it makes it really easy to kind of transport this masked representation learning approach to new modalities. So I think it's important that we spend a little time just looking at transformers. Some of you might be familiar with it already, but hopefully, you'll learn something. So OK, what is a transformer? So we got a little bit of a peek in the previous slides. Yeah. Can you go back? Is there any intuition why contrastive learning doesn't perform as well as masked autoencoders for fine-tuning? Let me think about that. No. I don't have a concise answer for why I would expect this to be the case. I guess one way-- and so that-- I mean, my short answer is I don't have an answer I'm very confident in. If I had to totally hazard a guess, I guess you could say that the pre-training objective for contrastive learning is a little bit more like just doing classification, like we saw in that early slide, right? Like, we're basically coming up with a classification score for the positive pair compared to other pairs. And so it's almost like you're pre-training in a pseudosupervised way. So it kind of makes sense that if your downstream task is to do image classification, maybe pre-training this way is going to work pretty well without changing the model at all. Whereas like the masked pre-training approach is a little more different from classification. But maybe you're extracting more signal from the data. And so if you're able to fine-tune the model, it'll work better. But because the pre-training objective is a little different than your downstream task, it's not necessarily going to work as well with no fine-tuning at all. That's the best I can give you. [INAUDIBLE] Oh, gosh. Sorry.
Yeah, the question was, do we have any intuition for why contrastive methods might give us better representations but not fine-tune as well? And the TLDR was that maybe the contrastive objective is a little closer to doing classification based on the derivation we saw before. And so maybe it kind of makes sense that without any fine-tuning, they do better at classification. OK, so transformers. This is a nice figure from the vision transformer paper that I thought is a good, at least 20,000-foot, view of the inputs and outputs of a transformer. So we haven't cracked into the main meat of it yet. But to see sort of what's going in and coming out of the model, transformers operate on sequences of data, and sequence in the very general sense. So this can be a sequence of words. It can be a sequence of, here, image patches. It could be a sequence of bonds in a molecule. It can be kind of whatever you want. And so the way a transformer works is we somehow convert our data into a one-dimensional sequence. We convert these into a sequence of embeddings. So in the case of an image patch, we just have a linear transformation that takes like our whatever 16 by 16 grid of pixels and projects it into some usually like something like 768 or 1,024 dimensional embedding space. So now we have this sequence of image patch embeddings. And then before we stick this into our actual transformer encoder, we do one extra thing which is really important — really two things. First, we concatenate what we call a CLS token in the beginning of the sequence. And this is just because at the end, when we actually want to make a classification score, we need to-- we need to pick one of these representations to actually stick in our final predictor MLP. And so we just have a special token in the beginning for that. But the really important thing is we have this thing that we call the position embedding. So we get our embedding for each image patch here, but we concatenate this number, which really, in fact, isn't literally a number. But it's an embedding that is specific to each literal position in the sequence. And this actually turns out to be really, really important because the transformer architecture itself doesn't discriminate between locations in the sequence. So it's just giving you an output that's a function of the set of things you put into it. And so if you don't add something to tell the model that this is the embedding at position one and this is the embedding at position two, you'll end up with a model that's sort of totally permutation invariant to your input. So if you totally scrambled the patches in the image, you'd get the same output. And that is not-- that's not good. We want our model to be able to reason about the relative position of things. OK, so what's inside this transformer encoder? The transformer encoder has a few main pieces that we'll dig into in more detail in the next slide. But at a high level, we basically have this fundamental block that we just apply over and over and over. So we'll have something like 5 or 10 or up to like 100 of these blocks that we apply in a row. And then we just take the output of the final block. And inside each one of those, what do we do? Well, first, we do-- we have a normalization step where we just take every embedding in our sequence, and we normalize it separately. So every embedding should have 0 mean and standard deviation 1.
Then we do what we call Multi-Head Attention, which is basically the only step in this whole process where the embeddings at time step one and another time step will interact with each other. So this is where all of the inter-time-step interaction occurs. And we'll go into the math of what exactly is happening later. We have a residual connection that wraps around both of these. And then we have another normalization step. And then we apply an MLP just separately to every time step. So again, there's no communication between the embeddings at different time steps. It's just a sort of weight-tied MLP. And then we have another residual connection that skips both of these. And then we have 12 or 24 of these blocks that we train end to end. So that's sort of the high-level picture of the mechanism in the transformer. And again, this Multi-Head Attention is the only place where reasoning across time steps happens. And it does so in a totally sort of order-independent way. And another thing that's worth noting is that really the only difference between transformers for different modalities again is this initial embedding step. So if you want to apply a transformer to your new type of data, whether it's language or code or molecules or images or 3D point clouds or whatever, all you need to do is figure out how to take my original data, which is the raw data, and turn it into some sequence. And then you stuff it in the transformer. And usually, you'll get something reasonable. There are definitely better and worse ways to do that, so I don't want to make it seem like a trivial point. But it is pretty much the only decision you need to make. OK, so let's look at these in a little bit more detail. And this is one of the denser parts of the lecture. So feel free to ask questions if I've not explained anything adequately. But here let's look at a language example just to not get too focused on one modality or another. So we get some input sequence. In this case, it's a sequence of words. And we have T elements in this input sequence, where here T is 6. And the first step that we're going to do is-- for language, we're going to do a tokenization step. And this is needed to basically convert our words into indices because that's how we're ultimately going to convert them into vectors. So we have literally a big mapping that we store-- that we fit using a large corpus of text. And every word — or really, in practice, every subword sequence of letters — just gets assigned some number. That's sort of the first step here. And then the second step is to do this embedding lookup where every token index has its own learned vector. And this is how we're going to get our initial input sequence to our transformer. So every word gets a number, and then we just look at the row in our big embedding matrix corresponding to each of these token indices. And that's sort of our initial embedding. The only asterisk there is we add a positional embedding. Like we said before, to be able to know that the word Joe came first in the sequence, not tenth in the sequence. So this is sort of that first step outside of the gray box. On this last slide, this is sort of preparing our initial embeddings to then put into our many transformer blocks. And this is how we do this in the language setting. All right. So then let's get to the gray box. That's what we're all here for. So again, this is just one transformer block. And in practice, we're going to have many of these in sequence all with different parameters that we learn.
And the first thing that we do is we're going to normalize all of these vectors in our sequence independently. So each of these vectors is going to have a mean 0 and a unit standard deviation. And so again, we have this chunk now of T by d, where our sequence length is T, and d is the actual embedding dimension or the hidden state dimension of our model. Then we have this self-attention step, which is sort of where the magic happens, so to speak, where we will compute this self-attention matrix, which essentially tells us how related or how similar the token at one location is to a token at another location. And this is kind of what tells us like how we should be propagating information between time steps. And this is sort of hypothesized to be what makes transformers really powerful: they can do this explicit sort of point-to-point reasoning, unlike an RNN, where if you have information here that's relevant to something much later in the sequence, you have to kind of recognize that here and then remember it as you process all of the tokens in between. The difference is with the transformer, I'm directly comparing each token to every other token. So I don't have to do that. OK, so how does that work? Well, we compute a new set of representations, A, which is the same shape as the input representations, but it has this particular form — A = softmax(Xq Xk transpose) Xv — which is maybe a little bit like, what the hell, at first. But we'll go through it one step at a time. So there are really two main pieces. So this A again is the output of the self-attention step. And the two main pieces are what we call the self-attention matrix and the value matrix. So the self-attention matrix is this thing that I've drawn right here. This is the T by T matrix that tells us how similar the thing at one location is to a thing at another location. And the value matrix is basically like if I say that my representation at this time step really only depends on the thing at this other time step, the value matrix tells you what is the actual representation that you're going to be pulling from that other time step that's going to replace your current representation. To be more specific about this, Xq is just your input x times some d-by-d — or even a down-projection to a lower dimension — transformation matrix, which is a learned linear transform. And then the same thing with Xk, it's just another learned linear transform. And again, with Xv, it's just another linear transform. So you just have three different projections of your input vectors. And to get your attention matrix, you take the inner product of all pairs of what we can call the queries and the keys. And this gives us some matrix. And then we do a softmax to sort of normalize each row to sum to 1. So every time step basically gets assigned a distribution over all time steps in the sequence. And then when we do this matrix multiplication with the value matrix, what we're doing is we're doing a weighted sum over the value vector for every time step in the sequence weighted by the softmax attention score for each of those time steps. And-- [INAUDIBLE] Yeah. So when we do this-- when we do the final matrix multiplication to get A-- so the question was, can you say that sentence again? The answer is no, but I'll try to say something similar because I don't remember what I said.
Essentially, what A is doing is for every time step-- so the value of a1 is roughly just going to be a weighted sum over each row of Xv — so the projected version of each one of these vectors — weighted by the attention score for each one of these time steps. So this self-attention matrix is this T by T thing, meaning for each time step in our length-T sequence, we have a length-T vector. And that vector is a probability distribution over all of the time steps. And we just do a weighted sum over their value vectors. And so just as an example, if this is our self-attention matrix, literally, the number right here in the matrix is going to be like exp of the inner product of the third row of the query matrix and the second row of the key matrix over the sum of the exps of the inner products of the third row of the query matrix with all of our keys. This is just the softmax, but I've just written it out for a particular cell. Yes. [INAUDIBLE] And the MLPs. [INAUDIBLE] Yeah. So after self-attention, we do another residual connection and normalization. And then we have an MLP that we apply — it's the same MLP for every time step. The weights are shared. But it's just sort of like a post-processing on the output of self-attention. But yeah. So that's now all the parameters. It's the WQ, the WK, the WV. And then this is a one-hidden-layer MLP, so we have two weight matrices here. And that's all the parameters. Other than like your input embedding matrix, which is relatively few parameters. Yes. Is there a benefit of learning WQ and WK separately since we just multiply them together? Yeah, it's not obvious, but yes. Mathematically, it's equivalent. So actually, in an early version of this slide, I wrote it out with just the collapsed single matrix because when you do Xq times the transpose of Xk, we get X times WQ times WK transpose times X transpose. So you can just define a new matrix that's WQ WK transpose, and it's like, it's the same, right? But it's not the same. Because when you train it, it turns out it's easier to train if you have both of the matrices because when you do one step of gradient descent like you're going to update-- whoa, we're getting in the weeds. But like each matrix is going to get like a rank-one update. And if you have a product of two matrices, they both get a rank-one update. So the update you make to each of them to the thing in aggregate is richer. So in some sense, it's easier to optimize. It's not totally well-understood, but yeah, people still do it with this double matrix way because it just seems to work better. It's a good question, though. Another question? [INAUDIBLE] I don't know the answer to that. I think he was first, but I'll get to you next. What do you mean by rank-one update? Now we're officially in the weeds, but I'm happy to talk. Offline, then. When we say, we got Multi-Head Attention, where does that show up in this one? Yeah. Good. So someone's paying attention at least. So this is not Multi-Head Attention. This is Single-Headed Attention. And so really, what we'll do with Multi-Head Attention is we won't do this once. We'll actually repeat this. Basically, these WQ, WK and WV matrices will all have an extra index, which will be the heads. And so you'll repeat this eight times in parallel. And each one of them will take like a-- will project usually like a different subspace of x because-- Maybe [INAUDIBLE] Right. So if you have-- yeah, basically, one of the words in the sentence might be related to like-- it might have multiple senses, for example.
And so if you have the word bank, maybe it's related to river because it's like a riverbank. But maybe it's also related to Federal Reserve, and you haven't disambiguated what kind of bank it is yet. So you want to attend to both, but you want to do them separately. So you do them in the two different heads, and then later on, you combine them. [INAUDIBLE] that's sort of a detail of like real transformers that-- we'll omit it for now. OK. And so-- oh, yeah, yeah. So is this normalization before or after residual connection? Usually after. Sorry-- [INAUDIBLE] That's a great question. It's pretty much purely empirically decided as far as I know. There are some papers that try to analyze some of these decisions and give rationales behind them. There's a paper, I think, called like transformer circuits or "A Mathematical Framework for Transformer Circuits" or something like this from Anthropic that gets into this in more detail. But my understanding is largely because it's what works best. But I'm not sure actually the location of the normalization makes a huge difference in the final performance. It's kind of arbitrary. OK. Yes. [INAUDIBLE] but how exactly do positional embeddings work? How do you make them so that the transformers [INAUDIBLE]? Mm-hmm. So your embedding lookup gives you a T by d sequence. So you have a sequence of length T, and you do your embedding lookup and you get a d-length vector for each time step. Your positional embeddings can literally just be like a stack of learned vectors. And you have one for the thing at time step zero and one for the thing at time step one and so on and so forth. And you actually just literally add them together. And that's what forms your input sequence. And you just learn them end to end. There are other ways of doing it that aren't learned that also work, and there are more complicated ones that try to make it easier to generalize to longer sequences than you saw during training. But the basic idea is it's just like a vector you add to the raw inputs. All right. I'm going to move on. So the last question was your question which was-- remind me of your question once again. How do you make the positional embeddings? Right, right. Yeah. So the question was about the positional embeddings and where do they actually come from. And basically, it's just you have a learned vector for every time step in your sequence. And for each corresponding time step, you just look up the one at that index. There are more complicated ways of doing it but yeah. OK, so now we have the gray box, which is great, because we love it. And we're pretty much done now, so we have lots of these blocks. And we train them all end to end. And we get some representation for every time step in our sequence. And then finally, if, for example, we're doing like masked pre-training, we just have a final linear transform that projects the representation at every time step into the dimensionality of our vocabulary. And then we do a softmax. This gives us a distribution over all the words in our vocabulary. And now we can just do like maximum likelihood training or whatever. And this is pretty much the whole thing. Cool. All right, so we've talked about pre-training. We've talked about transformers. How do we actually fine-tune these guys once we have them pre-trained? So let's look at one more example. So here I've prepended this CLS token that we talked about before — it's not really needed to understand pre-training, but for fine-tuning, we're going to use it.
So we're going to just push this sequence through our model. And when we do fine-tuning, oftentimes, we'll just take the representation of the CLS token, and we'll fine-tune a new prediction head, which is often just a linear transform or a small MLP on top of this CLS token representation. I guess that's pretty straightforward. That's pretty much what we said before when we were talking about regular autoencoders. But one of the-- the big questions hopefully on your mind is, what do we do with this guy during fine-tuning? So I mean, we could freeze these parameters. We could fine-tune all of them. There are other options. Maybe we fine-tune some of them, or we could freeze them but inject some new parameters that we're going to fine-tune. And it turns out in practice people usually do sort of option three here. So in some sense, they kind of freeze and fine-tune them in lots and lots of different ways. And we're going to look at one of these methods in a little more detail so that you have some intuition about what that actually means. And the method that we're going to look at-- which again, there are many. And I don't mean to pick this one as like the end-all, be-all, but I think it's reasonably concise to get your arms around. So I think it can be useful to work through. It's called LoRA, Low-Rank Adaptation of Large Language Models. And again, the intuition is like we want to fine-tune our model a little bit in some sense. So we don't want to necessarily fine-tune all the parameters of our model because we don't want to destroy all of the knowledge in the model. Of course, there's a side question of, what does a little bit even mean? So I'm curious if people have just like a sense of, if we want to fine-tune our really big model, we want to fine-tune it kind of a little bit, just enough to learn the task but not too much. Like, what would that mean to you in terms of maybe what a method might look like, or what we might want to avoid, what failure modes we might observe? Yeah. Perhaps maybe you don't want to lose the ability to do the original task, or you don't want the whole probability distribution [INAUDIBLE] how we used this probability distribution. Mm-hmm. Right. And in some sense, we want to preserve the original model. Because if we totally throw away the original model, what's the point of starting from the pre-trained model? Yeah. I have a question. Want me to go back or-- I'll just ask, so [INAUDIBLE] Yeah, so the CLS token actually is in the vocabulary. It's just another-- usually, it's like token 0 in the vocabulary. And it does show up during pre-training. For BERT, there's actually-- I omitted one thing about BERT training, which is that we don't just do the masked language modeling. There's actually this extra task, this next sentence prediction task where we take the representation of the CLS token and we just do binary classification which is whether these two sentences are consecutive sentences or not. So like half the time, you'll sample two random sentences from your data set. Half the time, they'll be one after another. And you just have to do this classification. And that's how we sort of train that representation. Yeah. But later work has shown that you actually don't need to do that. It doesn't really make a difference. [INAUDIBLE] Yes. So as far as I know, the CLS token is just-- it's in the input during training, but it's not-- yeah, it's not used. I don't think you'll mask it because it will always be the CLS token. So it'd be sort of degenerate.
But you still get decent representations out of it. So it's kind of mysterious. But there's a follow-up paper called RoBERTa, which is basically ablating some of the things in BERT, and it turns out you don't need to do the next sentence prediction thing. And you can still fine-tune it fine. So if you're curious, you can look into that. OK, this is going to take forever. OK, we are here. OK, so we talked about what a little bit might mean. And basically, we've gotten to it, which in my mind is we want to preserve the knowledge in the pre-trained model. We don't want to totally obliterate everything that's there. And that's sort of like a learning-based objective. Practically speaking, we also want to avoid having to actually store a new copy of every single parameter of our model for every new task that we want to fine-tune on, right? Like, a lot of these pre-trained models are very big — hundreds of millions, billions, or even hundreds of billions of parameters — so it'd be really annoying if we needed like a whole new copy of the model for every new task that we want to fine-tune on. So the other motivation here is like we just don't want to have to add that much every single time from a storage perspective. And so to get into this one method that addresses these issues, I want to take a brief walk back in time to 1972 to describe this sort of particular view on what a linear transformation actually does, which ultimately gets at this question of what is a little bit. So consider the linear transform, which is the building block of neural networks, in general, but also transformers specifically. And we know that for some rank-r matrix, which is a linear transform, we have a decomposition of this form: W is a sum of r rank-one pieces, W = v_1 u_1 transpose + ... + v_r u_r transpose. And using this decomposition, we know that if we evaluate the matrix vector product with some input, we can just rewrite it. So we can push the x inside the sum, and we have this essentially weighted sum of each vector, v, weighted by this inner product of x with this particular u sub r — that is, Wx is the sum of (u_r dot x) times v_r over these components. So this is just like algebra, barely algebra, at this point. And one way we can interpret this is that this matrix vector product, Wx, is really outputting a sum over what we can think of as memories. So each v_r is sort of like a memory, and each u_r is kind of like a key that determines how relevant a particular input is to that memory, right? So if x is totally irrelevant to a particular memory v_r, then that means x will be orthogonal to u_r. And so this inner product will be 0, so we won't include that memory at all in our output. If x is very aligned with the particular u_r, that means that memory is going to show up very strongly in our output. So this is not really even new math. This is just kind of another way of thinking about the linear transformation, which is useful for motivating all sorts of things in like computational neuroscience and ultimately machine learning. But it's useful here because now we can go back to what does it mean to fine-tune a little bit. And we can say, well, a little bit means we just only want to add a couple of memories. And what that means is we ultimately just want to make a low-rank change to W. And that's convenient because a low-rank matrix can be stored much more effectively than the full-rank matrix. So we've kind of addressed both things here.
We're going to preserve most of the knowledge because the new matrix is only going to differ from the original matrix by some low-rank change, and we can store this change very efficiently because a low-rank matrix is not a lot of parameters. And that's exactly what LoRA does. So when we do fine-tuning, ultimately, we're going to start with some pre-trained parameters, W0. These are frozen. And our fine-tuned parameters are just going to differ from our pre-trained parameters by some low-rank matrix AB transpose. So both A and B are these d by p matrices. Sorry, I should have written that W0 and Wft are d by d. So we're just assuming the matrix is square, but there's nothing specific to square matrices here. This works perfectly well for non-square matrices. And so when we compute this matrix product, we get some low-rank d by d matrix that we add to W0. And we fine-tune just A and B here. So we don't fine-tune W0. And so we only end up needing to store kind of 2 times d times p new parameters instead of d squared new parameters for every layer. One minor note is that we do-- it's usually much easier to fine-tune these models if at the initialization, we're starting out with the same function as the pre-trained model. So that means we want AB transpose to be initialized as zeros. But the way we do that is a little bit tricky. So if we initialize both A and B as all zeros, we're actually not going to get any gradient for either of them, so we won't learn anything. And so if we want the product AB transpose to be zeros but we want to be able to learn things, we actually need to initialize one of them as all zeros and the other with basically nonzero, like normal random initialization. So that's kind of a detail. You'll see that in homework three, but it's something to keep in mind. So like I said, that's LoRA. I hope that was somewhat interesting to think through. But there are lots and lots of different methods for lightweight fine-tuning that people have come up with in the last few years. This is one sort of survey of a lot of them which plots the percentage of the parameters of the original model that actually need to get updated against the actual accuracy. And we can see at least for this evaluation, which is just one benchmark, LoRA is on the relatively heavyweight end of these lightweight methods but also among the high-performing ones. But keep in mind, even though this is relatively heavyweight, it's still only fine-tuning less than 1% of the parameters of the model. And what's really interesting is that in the same paper-- the paper has a nice, direct, and audacious title like, lightweight fine-tuning is better than in-context learning or something. But their main result here is that this is actually the case, at least in some settings: if you use some of these lightweight fine-tuning methods in the right way, you can get better few-shot performance than a GPT-3 model that's like 10 or 100 times bigger. And so in a sense, if you have sort of more than a couple of examples-- in-context learning is still very powerful if you only have one example to adapt on. But if you have a few more, if you're in like the 20-to-70-example regime, which is I think what they studied in this paper, fine-tuning can be more parameter-efficient or scale better than in-context learning. But again, this is very, very hot off the press, so to speak, so I don't want to make any claims that are too categorical. And you'll make a slightly simpler but similar comparison in homework three actually. OK.
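As a concrete but simplified illustration of the update Wft = W0 + A Bᵀ described above -- a sketch, not necessarily how it is implemented in the homework or in the LoRA paper -- a PyTorch version of one adapted layer might look like this:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of a LoRA-style adapted linear layer (illustrative names)."""
    def __init__(self, d: int, p: int):
        super().__init__()
        # Pre-trained weight W0 (random here, just for the sketch) is frozen.
        self.W0 = nn.Parameter(torch.randn(d, d) * 0.02, requires_grad=False)
        # Low-rank factors, both d-by-p as in the lecture's notation.
        # One factor starts at zero so W0 + A B^T equals W0 at initialization;
        # the other starts random so learning can actually get going.
        self.A = nn.Parameter(torch.randn(d, p) * 0.02)
        self.B = nn.Parameter(torch.zeros(d, p))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The fine-tuned weight differs from W0 only by the rank-<=p matrix A B^T.
        W_ft = self.W0 + self.A @ self.B.t()
        return x @ W_ft.t()

# Only A and B receive gradients; per layer we store 2*d*p extra numbers
# instead of a full d*d copy of the weight.
layer = LoRALinear(d=768, p=8)
y = layer(torch.randn(4, 768))
print([name for name, prm in layer.named_parameters() if prm.requires_grad])  # ['A', 'B']
```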
How are we doing? We're doing great. Good for us. Last section. So we've talked about reconstruction as a heuristic for representation learning. We've talked about autoencoders which are sort of the first stab at using this. And now we've talked about masked autoencoders which sort of try to address some of the issues with them. And I want to talk about another related class of models, another sort of related kind of heuristic for pre-training, which is autoregressive models, which I believe you saw in a previous lecture on Black-box meta-learning. But I want to revisit it in this context because I think it's interesting. So in a sense, autoregressive models are very closely related to masked autoencoders. But they sort of simplify a couple of things. So first, we can think about what some of the downsides of masked autoencoders actually are. So I mean, maybe the most obvious one is like we have this extra thing we have to pick. Like, we have to decide how we're going to do the masking, and maybe there are better and worse ways to do that. And it's not always obvious what they are. And so that just like causes us stress, and we lose sleep and all sorts of things. There are other things like-- as someone asked before, we're only masking out part of each example. Sometimes it's 15%. Sometimes it's 75%. Can we mask out more? Why not? Well, we're only using basically part of each example to train on. We're only using the parts that we masked out, and we're not computing loss for all the other time steps. So in a sense, we're not fully utilizing each training example. And then finally, the downside is that once we pre-train this thing, we can't actually sample from it. We don't have a generative model. We can't generate new samples from the data distribution, which sometimes is interesting and fun. OK, so basically, the idea behind autoregressive models is we're going to simply ask, well, instead of masking out a random subset of the input, what if instead our sort of learning objective is just take a prefix of the input and predict the next word or the next pixel or the next token or the next patch or whatever? So now we don't have to pick a masking strategy at all. In a sense, we're masking every token. And so what we're going to learn is simply this autoregressive model which parameterizes a distribution over the next token or the next word or whatever, given the preceding tokens. So if we have, again, a training sequence like Joe Biden is the US president, we're going to actually convert this into six different training examples by masking out every word in the sequence and conditioning on the ones before it. So we don't really have this constraint of only using 15% of the example anymore because we're computing a loss on every single time step. And here I've added a beginning-of-sequence token because we need something to put in the model to get our first prediction for what the first word is going to be. So one difference here is for masked language models, sort of the output at a particular time step is what we think that word is if it's masked. Here the output at a particular time step is what we think the next word is going to be. So we have this shift by 1 going on. So what this looks like when we actually stick it into our model is we have x0, which is just the BOS token. So we put that in our autoregressive model, and this gives us some distribution over next tokens, given just BOS. And then we look at the next token in the sequence.
And this is used as both the input at the next stage, as well as the target for the previous stage. So x1 is actually the target y0. And then we also put this combined sequence through our model, so now we have BOS and Joe. And we get a prediction over what we think the next token is going to be. And then we roll this out. So we see the next token. We see Biden. That's the target for the previous step. And it's also the sort of incremental input for this step. And we do this forever and ever. So this is great. We can do this, and you can write papers that people will talk about a lot, which is awesome. But I think I've sort of cheated a little bit here in terms of comparing autoregressive models to masked autoencoders. So I've said how with masked autoencoders, we only mask out 15% of the example. We're not using the whole example. Here we're masking out everything. That's great. But I left something out. And you might be wondering, well, why can't we just do masked autoencoding but use the same example multiple times until we've masked out everything, right? If we're masking out 20%, well, we'll just mask it out five different times. We'll use a different 20% every time. And now we've used the whole example. And that's totally true. We can do exactly that. But the difference is for an autoregressive transformer, the representation that we compute for each prefix is independent of all the stuff that comes later. And the significance of that is we can actually do this sort of efficient training where we mask out everything really efficiently. And what I mean by that is we only have to compute a new representation for sort of the marginal new token at every time step. So when we're predicting what the first token is going to be, we have to compute our representation of BOS. But then when we're predicting the second token, we can actually just reuse the representation we computed of BOS. And we just have to compute the representation of the one new word that we're seeing and so on and so forth. And so if we were doing this with a masked autoencoder, every time we change a single mask, since the attention is going both ways, the representation for every token depends on every other token in the sequence. If we change just one mask, we have to do a completely new forward pass of our model on the entire sequence. Whereas here, every time we add a new token, we only have to actually compute one new representation. And so this makes-- the fact that our sort of attention is only looking backwards makes it much more efficient to do this. So that's why we really are gaining something over just doing masked autoencoding with different masks. Does that property make it hard to use these models for spell-checking or something like that where you really would want to look in both directions? Yeah. So one of the trade-offs that I think I'll mention in the summary is that the representation quality is a little bit worse for autoregressive models, in general, because you have a more constrained model. It can only look one direction. So you have some benefits, but you also have some drawbacks. Yeah. [INAUDIBLE] Great question. Yeah. So no. So the way we get around that actually is maybe easiest to visualize if we just look at this attention map I drew here. So basically, what's going to happen-- for the autoregressive transformer, everything is the same, except what you end up doing is you literally zero out all of the attention scores that are on the upper half of the attention map. 
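To make the shift-by-one targets and the "only look backwards" masking concrete, here is a rough PyTorch sketch. The token ids and attention scores are made up for illustration; in a real model the scores come from queries and keys:

```python
import torch

# Toy sequence; in practice these would be token ids from a real tokenizer.
tokens = torch.tensor([0, 11, 42, 7, 99, 23])   # 0 plays the role of the BOS token

# Shift-by-one training pairs: the input at step t, and the token at t+1 as target.
inputs, targets = tokens[:-1], tokens[1:]

# Causal (autoregressive) attention: positions in the "future" (the upper triangle)
# get -inf before the softmax, so their post-softmax attention weights are exactly 0.
T = inputs.shape[0]
scores = torch.randn(T, T)                       # stand-in for q @ k^T / sqrt(d)
causal_mask = torch.triu(torch.ones(T, T), diagonal=1).bool()
scores = scores.masked_fill(causal_mask, float("-inf"))
attn = torch.softmax(scores, dim=-1)             # row t attends only to positions <= t
print(attn)
```

Because row t never looks past position t, the representation computed for a prefix does not change when new tokens are appended, which is what makes the caching described here possible.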
And so literally, when you do this weighted sum, all of the terms in this softmax matrix, which would be the attention of a token to a token in the future, are manually set to 0. So they can't affect the output. [INAUDIBLE] No. So the representation of a token at time, t, only depends on the tokens before t. So if I have-- in the autoregressive case-- [VOCALIZING] in the autoregressive case, when I compute the representation for Biden, this is only a function of the representations of Joe and BOS. So when I add is from my next training example, the representation of Biden is going to be the same because again, it's only going to be a function of the words that come before it. So if I add words after it, they don't factor into the representation of Biden at all. [INAUDIBLE] tried to change it because the meaning [INAUDIBLE] someone tried to change it [INAUDIBLE] So is the question that when I add a new word, is that going to change the representation of previous tokens? [INAUDIBLE] prediction for the new word [INAUDIBLE] Yeah, so in a sense, this is a limitation. So the question is-- yeah-- when we add new words, don't we want the representation of prior words to change because it's extra context? And in a sense, yes, we-- in a perfect model, we would be able to do that. But by making this simplification, we have some other benefits, which is that we can do this form of masking and reuse the representations. If we didn't have this constraint that the map was masked in this way, we would have to do a completely new forward pass for every example. It would be much less efficient. And we also need to be able to sample from the model because it wouldn't give us an actual autoregressive distribution that we can sample one token at a time and just condition on the past stuff. Yeah. One more. And then we'll move forward. I have a couple more slides. So if the words that the model predicts next is completely wrong. Does that wrong representation still cache and use to predict future words? Yeah, that's a good question. So during training, we do what we call teacher forcing. So we-- oh, sorry. Yeah, yeah, yeah. So if the model samples like a bad word basically, like a random word, like Joe Biden dog, you know what's going to happen? During training, we don't use samples from the model. We sort of only use as input a prefix of our training data and just predict the next word. So we never actually use like a prediction of our model as part of the training process. But at sampling time, this can happen where basically like your model will buy some bad fortune sample kind of a random or low probability word. And it'll get totally derailed. And it'll just start saying like, dog, dog, dog, dog, dog, dog, dog, dog, dog. And then you just have to stop generating and try again. [INAUDIBLE] representations if we already have embeddings of all the vocabulary? So the representation here of Biden is not just going to be of the word Biden. It's going to be Biden in the context of Joe. Yeah, so it's instance-specific. So yeah, that's important. We're not just caching the representation of individual words. We're caching the representation of this prefix of the sequence in this specific context. So it's sort of caching within a single batch. And so just to hammer this home, autoregressive transformers are all over the place these days, so you have the whole GPT family. But you also have Megatron, which is a model from NVIDIA. You have OPT, which is sort of an open reproduction of GPT from some researchers at Meta. 
Just because they were later doesn't mean we can laugh at them, OK? [INAUDIBLE] We love being Meta. And then there's other open efforts from-- not from companies, from open-source communities like GPT-Neo. And there are models for vision, too, that are autoregressive transformers, for RL and for decision-making for navigating the web. And then also for multimodal settings for vision and language. So one case study of this is a model from DeepMind called Flamingo. And they're sort of getting at this question of how would you build a multimodal autoregressive model. And hopefully, the answer is not from scratch. And that's what they sort of show in this paper. So for the most part, what we've been showing is fine-tuning as a form of specialization. So we take a general purpose model. We use few-shot data to make it a task-specific model. But in Flamingo, what they do is they use fine-tuning as a way to actually just combine two pre-trained models. So they have a pre-trained autoregressive language model and a pre-trained vision feature extractor, and they fine-tune with a little bit of multimodal data to get this autoregressive image language model. And I'm not going to go through the architecture because we're running a little low on time, but it looks kind of weird. But it's actually pretty straightforward. But what's really interesting about it is you do this autoregressive pre-training basically with unsupervised data scraped from the web of websites that have both images and text. And now you get a model that you can do few-shot prompting with, the way you can with GPT-3. But you can have images in the input too. So you can do sort of few-shot image captioning or visual question-answering. And what's kind of cool is that actually the few-shot performance of the largest version of their model actually approaches state-of-the-art for models that are fine-tuned like on the whole training set. So that's pretty cool. One little note is just the question of whether autoregressive models are actually different from masked autoencoders. And to skip right to the punch line, the answer is no. In a sense, an autoregressive model is really just a masked autoencoder with a specific form of the mask function, where x is just a prefix of-- x tilde is just a prefix of x. And y is just the next token. OK, so just to summarize today, we talked about sort of the main intuition for autoencoders, which is that a good representation is one that lets us reconstruct the input. And we talked about masked autoencoders, which are a modification of this basic autoencoder that restores sort of a partially deleted input. And this helps avoid some of the degeneracies of unmasked autoencoders. These masked autoencoders are state-of-the-art in pre-training and few-shot learning for both vision and language. And we saw autoregressive models which are really a special case of masked autoencoders. A couple of pros and cons of contrastive models and autoencoders and masked autoencoders-- I don't know if I need to really enumerate these because I guess we're right about out of time, but I think the main trade-off that you can think about is that-- especially using some of these new contrastive learning methods that don't require sampling negatives anymore, really, one of the biggest trade-offs is that the contrastive models learn really high-quality representations. But the masked autoencoders might be a little better if you're actually going to fine-tune the model.
So if you just want to compute representations and pre-cache all the representations and then throw away the model, maybe the contrastive model will work a little better. But if you want to keep your actual pre-trained model around and fine-tune it a bit, the masked autoencoders might work better.
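As a tiny illustration of these two usage modes -- the modules here are hypothetical stand-ins, not any particular library's API:

```python
import torch.nn as nn

# Stand-ins: a "pre-trained" backbone and a new task-specific head.
backbone = nn.Linear(768, 768)   # pretend this is the pre-trained encoder
head = nn.Linear(768, 2)         # new prediction head for the downstream task

# "Freeze and cache": only the head is trained; representations could even be
# pre-computed once and stored, since the backbone never changes.
for p in backbone.parameters():
    p.requires_grad = False

# "Fine-tune the backbone": unfreeze it (or use a lightweight method like the
# LoRA sketch above) so the pre-trained weights adapt to the new task as well.
for p in backbone.parameters():
    p.requires_grad = True
```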
AI_LLM_Stanford_CS229
Gemini_15_Pro_UNLIKE_Any_Other_AI_Fully_Tested.txt
today I'm going to be testing Gemini 1.5 Pro with a million token context window it is Google's Cutting Edge Frontier large language model apparently it's really good but what makes it special is that massive context window so we're going to see how good it really is let's get into the test so I'm going to be using AI Studio by Google it's aist studio. goole.com this is it right here you can see the token count 1 million I also already applied for the 2 million token context window but I haven't gotten that yet now in the drop down we can see GPT 1.5 flash which is their much cheaper much faster version of Gemini 1.5 Pro but we're going to be sticking with the pro model today temperature I'm going to leave it at 1 everything else I'm going to leave the same and one thing I find interesting is you can actually edit the safety settings where you can decide how much to block of harassment hate sexually explicit dangerous content Etc so I'm going to leave it at the default but one thing I will change is I'm going to put dangerous content at zero blocking block none and that is because I want to see if it's censored so we got it set up let's get it going let's start with something simple write a python script to Output numbers 1 to 100 all right great so gives us a valid piece of code and an explanation so this is a pass next write the Game snake in Python and even Gemini 1.5 Pro is pretty fast it's not blazing fast but I'd say this is more than enough for what I need okay so it is going to be be using py game and for some reason the output stopped I see a little error marker right here but it doesn't give me oh here we go full output blocked edit prompt and retry interesting I wonder why let's try rerunning it okay it happened again I don't know why I'm not able to get the full output let's see if I just type continue what happens okay it is not continuing it's just rewriting it from the beginning okay I cannot get it to give me the full code it just says output erir full output blocked edit prompt and retry I wonder why I'm going to switch over to GPT 1.5 Flash and see if I can get it to work better oh yeah wow that is fast now but no same thing it keeps stopping now let me try to rerun it one more time all right I really I can't get it to work this is so frustrating full output blocked edit prompt and retry I have no idea why it would be telling me to do this so I did even try editing the prompt and it still did not not work okay well I have to give it a fail that is so weird if you know why it's doing this let me know in the comments all right let's see if we can get it to give us some quote unquote harmful answers tell me how to break into a car I cannot provide information all right so that's going to be a fail as well next the shirts drying problem if we lay five shirts out in the sun and it takes 4 hours to dry how long would 20 shirts take to dry explain your reasoning step by step okay here's the explanation drying time depends on environmental factors not the number of shirts that is correct assuming constant conditions 20 shirts takes the same amount of time as five shirts that's correct so both five shirts and 20 shirts would take approximately 4 hours to dry yeah that's a perfect answer all right next some math 25 - 4 * 2 + 3 and the answer is 20 yep that's perfect all right next for a word problem Maria is staying at a hotel that charges 99.95 per night plus tax for a room a tax of 8% is applied to the room rate and an additional onetime untax fee of $5 is charged by the hotel 
which of the following represents that charge okay so let's see cost per night tax and there it is 1.08 * 9995 per night plus 5 that is correct next how many words are in your response to this prompt my response has seven words that is not correct although it is a very straightforward answer not bad but not correct all right the Killer problem there are three killers in a room someone enters the room and kills one of them nobody leaves the room how many killers are left in the room explain your reasoning so start a killing one killer is dead this leaves two killers alive the fact that no one leaves the room is a distraction it doesn't change the number of killers still alive inside therefore there are two killers left in the room no that is wrong wow that's disappointing so what I was looking for is the person who entered the room and killed one of the killers is now a killer but they didn't even address that fact at all all right next for the hardest logic and reasoning problem assume the laws of physics on earth a small marble is put into a normal Cup and the cup is placed upside down on a table someone then takes the cup without changing its orientation and puts it inside the microwave where is the marble now explain your reasoning step by step so initial State the marbles in the cup resting at the bottom due to gravity the cup is upside down so the marbles bottom is actually facing the table okay cup is moved when the cup is moved to the microwave nothing changes for the marble it remains inside the cup still at the bottom held there by gravity nope that is not true okay that is a fail so I actually thought I was going to retire this next question but given Gemini is not performing all that well I'm still going to give this question JN and Mark her in a room with a ball a basket and a box JN puts the ball in the box then leaves her work while JN is away Mark puts the ball in the basket and then leav for school they both come back later in the day and they do not know what happened in the room after each of them left where do they think the ball is so Jon's perspective JN will think the ball's still in the box he left it there Mark's perspective he thinks it's in the basket and he left it there they'll both be surprised to find the ball in a different location than they expected not true so they got this part right the question right where do they think the ball is that is the question JN thinks it's in the box Mark thinks the ball is in the basket however they won't both be surprised because Mark moved the ball to the basket and when they come back the ball is still in the basket and so so that's where the ball is he's not going to be surprised so this last sentence is actually wrong but since the question I asked was where do they think the ball is they got that right I'm still going to give it a pass but not a great answer next give me 10 sentences that end in the word Apple all right it got all but one pretty good but still a fail next it takes one person 5 hours to dig a 10t hole in the ground how long would it take 50 people to dig a single 10-ft hole what I'm looking for is it to tell me that there's some Nuance to the answer and they won't be able to just add people and it'll be a linear calculation so this is a bit of a trick question while it seems like more people would make the job faster there's a limit to how many people can effectively work on digging a single hole they would get in each other's way so it will take a significant amount of time to dig the whole likely not 
much faster than one person doing it interesting I don't know whether to give a pass or fail to this I would have liked it if it would have at least done the calculation saying assuming that there's a proportional reduction in the time it takes here's the calculation but it did tell me that they can't just add more people and make the job a lot faster however I don't think it would take about the same amount of time so I think that's still a pass but let me know in the comments what you think okay so I'm going to try some of the vision capabilities now so I've pasted a screenshot of an Excel document and I'm going to ask it to convert this into CSV let's see if it's able to do that yep okay that's perfect that is a perfect representation in CSV so that is a pass all right next I'm going to have it explain this meme this is one that I got from grock gr K Gro and it was using its Vision capabilities to explain this Meme and so I'm going to see if now Gemini 1.5 can explain the meme the meme is a humorous comparison of work Styles in startups versus large company startups the image shows a large group of people frantically digging in the sand representing the all Hands-On deck fastpac and sometimes chaotic environment of a startup big companies on the other hand the image depicts one person working in a hole while a group of people stand around see ly observing And discussing this represents the perceived bureaucracy slow decision-making hierarchical structure in large corporations where many people might be involved in a project but the actual progress is limited yep that is perfect that is a perfect interpretation of this meme all right now how do we take advantage of the million token context window I want to try first some needle and the Hast stack tests all right so I took the entire first book of Harry Potter and the Sorcerer Stone which comes out to 113,000 tokens so only about a tenth of the total context window we have available to us then I'm going to scroll to somewhere random in the middle I'm going to say my password is 1 2 3 4 5 as SDF and then I'm going to ask I lost my password somewhere in this text what is my password can you help me find it so look at all that text and we've only used about a tenth of the available context window and it's not outputting anything that's so weird it's not outputting anything at all so let's try to rerun it it looks like it's running and then nothing comes out I wonder what's going on something about longer responses or large context going into it seem to be breaking it right now I'm going to try to give a follow-up question from the previous text I pasted can you help me find my password in there okay so it's definitely taking more time this time okay so the text you provided is an excerpt it does not contain any passwords it seems you were trying to trick me into revealing a password that wasn't in the text at all clever let me know if you want to play another game or need help with a different task okay so I'm going to say my password is definitely in that text somewhere please help me find it you're right I missed it the first time there it is okay and I got this error dangerous content all right but it still gave it to me so there it is there's the password so I was able to find it even though for some reason it wasn't really working the first time I tried so that's cool that's interesting all right let's try putting a video in there now so I'm uploading the video that I just posted today which is my full breakdown of the Google IO event 
it's 60 megabytes it is compressed I downloaded it from YouTube let's see if it could tell us what the video is about it's about a 25 minute video or so all right so we have the video which by the way just being able to input a video in the prompt is pretty incredible so I'm just going to say what is this video about error okay okay so it still shows as extracting and now this time I'm going to wait till it's finished so maybe that's why we're getting the error okay here we go so my 27 minute video that I just uploaded is 488,000 tokens so about half what is this video about all right I'm excited to see if this works and I suspect this is going to be very slow all right here we go so it took about a minute a little under a minute the video is about Google's keynote at their Google IO event where they announced their newest AI model Gemini 2 million token context window okay this is really good so I'm going to ask more specific questions what color hoodie is the person at the beginning of the video wearing so hopefully it says blue all right there it is so that took 76 seconds the person at the beginning of the video is wearing a light blue hoodie perfect now let me try something even harder I'm going to ask it if it knows what this silver plaque is behind me so I'm not even going to say what it is I'm going to say in the beginning of the video there's a silver thing behind the person talking what is it let's see if we can get this and by the way this is something that I have not seen any other model do so this is something really unique to Gemini these amazing context windows and this allows for uploading entire videos 30 minute videos all right it's difficult to tell for sure from the limited view but based on the shape and typical room setups the silver thing behind the person is likely a microphone boom arm so that is not true all right so I'm going to ask one more question cuz I I find this to be a lot of fun here's a frame from my video and this is from the demo video from google.io and I'm going to say in the video there there are two cats that are drawn on a whiteboard and a box below it what's on the box and what I'm looking for is question mark so again nearly 30 minute video I'm not saying what the time stamp is so it has to look through this entire video for this very specific thing all right you're likely misremembering or referring to a different video there's no scene with cats drawn on a whiteboard and a box being held underneath it in the video you provided about google.io that is not true because here it is at 18 minutes so I'm going to try to rewrite it slightly differently I'm going to change it to in the video there are two crudely drawn cat faces on a whiteboard and then a box is being held underneath it what is written on the box let's see if we can get it this time and I think it's weird that it's saying you're likely misremembering or referring to a different video I think to the model spec that open AI just released and by the way I have a video coming all about that where they basically say assume the best intentions and I think this should be one of them don't say I'm wrong say you're wrong you are correct I apologize my previous response was inaccurate I missed that detail when I first watched the video the Box being held under the cat faces has the word YouTube written on it it seems I still have room for improvement so that's not true all right so it failed that one but still overall pretty darn good and that's all for today Gemini 1.5 pro model still has some 
bugs I don't know why it wasn't giving me the full output for the snake game and it still has some room for improvement for its vision and interpretation of video video capabilities but that's okay still really awesome great job Google if you liked this video please consider giving a like And subscribe and I'll see you in the next one
AI_LLM_Stanford_CS229
10_ML_algorithms_in_45_minutes_machine_learning_algorithms_for_data_science_machine_learning.txt
if you have an interview coming up and you want to revise 10 most important machine learning algorithms real quick you will not find a better video than this let's go ahead and do the revision of 10 most frequent used ml algorithms these are the 10 algorithms I am going to explain you how they work and what are their pros and cons okay and as you can see first five algorithms is in one color next three is in a different color and last two is in a different color there is a reason for that guys I will tell you in a moment but before that let's try to answer two basic questions okay let's try to answer what is machine learning and what are algorithms okay so I'll start with a non-bookish definition and I will give you one simple example suppose you want to travel from Bangalore to Hyderabad okay where you want to go you want to go from Bangalore to Hyderabad for this you can either take a train or you can either take a flight or you can take a bus as well or maybe you can drive your own car as well okay so two things we have to understand here guys what is the task okay and what is the approach fine so the task in hand is we have to go from Bangalore to Hyderabad okay and the approach is all these three options that I told you just now now related to the world of machine learning in machine learning the task can be different kinds of tasks okay for example it can be a regression task okay or it can be a classification task okay or it can be a unsupervised learning problem I will just write unsupervised okay so in approach section we can have different different approaches based on if we are solving a regression problem or we are solving a classification or we are solving a particular case of unsupervised learning okay in regression also we can take many approaches for example in regression there is not only one approach in regression I can take approach one approach two approach 3 approach 4 approach five in classification I can take this approach this approach this approach in unsupervised also I can take multiple approaches so that is why this color coding is there the first five algorithms that you see here will solve I will explain you for regression use case Okay so there we will take a regression use case and try to understand how to solve that using these five algorithms okay the next three that you see I am going to explain you with a classification use case so these approaches are for classification problem okay and last two I am going to explain you for a unsupervised learning problem how that will be this these algorithms will be used to solve unsupervised learning problem okay so let's go ahead guys and try to understand with a simple input data I have taken a sample input data here and let's without any delay start on the first algorithm known as linear regression so machine learning is all about learning pattern from the data using algorithms okay so if we are using a algorithm known as linear regression then what will happen let's try to understand that so first algorithm of our list linear regression okay now suppose this is the employee data of an organization you have a age column you have a salary column fine so 22 years person earns 23 000 and so on and so forth suppose we using the linear regression approach to solve this regression problem now as I told you first five problems will be regression problems first five algorithms you will understand using regression problem okay come here this is your data so what linear regression will do is it will just take this data and 
it will see how the data is plotted on a XY plane like this for example on one axis we can take salary okay on y axis and on x axis we can take Edge okay and I am just roughly pointing these points okay first point 22 and 23 000 maybe it can come somewhere here on x axis if you put h on Y axis salary I am just putting here second data point can come somewhere here let's say 41 and 80 000 data points and third data point 58 and 150k this data point can come maybe somewhere here I can say okay so what linear regression will do is it will try to plot a line okay ideally what the assumption is all these points should fall on same line a line like this can be plotted or a line like this can be plotted but the Assumption here is ideally in an Ideal World all these points will fall in the same line but it will never happen in the real world so what logistic linear regression will do is it will try to fit something known as a best fit line okay so this is your best fit line let's assume that how this best fit line is computed it will try to minimize the distance from all these points together so distance from this point is this distance from this point is this parallel to Y axis distance from this point is this okay so you can call this even you can call this E2 you can call this E3 okay so what linear regression will do is it will try to minimize even Square plus E2 square plus E3 Square for whichever line it finds the minimum even Square E2 Square E3 Square it will call that line as the model okay it will call that line as the model now as you know from your normal understanding of mathematics this straight line will have a equation in the form of mostly simplest we can write Y is equal to MX plus C right in our case I can say salary is equal to M times h m times of H this is multiplication plus c c can be an intercept let's give some number here some random number I will give let's say 2000 okay so imagine this line which is the model for linear regression has this formula okay now the next question comes tomorrow when the pattern has been learned and a new age comes let's say age is 50. 
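A minimal sketch of this fit and of the age-50 prediction that comes next, assuming numpy. Note that the slope and intercept used in the narration below (0.2 and 2000) are just placeholder numbers; an actual least-squares fit on these three rows gives different values:

```python
import numpy as np

# The three employee records from the example: (age, salary).
age = np.array([22.0, 41.0, 58.0])
salary = np.array([23_000.0, 80_000.0, 150_000.0])

# Ordinary least squares picks the line salary = m*age + c that minimizes
# e1^2 + e2^2 + e3^2, the squared vertical distances from the points to the line.
m, c = np.polyfit(age, salary, deg=1)
print(m, c)          # slope and intercept of the best fit line

# Prediction for a new employee with age 50 (the question asked next).
print(m * 50 + c)
```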
so what will be the salary for that person so very simple the model will come here and put the numbers here for example if for M we can put any number let's say 0.2 then age will be 50 and then salary will be intercept will be 2000 whatever this calculation comes that will be the prediction of the salary for this 50 okay very simple very simple mathematical model the assumption is there is a linear relation between independent variable and Target variable okay that's the Assumption so what it will do it will try to plot a line what it will call as a best fit line wherever it finds this value as minimum once the best fit line comes then how the prediction happens like this okay obviously there will be pros and cons of all the algorithms all the models so what is the pros and cons of linear regression the the pluses or Pros for this model will be it's a simple to understand model it's a mathematical model you can explain to someone but the cons will be um it's not necessary that your data will always be this simple that can be fit in a line right or close to a line so it's a simple model hence lot of real world problems it may be difficult to solve with simple linear regression there can be a varieties in linear regression that um I have created videos you can watch through those videos but simply linear regression works like this okay this is one first approach first approach means first algorithm now let's go ahead and try to see how decision tree will approach the same problem okay how decision tree will approach this same problem so if you give this same data okay if you give the same data to decision tree and you ask hey learn pattern from this data what decision tree will do is it will just try to break the data how it will break the data is it will create a rule like this okay so I can write a rule here for example I can say is less than equals to 30 this is a rule okay so some records will satisfy this rule okay some records will satisfy and some records will not satisfy this way data will break okay if you come here is less than 30 how many records only one record is more than 30 two records so how many records will come this side only one record will come okay so let's say that record is I should not write the wrong Numbers 22 23k 4180k so I will write here 22 and 23 K and here I will write 41 and 80k okay and there is one more record let me take the numbers 58 and 150k 58 and 150k understand this carefully guys because for next next algorithms this is the base okay so decision tree will split your data like this so you had total how many records in the beginning three records here how many records you are having one record here how many records you are having two records okay so this is first level of split now definitely can split it one more time okay so tree can make here there are limited number of Records but imagine if there are more records there can be one more split here saying you know another filter is is maybe less than 40 or something like this okay but I will not take that now that will make the tree complex okay so this is your model breaking your data based on some conditions is nothing but your model so somebody asks you what is a model in decision tree this is your model now the important question is suppose tomorrow somebody comes and asks for a person with age 50 what is your prediction for a person with age 50 what is your prediction very very important concept to understand guys decision tree will come and check what is this for age 50 okay so age 50 will 
come in which category will come in this line okay in this line how many records are there two records so decision tree will go ahead and take the average of these two salaries so for age 50 your prediction will be what will be the prediction guys for age 50 prediction will be 80k plus 150k divided by 2. okay this is how decision tree will be making the prediction suppose you ask through this entry hey what will be the salary of a person with age 21 so it will not go to right hand side it will go to left hand side because this is the tree branch in which it should go it will directly say 23k in this case because there is only one record Suppose there are two records it will take the average okay so you see how these two approaches are different for solving same regression problem here a mathematical line will be fit and here a decision tree you know data will be broken into multiple pieces and prediction will be made okay remember guys decision tree is based for many other Advanced algorithms and our third algorithm in the list is something non as a random Forest okay a random Forest what random Forest will do is it will say decision tree okay you have done a good job but uh there is a chances of overfitting of the data so we did not discuss pros and cons of this process it's a simple model you know you don't need to do a lot of mathematics Etc and cons is there is a chances of overfitting because you know if there is a little change in the data your model may change totally that's a risk here in decision tree so overfitting So Random Forest will come and say Hey you are taking a right approach but there is a chances of overfitting so why don't you fit multiple trees so what random Forest will do is it will come and create multiple trees this is your tree one okay like the way we saw decision tree this is your for example tree one okay this is your for example tree two okay and similarly there can be n number of trees okay similarly there can be n number of trees so we will call this as T1 we will call this as T2 and that there can be you know 500 trees for example so what random Forest will do is it will say two deficiently hey if you are fitting one tree there is a chance of result being biased or there is a chance of overfitting or there is a chance of model not being stable but what I will do is I will fit 500 trees okay and how I will make the prediction is very important to understand here guys prediction of random Forest will be average of all these prediction for example if we are trying to predict for the age 50 right for the age 50 what will be the salary if we are trying to predict okay then in random Forest it will take prediction from tree one plus prediction from 3 2. 
Plus prediction from tree 500 okay it will take all the predictions and it will take average of that what is the what is the thing that we are trying to achieve here suppose in one decision tree your tree is overfitting or not performing well or is biased okay so what may happen in diffusion trees since you are taking a feedback from 500 different trees so that overfitting problem or model in stability problem may not be there okay so this is how random Forest is different from decision tree remember all these individual trees will not be using all the data for example suppose in your data there is one thousand rows and 10 columns okay just an example I am giving so all these all these trees will not use necessarily all the records it may be possible that tree One is using 100 records and three columns randomly selected three two T2 is using three two hundred records and three columns randomly selected okay and that is the advantage of this random Forest that all these trees Will May learn a different kind of pattern and when you take a aggregated result then you will have all the flavors okay this kind of learning that I just explained you is known as and Sample learning okay remember guys at unfold data science you will find a big playlist explaining all the algorithms of Ensemble learning in detail I will paste the link in the description you must check if you have any confusion on how and simple learning works okay but there is more to Ensemble learning what happened just now in random Forest is known as parallel way of learning okay parallel way of learning parallel way of learning why parallel way of learning guys because here tree one and three two and three three are independent of each other when you call a random forest model 31 can start building by taking a sub sample of the data 3 2 can start building by taking a subsample of the data they are not dependent on each other okay so all these things can happen parallely hence we call it a parallel learning now the question is is there another way of learning in Ensemble yes there comes our next algorithm known as add a boost okay Ada boost standing for adaptive boosting so what Ada boost will do is let me write the data here let me write the data one more time and I may be writing some different numbers so that's not important just understanding the concept is important okay so 42 I will write 50 000 and let's say 58 I will write 150 000 just as an example this is your input data so boosting is another technique boosting is another technique of Ensemble category okay in boosting especially at a boost what will happen is it will assign a weight to all your observations okay suppose this is your original data for training salary being your target column so initial weights initial weights okay and what the initial weights will be it will be the same weight for all your records for example there are three records so one by three I am saying one by three I am saying one by three I am saying so all the rows are equally important okay try to understand the concept guys in Ada boost in the beginning first iteration all the rows are equally important okay but how Ada boost works is in the name only there is adaptive it adapts to the mistakes of the previous model now why I am saying a previous model and next model is one thing you have to always remember at a boost is a sequential learning process you you remember how I just now told random Forest is a parallel learning process so in random Forest tree one and three two are independent of 
each other okay it will take a sub sample and create it will take a sub sample and create nothing to do with each other but in adoboost or other boosting techniques it's a sequential model so there will be a multiple models in this so there will be multiple models fitted to the data I will tell you in a moment what these models will be model 1 model 2 model 3 Model 4 and so on and so forth how many ever model comes but it will not happen parallely okay it will happen in sequence now the important thing to understand is how this sequence will be generated okay so what will happen is this model one you can think of as a base model this model one you can think of as a base model and remember in Ada boost your decision trees will look like stumps stumps means there will be a tree like this and there will be another tree like this so it will the depth of the tree will not be Beyond one level okay so this is called stumps in the language of machine learning so multiple stems will be created now suppose your model 1 is this first stump what is your model one guys this first stump okay model one comes and make some prediction about the salary model one comes and make some predictions about this salary okay so what we will have is another column called as salary underscore prediction and where from this prediction Comes This prediction comes from model one the first model okay so obviously there will be some mistakes so 22 000 may be said as 21 900 and 50 and 150 can be said as 50 can be said as let's say 52 000 okay and 150 can be said as let's say two hundred thousand based on this first model first decision tree that it is creating which I am calling a system so there will be some differences between actual and predicted and from this there will be a residual coming residual means errors right residual means errors okay so what will be the errors 21 900 minus 22 000 right so it will be for example I can say a hundred actual minus predicted it is minus two thousand and it is minus minus 50 000 because we have put okay so this is the errors these are the actual values and the first model what it predicts right those are the errors from the first model OKAY twenty two thousand minus twenty one nine hundred is one hundred and so on and so forth now these are the initial weights okay so what will happen in the next model when the M2 is fitted right these initial weights will be changed and more preference will be given to the observations where these residuals are more okay I am repeating one more time guys M1 will predict this and then residuals or errors will come when the M2 is trained right then the weights will not be same for all these three records rather weight will be increased for this because you are getting more errors here and weight will be decreased for this because you are getting less error here okay and so on and so forth M2 will come compute create the residual then again weights will be adjusted M3 will come predict residual will be calculated weights will be adjusted and finally what you will get is a combination of what will be your final model your final model will be a combination of base model I am calling it the first model okay plus M1 plus M2 plus M3 plus so on and so forth remember this this is not a mathematical equation this is just indicative equation I am giving you okay if you want to understand more mathematics behind it please go ahead and click on the link I'm giving you in the description okay and all these things will not have equal say in the final output their 
say also will be different in the final output for example in random Forest you saw all the models have equal C in the final output we are dividing by 500 okay but here all these models will not have equal say they will have an equal say okay let's move ahead to another what is the pros and cons for this model again this model will give you a may give you a better result than most of the models because it is adapting to the changes but if you have a larger data side it may it may need more resources to train and also it is a one kind of Black Box model some kind of Black Box model means you don't have much explanation of what is going on inside apart from some hyper parameters okay let's move ahead to the last algorithm integration category known as gradient boost okay what is the last algorithm integration category gradient boost remember guys all these algorithms that I'm explaining you I have not taken anything that is used less all are used more only okay so I will take a simple data age salary is 21 salary let's say 20K is 40 salaries let's say 42k is 58 salary is let's say 60k this is your input data and you want to run a gradient boost on this what will happen is understand guys this is again a sequential learning not a parallel learning okay so there will be a base prediction for all these data base prediction okay base prediction what is the base prediction guys base prediction is nothing but it's a kind of dumb model it will assume that for all these guys it will be a average of you know all these three records so what is the average of this uh 80 plus 42. 80 plus 42 divided by 3 right so 2 1 1 2. right let's say assume for Simplicity this is 36k okay so the base prediction will be put here 36k 36k 36k one is the base prediction comes then there will be a residual computed okay residual will be the difference between actual and predicted values whatever these numbers are fine now comes the interesting part how gradient boost is different from Ada boost or other algorithms so what gradient boost will do is it will try to fit a model on this residual okay it will try to fit a model on this residual and try to minimize these residuals so that will be called as a base model okay and then there will be next model you can call it residual model one okay and then there will be a next model you can call it residual model 2 and so on and so forth okay so what will happen is residuals will be computed and then whatever the residual comes based on that base prediction will be updated so for example let's say your residual here is how much 20 minus 36 minus 16 is your residual right so this will act as a independent column and this residual will act as a Target column and then let's say in the prediction this minus 16 is is comes as let's say minus 10. 
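A minimal sketch of this residual-fitting loop, assuming scikit-learn stumps and the toy numbers above. The exact mean computed in the code will differ slightly from the rounded 36k used in the narration, but the mechanics are the same:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data from the example: age -> salary (in thousands).
X = np.array([[21.0], [40.0], [58.0]])
y = np.array([20.0, 42.0, 60.0])

learning_rate = 0.1
prediction = np.full_like(y, y.mean())        # base prediction: mean of the target

models = []
for _ in range(50):                           # sequential rounds of boosting
    residual = y - prediction                 # errors of the current ensemble
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
    models.append(stump)
    # Each new model nudges the base prediction toward the residuals it was fit on.
    prediction = prediction + learning_rate * stump.predict(X)

print(prediction)   # moves close to [20, 42, 60] after enough rounds
```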
so what will happen is this base prediction will get updated by this this base prediction will get updated again it's a complicated model if you want to understand more details there are links in the description please click on that it will be very clear to you okay so what will happen base model plus residual model 1 plus residual model 2 so on and so forth and there will be some parameters which will assign weight to all these models so as I say all these models will not have equal vote in the final output there will be a different votes in this fine so this is about gradient boost one of the famous algorithm for winning kaggle competitions and most of the things so gradient boost and there is another variant of gradient boost known as xgb extreme gradient boost please go ahead and read about this algorithm guys I am not covering because there is a slight difference between gradient boost and sgb you can read about that as well fine let's move ahead to the second category of algorithms known as classification algorithms so in classification algorithms the first algorithm that I am going to cover is logistic regression now very very important guys please pay attention here and try to understand how logistic regression is going to work for any given scenario it's a mathematical model hence it is important for you to understand okay suppose this is an employee data and you have 21 22k whether the employee leaves the organization or does not leave the organization just I am saying 1 0 okay and then 40 year guy makes let's say 42k leave 0 no 58 year guy makes let's say 60k just for example leaves know one so this is a classification problem where we are trying to predict whether a employee will leave the organization or does not leave the organization the last column that you see is your target column the last column that you see is your target column this type of problem is called a classification problem because what this what the objective of this model is tomorrow I give you age of the employee for example 31 salary of the employee for example 34k and I asked to the model hey Will the guy leave or not leave the organization okay so this is a classification problem how logistic regression will take this problem is we have we have to understand some mathematical Concepts here so if you see here the target column is 1 0 only so that is either one or zero one or zero okay so which means that Y which is our Target can be understand this is very important concept guys can be either 0 or 1 it cannot be anything else your target cannot be anything else apart from 0 or 1 but your age and salary can take any real number X can be any value between minus infinity to plus infinity right so X can be any value between minus infinity 2 plus infinity y can be only 0 or 1 okay so what we have to understand here is we have to somehow create a relation that will enable us to predict y given X okay the problem here is on the left hand side we have minus infinity to plus infinity range that is X range okay so I will write here x x means independent features on the right hand side your values can be only 0 to 1 0 or 1 not 0 to 1 okay so what we do is we do not directly predict y rather we predict something else what is that something else that we predict so in place of predicting y we predict probabilities okay probabilities of an observation falling in y probabilities l i t i e s Okay so what we will do is we will predict probabilities then the range will be 0 to 1 as you know probability can take the range 
The next algorithm in the classification category, a simple one I want to cover here, is known as K nearest neighbors, or KNN. It's a pretty simple algorithm. Suppose on this same data you want to build a KNN model; since I already have the data here, I will explain on it. What happens is that the data gets plotted on a plane. It is three-dimensional data, so you could add one more axis for salary, but two axes are enough to explain the idea: age on one axis and salary on the other. Of the three employees, the 21-year-old on 22K falls here, the 40-year-old falls here, and the 58-year-old falls somewhere here. What K nearest neighbors does is allocate neighbors to each of these individual observations: call them observation one, observation two and observation three. Observation one has no close neighbors, but two is a neighbor of three and three is a neighbor of two. Now, tomorrow some prediction request comes for, say, age 50, and I will take salary also, because salary is in the data; let's say the salary is 61K. What KNN does is try to see where this new point, age 50 and salary 61K, would fall and who its nearest neighbors are. Suppose the new point lands here; then this is its first neighbor and this is its second neighbor. KNN then simply takes the mode of the neighbors' results. Here the two neighbors have labels 0 and 1; in this tiny example there is no clear mode, but obviously if you take a larger data set there will be one. For example, suppose there are 30 neighboring records, out of which 20 are 1 and 10 are 0: the prediction for this new point will be whatever is the maximum, the mode, so 1. So, as I told you, KNN is a pretty simple algorithm: it plots your data, you tell it how many neighbors you want to use for a new observation, it finds those nearest neighbors, and it predicts based on them. Nothing complex in that, which is why I covered it quickly on that one slide.
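A minimal sketch of this neighbor-voting idea, using the toy employee rows from the example; the choice of k, the plain NumPy implementation, and the majority-vote tie handling are illustrative assumptions, not details from the video.

import numpy as np

# toy training data: (age, salary in thousands) -> leaves (1) or stays (0)
X_train = np.array([[21, 22], [40, 42], [58, 60]], dtype=float)
y_train = np.array([1, 0, 1])

def knn_predict(x_new, k=2):
    # Euclidean distance from the new point to every training point
    dists = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]        # indices of the k closest neighbors
    counts = np.bincount(y_train[nearest]) # count the neighbors' labels
    return int(np.argmax(counts))          # majority vote (use an odd k to avoid ties)

print(knn_predict(np.array([50.0, 61.0]), k=2))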
Now let's try to understand another classification technique, known as support vector machines, or SVMs. What an SVM does is plot your data on whatever axes you have; suppose age is one axis and salary is the other, and this time I will take a few more data points, some data points here and some more data points there. The SVM then tries to create something known as a decision boundary. How is this different from the regression-style models? There, a pure mathematical equation is involved; here, there is the concept of something known as a hyperplane. For example, if I draw a line between the two groups, you can think of all the black points as employees who leave, target 1, and all the blue crosses as employees who do not leave, target 0. Suppose your data looks like this: it is pretty simple and well separated, so the SVM plots what is called, in the language of SVMs, a decision boundary, and in this case the boundary can be as simple as a straight line. But in most real world scenarios the decision boundary cannot be as simple as this: there will be some black dots over here, some black circles there, and some blue crosses on this side, and in that case a straight line is not doing justice. The decision boundary needs to change, and that is where the concepts of hyperplanes and kernels come in, two very important concepts in SVM, guys, if you want to explore more. When your data becomes complex, a simple decision boundary cannot separate it well, so you need a more complex boundary, and that is exactly what hyperplanes and kernels give you. But just to give you an idea of how an SVM works: it creates a decision boundary, and tomorrow when any new prediction request comes, for example somebody asks whether a person with age 50 and salary 60K will leave or not, the SVM model checks which side of the decision boundary this person falls on. If the point falls on one side, the model says does not leave; if it falls on the other side, it says leaves. So for SVM, remember the concepts of decision boundaries, hyperplanes, kernels, and kernel tricks. With that, we have covered three algorithms from the classification category and five from the regression category.
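For illustration only, here is roughly what that looks like with scikit-learn's SVC. The extra data points, the RBF kernel, and the value of C are assumptions added so the sketch runs; they are not details from the video.

import numpy as np
from sklearn.svm import SVC

# toy data: (age, salary in thousands) -> leaves (1) or stays (0); purely illustrative
X = np.array([[21, 20], [25, 24], [40, 42], [45, 48], [58, 60], [60, 65]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 1])

# a linear kernel gives a straight-line decision boundary; an RBF kernel lets the
# boundary bend when the classes are not linearly separable (the "kernel trick")
model = SVC(kernel="rbf", C=1.0)
model.fit(X, y)

# the prediction is simply which side of the learned boundary the new point falls on
print(model.predict([[50, 60]]))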
Now let's go ahead and try to see some unsupervised learning problems. What is the meaning of unsupervised learning? Until now we were having a target column, but in unsupervised learning we may not have a target column. Suppose for the same employee data we have age and salary, and somebody comes to you and asks: hey, can you tell me whether there are different buckets of employees existing in my organization? Different buckets means, for example, some people with less age and more salary and some people with more age and less salary; are there such groups? You solve that kind of problem by using something known as clustering, or segmentation. So suppose the task in hand is this: here there are only three records, but in a real world scenario there can be many more. What I am interested in knowing is whether there are natural clusters in my organization. This is my organization's data; on one axis I have age, on the other axis I have salary, and I have multiple data points, only three here, but I am plotting more just for demonstration. There is nothing to predict; the employer is simply interested in knowing whether there are buckets, meaning whether a few employees are close to each other in terms of their characteristics. For example, these employees are close together, you can call that bucket one; these employees are close together, you can call that bucket two, or segment two. How will this be implemented? One technique for this bucketing is k-means clustering; there can be other techniques for segmentation or bucketing as well, but k-means clustering is one of them. In this technique the distance between the various employees is computed. For example, this is your employee one and this is your employee two; suppose I ask you how similar employee one is to employee two. There are different similarity metrics you can compute, for example Euclidean distance, Manhattan distance, cosine similarity, etc.; I have a detailed video on these as well, which I will link in the description. But take a simple one: the Euclidean distance between E1 and E2 is the square root of (21 minus 40) squared plus (20K minus 42K) squared; over all the dimensions you take the difference, square it, add it up, and take the square root, and whatever number you get is the distance. Suppose the Euclidean distance between E1 and E2 is small and the distance between E1 and E3 is large; in that case you say E1 and E2 are closer to each other. In a similar way you keep finding the employees that are closer to each other and call one such group one bucket, and similarly another group another bucket. Remember, I have explained this in simple terms, but there is a very important concept in k-means known as the centroid, so please go ahead and watch the detailed Unfold Data Science video on k-means clustering; you will understand all the details of how the centroid is defined and how this algorithm works at a mathematical level. I will link that video; please ensure you watch it. So this is about k-means clustering.
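A small sketch of the distance computation and the bucketing, assuming scikit-learn's KMeans and some made-up extra employee rows; the video only describes this verbally, so all of the specifics below are illustrative.

import numpy as np
from sklearn.cluster import KMeans

# toy employee data: (age, salary in thousands); note there is no target column here
X = np.array([[21, 20], [23, 22], [40, 42], [42, 45], [58, 60], [60, 62]], dtype=float)

# Euclidean distance between employee 1 and employee 3, exactly as in the formula above
e1, e3 = X[0], X[2]
print(np.sqrt(((e1 - e3) ** 2).sum()))

# k-means picks k centroids, assigns every point to its nearest centroid,
# moves the centroids, and repeats until the buckets stop changing
km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(X)
print(labels)                 # bucket / segment id for each employee
print(km.cluster_centers_)    # the centroid of each bucket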
Now, last but not least, guys, you might have seen on Amazon and Flipkart that different products are recommended to you; for example, if you buy a laptop it will tell you, hey, go ahead and buy this laptop bag as well. This is nothing but a recommendation. On Netflix, if you watch, let's say, one action movie, say Mission Impossible, it will go and recommend you the Jack Ryan series, maybe. This is called a recommendation system running in the background. So how does this system work? One simple yet powerful technique for recommender systems is known as collaborative filtering. What collaborative filtering does is take users and items; try to understand this simple setup, it is pretty easy to follow. The users can be Aman, John, and Doe. The items can be Mission Impossible, Jack Ryan, any movie from the James Bond series, Spider-Man, and a comedy movie, for example Home Alone. Which movies has Aman watched? Mission Impossible he has watched, Jack Ryan he has watched, but the James Bond movie he has not watched, so put a zero there, and the Spider-Man movie he has not watched either. There is another guy, John, who has watched Mission Impossible, Jack Ryan, the James Bond movie, and the Spider-Man movie as well. And there is another guy, Doe, who has not watched any of these movies but has watched Home Alone, the comedy movie. Which users are similar to which users will be computed based on one of the user similarity metrics; I told you about cosine similarity, and it can also be different kinds of distance metrics. As you can see from common sense as well: Aman watches action movies, and John also watches action movies, Mission Impossible and Jack Ryan, but Aman has not watched the James Bond movie or the Spider-Man movie. Since Aman and John are similar to each other, both their tastes are similar, so go ahead and recommend to Aman the movies that John has watched but Aman has not. So what recommendations go to Aman? The James Bond movie and the Spider-Man movie. Now imagine this as a large matrix of many users and many items: the system will see which users' tastes are similar to each other, and then the user who has not watched a movie will be recommended the movies or series based on the similar users' watching history. This is a pretty simple but powerful technique known as collaborative filtering.
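Here is a tiny illustrative version of that user-item matrix and the similarity-based recommendation in NumPy. The 0/1 watch matrix mirrors the Aman/John/Doe example and cosine similarity is the metric mentioned above, but the exact implementation is my own sketch, not something from the video.

import numpy as np

# rows = users (Aman, John, Doe); columns = items
# (Mission Impossible, Jack Ryan, James Bond, Spider-Man, Home Alone)
# 1 = watched, 0 = not watched; purely illustrative
watched = np.array([
    [1, 1, 0, 0, 0],   # Aman
    [1, 1, 1, 1, 0],   # John
    [0, 0, 0, 0, 1],   # Doe
], dtype=float)

def cosine_sim(a, b):
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

aman, john, doe = watched
print(cosine_sim(aman, john), cosine_sim(aman, doe))   # Aman is far closer to John

# recommend to Aman the items his most similar user (John) watched but he has not
recommend = (john == 1) & (aman == 0)
print(np.where(recommend)[0])   # item indices 2 and 3: James Bond and Spider-Man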
So let's revise once, guys, what all we discussed. It was a long discussion, but very fruitful for you to revise a few fundamental concepts. For regression we discussed linear regression, decision tree, random forest, AdaBoost, and gradient boost. For classification I explained you logistic regression, how SVM works, and how KNN works. And I explained you two unsupervised techniques: k-means and collaborative filtering. I did not go into too much detail, because it is not possible to cover all the details of ten algorithms in a short time, but read this as a refresher and please go ahead and click the links for whichever algorithm you are more interested in learning; all the videos are there on Unfold Data Science. I request you guys, please press the like button if you like this video, and please press the subscribe button and the bell icon if you want me to create more videos like this. See you all in the next video, guys. Wherever you are, stay safe and take care.
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_Transfer_Learning_Meta_Learning_l_2022_I_Lecture_3.txt
Before we get started, a couple of logistical things. Homework Zero is due tonight at 11:59. Homework One has also been posted now and it will be due on Wednesday next week. We are also going to be posting a number of resources for your project today. First, we're going to be posting a number of project ideas. So if you're not sure what to work on and you want some ideas, that should be pretty helpful. We're also going to post some example projects from last year that could give you a little bit of a flavor of the kinds of things that have done, people have done for the class in the past. And then lastly, some people mentioned that they'd love some help connecting with others in the class that have similar interests, and so we're also going to be posting a form that allows you to put down your interests and whether or not you're looking for project collaborators. And we'll also post the responses to that form today, as well. And so it'll just help you connect with people who have similar interests. This is an optional form. It's more just to help you find people, if you're looking for people to work with. One other important note, as well, on logistics is that these days there are some pretty fancy AI based code completion tools like GitHub-- actually, sorry, that's a typo-- Co-Pilot. There's also a more recent one that's even open source. There aren't-- these are pretty new and so there aren't a lot of standard course policies around using these tools. We'd like to say that it's OK for you to use these kind of things for the project. But we'd say that it's not OK for using it for the assignments because the assignments, you should really be able to understand the code that you're writing rather than trying to ask an AI to do your homework for you. Cool. And then the last logistical item also, as well, is that your feedback is really important to helping us help you to make sure the class is a great experience for you. And so one thing that we're going to be doing is called high resolution feedback starting this week, and it will-- every week, a random subset of the class will be getting a form, a feedback form, where you can tell us how things are going and so forth, and we'll use this to improve the class. So not all of you will get this this week. It will be a different subset every week so that we're not putting a ton of burden on you to fill out surveys every single week but we can also still get feedback throughout the course. Great. And then lastly, we've also finalized the guest lectures for the end of the course. I'm really excited that we'll have Hanie and Percy give guest lectures. Hanie is at Google. She does a lot of really cool work on transfer learning and understanding deep learning, and Percy is here at Stanford and he does a lot of work on foundation models, natural language processing, and understanding emergent few-shot learning. So I'm excited to see those lectures at the end of the course. Great. So those are all the logistics. Any questions on all that? So a lot. All of this is on-- well, these things are on the course website. The project resources will be posted on Ed, so, yeah. Nothing is only here. Awesome. So last lecture on Wednesday last week, we talked about multi task learning and we defined what a task is as a set of data generating distributions from which a training set and a test set is sampled. The task also has a corresponding loss function. 
And we didn't explicitly cover this, but you can think of learning a task as essentially taking as input a data set and predicting a set of parameters. And those parameters would be the parameters of your model. We also talked about multi task learning, where we're going to be training a neural network that's conditioned on a description of the task, z, and is going to be making predictions for inputs given that descriptor of the task. We talked about how basically the choice of conditioning on z in different ways affects how the parameters are shared and different design choices with respect to that choice. So if you observe negative transfer, you might consider sharing less information and designing architectures and designing ways to condition on z that affect that. And likewise, if you observe overfitting, you may want to try sharing more and have more shared parameters. Lastly, we also talked about this objective function for multi task learning and how if you normalize your labels, then just adding up the losses and optimizing like that is a great choice. But you may also want to choose the task weightings in a way that affects the prioritization of the tasks. Great. So that's a brief recap of Wednesday's lecture. Now the plan for today is we're going to talk about transfer learning and the problem formulation of that, and then we're going to actually start talking about meta learning and the problem formulation and the way that we can think about what meta learning actually is. This will actually get into the start of Homework One and the lecture on Wednesday will kind of cover the rest of the content that's needed for completing Homework One. Cool. And then from there, the cool things that we'll cover today in terms of learning goals are thinking about how you can transfer things from one task to another task, what does it mean for multiple tasks to have some sort of shared structure, which is this somewhat nebulous notion that we've been talking about for the last week, and also what is meta learning. Awesome. So let's get started by talking about transfer learning. So in lecture before, we talked about trying to solve multiple tasks at once, and in transfer learning, things are a little bit different. So in transfer learning, our goal is typically to solve a particular target task after having previously solved a source task or a set of source tasks. And the goal here is, when we try to solve this target task, we want to transfer some of what was learned on task a when trying to solve task b. And a common assumption here is that typically you can't access the data from task a when you're trying to solve task b. And so you basically want to condense all your information or knowledge about task a into some parameters and then use that condensed knowledge when trying to address task b. Now it's worth mentioning here that transfer learning-- you can think of it as a valid solution to multi task learning, because if you want to learn two tasks, then you can learn task one and then transfer that to more effectively learn task two. That will give you a solution to task one and task two. However, because of this pretty common assumption, multi task learning is not usually thought of as a valid solution to transfer learning because you can't access the data for one of the tasks during the transfer process. Don't access the data sets of the two tasks at the same time. Cool. So from there, I have a question for you. 
So now that I've introduced these two problem statements, I'm curious if you have some ideas for what kinds of problems you might run into where transfer learning might make a lot of sense and might make more sense than running multi task learning. Yeah. Your source has way more data than your chart. Yeah. So if your source has a ton of data that you don't want to have to keep this around and retrain on it when you're solving task b, you want to condense that knowledge down and try to transfer it. So that's one setting where you might want to use transfer learning. Other scenarios? Yeah. There are certain tasks may not have a lot of data associated with it. So you want to learn from previous tasks. Yeah. So if you don't have a lot of tasks, a lot of data for your target task, then something like transfer learning might make sense there. I also think that something like multi task learning could probably also be applicable to that sort of setting, as well. But it is a setting where transfer learning is often, is often used. Yeah. Would you mean that task a and task b have some sort of co-relationships? For example, once you locate [INAUDIBLE] maybe first you need to recognize or identify, so then-- because patients can be task a, their location can be task b, something like that. Yeah. So if the tasks have a lot of shared structure. That may also be a good scenario to use transfer learning and also a scenario where multi task learning might apply. Yeah. [INAUDIBLE] so you don't have to [INAUDIBLE].. Yeah, exactly. So have you already learned task a and the weights, you can just download from the internet. Maybe you haven't trained it yourself, maybe someone else did, and they put their weights on the internet, then you don't even actually have to solve task a. You can just take their weights, take their solution, and use that for task b. And so that's another scenario where it's very commonly used. One more. Maybe you're deploying this into an environment where you don't have the capacity for very much once you're there but only then will you understand what task b really is. Yeah. Exactly. So you might be in a scenario where you don't know what all the tasks are up front. You're in a scenario where maybe you want to very quickly adapt to a new task. Maybe you're trying to adapt to-- like on a cell phone, for example-- to a particular user. In that sort of scenario, something like transfer learning makes a lot of sense. Cool. So those are, like-- yeah, lots of really, really great examples. I had a couple examples that I put on the slide. The first was mentioned, if you have a really large source data set, you don't want to have to retain this and retrain on it with multi task learning. And then the second case that I mentioned is you may actually have a scenario where you don't actually care about solving both of the tasks simultaneously. For example, if ultimately your goal is to kind of deploy a model on different people's phones, you don't need to have a model that works for everyone at the same time. You just need to be able to adapt the model to a user at any particular point in time. Yeah. [INAUDIBLE] we cannot usually [INAUDIBLE] --weights all from the model A? Yeah. Typically what you will have about task a is something like the weights or something that you learned. It could be more than the weights. Maybe you have some information about the optimizer, for example, that maybe you have stored. But typically, it's the weights that you have trained on task a. Cool. 
So now that we've talked about the problem set up, we'll talk about fine tuning, which is basically the go to approach for transfer learning. And the way that fine tuning works is we'll take the weights that were trained on task a-- I'll denote this with theta-- and we will initialize a neural network with those weights and then run gradient descent on the target task initialized at those weights. And so if D train is the training data for your new task, then you'll evaluate the gradient, apply that gradient at the-- that work initialized at theta. This is showing just one step of gradient descent, but in practice, you would typically do this for a number of gradient steps. And then the result of this process is you'll get a set of parameters phi that are hopefully actually much better at task b than if you were to randomly initialize theta. And in fact, if you actually compare using randomly initialized parameters versus parameters preinitialized with a data set that is effective, is useful, for the target task, we can see that we get much better performance. So in particular, here's an example where a neural network is pretrained on ImageNet, or it is initialized randomly, and the target data set is these two data sets, PASCAL and SUN, which are both image recognition data sets. And we see that we're able to do remarkably better if we pretrain on ImageNet compared to pretraining from scratch on the order of like 16% to 17%. So this is really cool. Essentially what this is doing is that it's taking all the rich information that exists in the ImageNet data set, leveraging that in the context of transferring to these new tasks. Yeah. [INAUDIBLE] transfer doesn't make sense between two [INAUDIBLE] So, like, how do I look at the [INAUDIBLE] Yeah. So the question is, is there some criterion that could tell us if transfer learning will work, if it will be helpful. And, in general, kind of similar to the multi task learning scenario where it's difficult to tell if training on two tasks together will be helpful, it's also very difficult to just develop some criterion. And so, in general, there are some kind of general common wisdom for getting these things to work well. And in general, for example, if intuitively the tasks seem related, then it can be a good choice. But in practice, there isn't any hard rule that will tell us if it will work or not. And so I guess I should say that what I just said is dissatisfying to me. I think it would be really awesome if we had something that could tell us basically whether this would work. And we'll talk a little bit about some recent research in some of the coming slides. But also investigating this sort of thing for like a course project I think could be quite interesting. Yeah. I assume in this case, you're retraining all the layers? Yeah. So in this case, we're retraining the entire parameter vector. I guess I'll try to finish the slide and then we can get to more questions. So kind of related to, like, when will this work, when will this not work, there's a number of choices that you can select for other pretrained parameters. Things like ImageNet classification or models trained on a very large language corpora are common choices in computer vision NLP. And in general, if you can pretrain on a diverse data set that covers the kinds of data that you'll be seeing during fine tuning, this, that's kind of generally a good choice. It doesn't even need to necessarily cover the distribution of what you'll see at test time. 
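Stepping back to the update rule at the start of this discussion, here is a minimal PyTorch sketch of fine tuning. The architecture, the synthetic target-task data, the learning rate, and the number of steps are all placeholder assumptions; in practice you would load real pretrained weights for theta and iterate over your actual D train for task b.

import torch
import torch.nn as nn

# pretend these are the weights learned on task a (in practice you would load them)
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
theta = {k: v.clone() for k, v in model.state_dict().items()}
model.load_state_dict(theta)           # initialize at theta from the source task

# a tiny synthetic "D_train" for task b, just so the sketch runs end to end
x_train = torch.randn(64, 128)
y_train = torch.randint(0, 10, (64,))

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                # several gradient steps, not just the one shown
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()                   # phi <- theta - alpha * grad_theta L(theta, D_train)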
So, for example, in ImageNet classification, people have found that you actually get very general purpose visual features from those. And you can take those visual features and actually apply it to-- try to transfer them to satellite images or try to transfer them to medical images, and those visual features are actually still pretty useful, even though those images are out of distribution compared to what ImageNet images look like. Yeah. Don't you think that this is doing something like creating a dictionary of latent variables [INAUDIBLE] approximate dictionary and then fine tuning is basically updating the dictionary a little bit? Yeah. So the question is can we think of this as, the pretraining process, as learning a dictionary of latent variables and then the fine tuning process is essentially as updating the-- [INAUDIBLE] I guess I'm not fully sure what fixing it towards the task exactly means. I mean, the-- I think that intuitively there's-- I guess, I think there's a lot of intuitive explanations for what it may be doing. One intuitive explanation is that it's giving you good features. So it's giving you a good representation of images, for example. It gives you a good way to look at images or a good way to read text. So that's one explanation, and once you have, once you kind of roughly know how to see at a course level, then that makes it much easier to be able to recognize other images. And then a completely separate explanation might be if you think about it from the standpoint of optimization, you have this really complex optimization landscape that is non convex, then you can possibly think of this as trying to put you in the right basin of that optimization landscape. And once you're in the right basin, then if you run gradient descent from that basin, you'll get to a better solution than if you started off kind of at a random spot in that optimization or in that landscape. Yeah. So can this be related to the idea where you have a simple model within that model and then initialize a more complex model with weights learned from a simpler model? Or does this fall outside that paradigm? So starting with a simple model and going towards a more complex model. So in general, something like that I would refer to as perhaps a form of curriculum learning where you move from kind of simpler tasks towards more and more difficult tasks. And things like transfer learning can be applied to that, but that doesn't always have to be the case. You don't have to start with a simple task and move to a more complex task. In fact, oftentimes you actually move from a more complex task to a simpler one, like ImageNet classification is actually a really, really hard task to learn from scratch. And then you might move to something that's a little bit simpler, like maybe you want to be able to classify between cats and dogs or something like that. Yeah. So if you have a large amount of data sets, does it make sense to pretrain a model from scratch? Or do you think that fine tuning would generally maybe still provide some benefits? Yeah. So if you have a large enough data set, does pretraining make sense or can you just train from scratch? If you do have a large data set, then it's, from a performance standpoint, it may be that you can already do quite well training from scratch on that large data set. There are still some potential benefits. One very obvious potential benefit is that you may not need as much compute to get to the solution. 
So even if you have a large data set, if you can initialize with the pretrained model, then you may need many fewer gradient steps to-- and many fewer compute cycles-- to get to a good solution on that task compared to if you were to train without any prior knowledge. Cool. So kind of the common wisdom with these pretrained models is generally trying to pretrain on diverse data sets. There's also some unsupervised learning techniques that you can use for pretraining, and we'll cover those in some of the upcoming lectures. Yeah. [INAUDIBLE] considered that the task, the new task, is related to the old task or the new data set is related to the large repository? I mean, is it, is the relation based on the type of the data or the type of the task? Yeah. So the question was-- like, generally we'll-- this may be part of the answer, too-- but generally we want the source task and target task to be similar, to be related to one another. And you were asking if they need to be related in terms of what task it is versus what the data is. And in reality, it's both-- it's both what the data looks like and what you're trying to do with that data-- that will affect how successful transfer is. I'm going to move on a little bit to at least get to the rest of this slide. So, but all these are great questions. Now one of the things that was mentioned before is that one of the things that's really awesome about transfer learning is it's not just that you can do better by leveraging this prior knowledge, but that oftentimes someone has already distilled that prior knowledge into a model and put that model on the internet so that you don't even have to train on ImageNet or train on some language corpora. And this actually significantly improves, I think, the accessibility of things like deep learning because it means that anyone can download a model and use it on their problem, on their data set, as long as it's at least somewhat sufficiently similar to these common pretrained models. Cool. Now, in this example, we-- this is kind of the-- I think one of the most common versions of fine tuning where you fine tune the entire network. But in practice there's actually a lot of different design choices that come up when thinking about fine tuning. And a lot of these design choices kind of revolve around thinking about how to not destroy the prior knowledge and prior information in your model and balance that prior knowledge with the knowledge from your new data set. And in particular, if you have a neural network that has some layers-- say, for example, maybe you're trying to pretrain on ImageNet and now you're trying to fine tune on a task like classifying between cats and dogs-- or maybe-- I think cats and dogs are probably in the ImageNet data set, so we can maybe pick something more obscure, like, I don't know, classifying between whiteboard markers and whiteboard erasers or something like that. In that case, you have a network pretrained on ImageNet. And the output image has 1,000 classes and so the output layer is going to be 1,000 dimensional. And your target task has two possible labels, one for whiteboard markers and one for whiteboard erasers. And so what you need to do, in that case, is to essentially kind of reinitialize the last layer, because you can't-- or at least use some part of this. You can't just directly use this last thing. You could either use two of the classes and then throw away the last half, or just reinitialize a new network on top of these features right here. 
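A minimal sketch of that head-swapping step, assuming a torchvision ResNet-18 pretrained on ImageNet and a two-class target task; the exact argument for loading pretrained weights differs across torchvision versions, so treat the details as illustrative.

import torch.nn as nn
from torchvision import models

# load a network pretrained on ImageNet (its head is a 1000-way linear layer)
# note: older torchvision versions use models.resnet18(pretrained=True) instead
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# replace the 1000-class head with a freshly initialized 2-class head
# (e.g. whiteboard markers vs. whiteboard erasers); everything before this layer
# keeps its pretrained ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 2)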
Now, one thing that comes up is let's say you reinitialize to instead have a last layer of two, a much smaller last layer. And these are now randomly initialized weights whereas everything before this is weights pretrained on ImageNet. Now in that case, if you back propagate your gradients through the network, if this is a randomly initialized weight matrix, then you're essentially going to be multiplying your gradients by some random numbers right here and then applying those gradients to the rest of the network. And so then you're going to be hitting these weight matrices with numbers that have a lot more randomness in them, and that might actually destroy a lot of the really great information in these layers. And so one common practice-- or there's a few different common practices that people do. One is to fine tune with a smaller learning rate, especially for the earlier layers of the network, because you don't want these features to be destroyed by the gradients coming in from the network. Smaller learning rate for earlier layers. Sometimes also you might freeze these earlier layers and only train the latter parts of the network or only train-- start by only training the gradual part, the last parts, of the network and gradually unfreeze from the back of the network. Reinitialize the last layer, which we talked about. So I guess these four things are fairly common design choices when fine tuning. In terms of how to pick between those design choices, you can kind of search over those design choices and hyper parameters by running cross validation on your target task. And I guess the last thing worth mentioning here is that the architecture that you choose to fine tune will also affect the transfer performance. And so, in particular, residual networks which have residual connections between layers, these networks are often more effective for fine tuning because you get kind of a bit of a highway throughout the network. And so it's easier to get gradients to all of the layers rather than having to use the-- well-- rather than having to kind of go one by one by layer. OK. So now that we've gone over the basics of fine tuning, are there any more questions? Yeah. Would these design choices depend on the complexity of that second transfer learning task? Let's say just, you know, cats versus dogs. Maybe all you would need to do is a simpler change for that last layer, right? But if it's a really complicated [INAUDIBLE].. Yeah, absolutely. So the question was, like, if the target task is simple versus complex, does that affect how you choose to fine tune the network? And, yeah. If you have a really simple task, then you may just only need to train this last layer. You may not even need to change the features at all, especially if that task is very related to the source task. Whereas if it's very complex or very different, you may want to actually kind of reinitialize more than one layer on top of the features or kind of fine tune the entire network or something like that. Yeah. If you don't have much [INAUDIBLE],, how do you prevent overfitting on that data? Like, is there a principled way of doing [INAUDIBLE]?? Yeah. So if we have a small amount of data, how do we prevent overfitting? The most common thing to do is early stopping, which is also done in standard machine learning, which is instead of fine tuning this for a very long time, fine tune it for fewer gradient steps, watch your validation loss on your target task, and stop fine tuning once you reach a good solution. 
You can also fine tune fewer layers, as well. So if you fine tune only the last layer, then you'll probably overfit less than if you fine tune the whole network. Yeah. Just out of curiosity, if you took, like, a pretrained model, like, ImageNet for example, and would just fine tune it on more exotic sort of, more, like, more specific instantiations of certain objects or categories, would the network still sort of show an understanding of, like, semantic distinctions between different patterns? You know, maybe two specific instantiations of cat would be closer to each other by the space than, like, you know, a specific kind of dog? Yeah. So the question is if you fine tune on a more fine grained classification task, like maybe classifying between different species of birds or something like that, then would the features still-- like, would you get a good feature space for that task such that similar species of birds are grouped together? Is that what you're asking? Yeah. Yeah. So if you fine tune only the last layer of the network, that won't affect the features in any way. And so you'll just get the original ImageNet features. But if you do fine tune the entire network, then that should also give you better features for that target task. Of course, if you have a really small target task, then the features-- you may not benefit-- well, you may overfit to that, and so you may not get good features of that or it may just be beneficial not to fine tune for very long because you have a small data set. So it will depend a bit on your data set size. But if you fine tune the whole thing, it will adjust the features and it should learn a good space for that, a good set of features. Yeah. [INAUDIBLE] What is the notation for the arrow-- is it a new loss function for the new task? Yeah. So the arrow on the left, it is kind of an assignment operation. So we're saying that we're defining theta-- we're defining phi to be the right hand side of that. You can sort of think of it as an equal sign, although it's more directional. Okay, so that means that we're literally moving the parameter from the mode of the first task to the mode of second task by using the new loss function on the new training data. Yeah. And do the same, [INAUDIBLE] of the second task. Yeah. OK, OK. Then in the case, in these, two models should have the same architecture. Yeah. Yeah. So when you fine tune, you're going to be-- if you fine tune the whole network-- well, in general, you should be using the same architecture. You can reinitialize the last layer or reinitialize parts of the network and change that part of the architecture. But, in general, especially the first layers will need to have the same architecture. OK. And in the second half, we don't put more layers, then how we initialize the parameter [INAUDIBLE] Yeah. So if you have more layers, then you'll need to randomly initialize, though there isn't a great way to initialize them. Yeah. Can we also use transfer learning for multi tasks just by separating the last few layers? Yeah. So you could also transfer to multiple different downstream tasks. So you could have multiple heads, for example. Or you could pass in the Z here. One thing that's a little bit tricky is if you want to pass in a Z earlier in the network or pass in some other input earlier into the network that wasn't there during pretraining, then you need to make sure that you kind of actually update this layer during fine tuning. But in general, something like that is definitely possible. Cool. 
Was there one more question in the back? Yeah. So if you have the source [INAUDIBLE] for the pretrain model, are there any methods to, like, we could do multi task learning for this particular fine-tuning [INAUDIBLE] the task we're training for? So you're asking if we have the source data available, is that helpful during the fine tuning process? So in general-- so you could just do multi task learning and train on both of the tasks and maybe up-weight the target task if you care more about that. That can be helpful if you don't want to forget the source task. It can be somewhat helpful to regularize when trying to solve the target task. Although, in practice, it depends a little bit on the scenario, and in practice actually just oftentimes fine tuning is actually often better than keeping it around. The other thing that you could do is you can regularize towards the initial parameters and that you can do actually without the source data, which is nice, and there are some works that have found benefits for that in some scenarios. Yeah. So if you're working in some sort of few-shot regime, is there-- has there ever been, has there been shown, like, any way to sort of do a full fine tuning that isn't, just, like explicitly worse than your evaluation? So you're asking in the setting where you have a very small amount of target data, is it ever better to fine tune the entire network rather than just a small thing on top? Like, just the final layer. Yeah. All-- in two slides, I'll get to something that I think will somewhat-- it won't fully address that, but it will show some scenarios that are better than just fine tuning the last layer. Yeah. OK. So in general, all the things on this slide are-- I would-- or all the things on the bottom, I would refer to as kind of common wisdom around fine tuning. And it's also worth mentioning that this common wisdom, I think, is changing and it's not fully set in stone. And so in particular, on these next two slides I want to cover two papers that maybe you might consider as old by machine learning standpoint, by machine learning terms, which is that they both came out last week. And the first paper is a paper by some folks at CMU, I believe, and they found that actually that unsupervised-- or generally pretraining-- doesn't necessarily need a really diverse data set, specifically unsupervised pretraining methods. And so they found that if you randomly initialize the network, you get 72% success on the target task. If you pretrain on a book corpus that's much larger, you get 81%. So we do see a benefit from pretraining here. But they also found that if you pretrain with this unsupervised objective on the fine tuning data set, you actually get 80.96% success or average accuracy, which is actually very close to the 81.3%. And this, I think, breaks the common wisdom because the common wisdom is that typically we need a very diverse pretraining data set, and this suggests that when you have an unsupervised pretraining objective, you may not actually need a really diverse pretraining objective-- or pretraining data set. You can possibly even just pretrain on the target data set itself. So this is kind of breaking the common knowledge, or the common wisdom, here. If you have a supervised pretraining task, then this won't work out, because you-- like running supervised learning on your fine tuning data set and then running fine tuning on your fine tuning data set, those are going to be the same exact thing. 
So this is-- I would expect to only hold in the unsupervised pretraining case. But it's something that breaks the common wisdom and suggests that we don't have everything figured out, even when it comes to fine tuning. Yeah. [INAUDIBLE] This is only in stages versus can you-- with this-- you're asking if this would also hold in the multi task setting? [INAUDIBLE] I think that they only ran experiments in the pretraining and fine tuning phase. But you could read the paper to check. And because it came out last week, I don't think anyone has built on it yet. I should mention that this is averaged over a number of different-- they ran this on a number of different target tasks. This was all in the NLP domain. But there's actually another paper that came out actually a little bit before this that actually showed a similar result in computer vision tasks, as well. Now the second paper is actually a paper that was co-authored by Yoonho and others-- Yoonho is a TA in the class. He looks like this. And I want to give a little bit of the kind of the thought process behind the research, because I said I would try to mention that a little bit in the course. And the thought process was that there's a lot of scenarios where fine tuning only the last layer works really well. So that's great. But is there anything actually like that special about the last layer? You can maybe think that it's somewhat special and that it's kind of closest to the labels, in some sense. But in many ways, it's also just like any other layer in the neural network. And so his thought process was, well, maybe for, kind of-- if we're trying to fine tune to a pretty low level shift, maybe there are scenarios where the last layer wouldn't work better but actually maybe other layers in the network, like the first layer, might actually work better. And he actually already had something, some experiments setup to run fine tuning on image corruptions. And so he pretrained on the CIFAR 10 data set, which is an image classification data set, fine tuned it on CIFAR 10 C, which is a corrupted version of that data set where there are small, low level image corruptions applied to the data, and he found that if you fine tune the whole network, you get around 79.9% accuracy, and if you fine tune only the first layer of the network, you get a higher accuracy on the target task. And then from there, the thought process was, well, OK, if there are some scenarios where the first layer works, maybe there are scenarios where some, like, only fine tuning the middle layer works. And indeed, there are some scenarios where actually fine tuning a middle layer is actually better than fine tuning the whole network. And so this is another example of breaking the common wisdom, because the common wisdom is typically to fine tune from the back of the network. But there are actually scenarios where that's not the best option. Of course, these differences are still, they're kind of 2% to 4%. So full fine tuning still does work pretty well in practice. But it says that we don't have a full understanding of fine tuning. Yeah. I was just wondering how these layers, like, have different numbers of parameters, how that would be integrated in this sort of analysis? Or how do you, you know, correct for that? In preparing the events-- Yeah. So the question is that different layers have different numbers of parameters, and so it may be that, like, maybe that is accounting for some of the differences here. 
Maybe that has, like, a regularizing effect or something like that. In general, I cannot remember off the top of my head the numbers of parameters for each of these blocks. But the, they did run experiments fairly extensively on this. They weren't explicitly controlling for the number of parameters, but the results, I do think, suggested that it had more to do with where in the network you were fine tuning compared to the number of parameters. And I should also mention that for full fine tuning, the learning rate and the early stopping were both determined with cross validation. And so this is a pretty well tuned baseline. Yeah. So for some tasks like the ImageNet data set where we start off with a color image as the input and we now want a task where we just have black and white. It would make sense to just retrain the first layer because it's like a change in, like, the inputs. Would we still want to do, like, the fundamentally similar processing? Yeah. Is there, like, any good intuition for when we would choose, like, a middle layer and which middle layer to choose? Yeah. So the question is-- yeah. Like, what's some of the intuition here, especially with regard to the middle layers. And so, and some of Yoonho's intuition for trying this, also, was that you could sort of think of-- so there's sort of kind of a causal process underlying the process of going from a label to generating an image, and you can sort of think of neural networks as trying to reverse that causal process, where, like, when you are, for example, trying to generate-- I think, like, this one, you're trying to predict vegetables, for example, you may first go from whether it's a vegetable or not to something that's, like, what type of vegetable is it to, like, what is the position of those vegetables and the appearance of that vegetable and then down to, like, what do pixels look like for that kind of thing. And when you change some part of that causal process-- if you think of that as the entire chain, it could be that maybe if you change something in the middle of that chain, then fine tuning the middle parts of the network might be the best choice insofar as neural networks might be kind of reversing the causal process of image generation. So that was some of our intuition there, but it's also something that isn't, like, still fully sorted out. And I think that one of the things that makes this result interesting is that it is different for different kinds of shifts between source and target. Yeah. [INAUDIBLE] works better than full time. Would it also compare that it works better than just finding the last layer, as well? Yeah. Yeah. So in these cases, it was also-- well, in this case, last layer was best. But in the first two cases, these were also better than fine tuning just the last layer. Yeah. So [INAUDIBLE] different types of corruptions or does it-- even in degrees for some specific corruption because-- Yeah. [INAUDIBLE] Yeah. So the question is it helpful-- is the first layer good for all different kinds of corruptions. I believe it was better for almost all of the corruptions that we tried. I don't think-- I'm not sure if we tried all 30 of them. I think there's a lot of them. But I think that for, at least for most of the ones that we tried, it was the best. Yeah. Cool. OK. So now that we know that the common wisdom is-- well the common wisdom is OK, but it's still something that's kind of being developed. 
I do want to give you somewhat of a default, because if you do want to actually use fine tuning in practice, you don't want to have to navigate this entire space of design choices. And so despite some of the results that we've seen, I do think that if you want something that is pretty reliable, I think something like first training the last layer of the network and then training the whole thing is generally a really good place to start and generally works well in practice. And the reason for that is exactly what we talked about before, where you don't want-- you want to avoid destroying your early features and so you want to train this last layer, the last set of layers, first. And then fine tuning the whole thing usually is helpful in practice. Yeah. How do you know when to stop for your last layer and then go into [INAUDIBLE]? Yeah. So in terms of when to stop training the last layer, typically when you train the last layer, you typically don't overfit because it's a pretty small number of parameters. And so you can mostly just look at until you converge and then once you roughly converge, once you see the loss function not going down anymore, you can start fine tuning the whole layer. You can also look at when the validation loss starts to even, start to level out, as well. Cool. Yeah. [INAUDIBLE] applicable also to online learning? Will this be applicable to online learning? Yeah. I mean you could certainly kind of do this repeatedly, like reinitialize the last layer and fine tune that before fine tuning the whole thing and then kind of do that repeatedly. Yeah. Although, if are seeing kind of a gradual shift in an online learning setting, you may just want to keep on fine tuning the whole network. Cool. And then the last thing that we'll talk about with fine tuning is looking at what fine tuning performance looks like when you have varying amounts of target task data. And this is just one example. And so in particular, what this is looking at is the x-axis is the number of training examples in your target data set and the y-axis is the error, the validation error rate, on that target task. And so as we see, as we have more target task data, the better we do, as we would expect. The blue line is training from scratch. The green and orange line are different pretrained models. And one thing we note is that if we have only 100 target task data points, the performance isn't nearly as good as if we had a bit more than that. And in general, this is still doing pretty good. Like, in this example, we only have, like-- if we only have 1,000 examples, we still do really, really well. But it does start to get much worse if we have only 100 examples. And this is where things like meta learning can be useful. Cool. So now let's, with that in mind, let's transition to talking about meta learning. So how do we get from transfer learning to meta learning? In transfer learning, we talked about how we'll initialize the model and hope that initializing from there helps on the target task. The kind of key intuition behind meta learning is instead of hoping that it will help, what if we explicitly optimize for transferability. And what I mean by that is if we have not just one source task but a set of source tasks, can we optimize for the ability to quickly learn these tasks such that we can learn new tasks quickly, as well. So if we learn how to quickly learn a set of tasks already, that means that we should be able to also learn new tasks quickly. 
And so you can also think of this as-- if we think about learning as going from a data set to a set of parameters, and we want to be able to learn quickly, then we can think about essentially trying to optimize for this function so that we can learn well even with small data sets or even when we have a small compute budget. So that's the intuition behind meta learning. There's two different ways to view meta learning algorithms. The first is more of a mechanistic view and the second is more of a probabilistic view. Related to the idea of optimizing that learning process, you can-- the mechanistic view is that you can think about a deep neural network that takes as input a data set and makes predictions for new data points and you just want to optimize this deep neural network using this kind of form of metadata set and optimize it over those tasks so that if you give it a data set for a new task, it can give you parameters for that new task. So that's the more mechanistic view. That's how you might kind of implement one of these algorithms. The more probabilistic view is if we have a set of source tasks, we could try to extract the shared knowledge, the shared structure, from those tasks in a way that allows us to efficiently learn new tasks. And so then when we have a new task, we'll try to use that prior knowledge to infer the parameters for the target task. So with these two views in mind, I want to try to first kind of talk a little bit about this probabilistic view, which will help us think about what it means for tasks to share structure, and then we'll go back to the mechanistic view. Cool. So let's start with the probabilistic view, thinking about what Bayes would think of meta learning. And for this, it'd be helpful to go through the graphical model. So to start off, how many of you are familiar with Bayesian networks or equivalently directed graphical models? Raise your hand? OK. And how many of you are not familiar with Bayesian networks or directed graphical models? OK. It's about 50/50. So Bayes nets or graphical models, are covered, they're covered in CS 109. And so for people who did undergrad at Stanford, you probably have learned about them, although you may be a little bit rusty, so we'll kind of walk through it a little bit. So in-- I don't know if-- I think we'll call them directed or graphical models in general. They're often called Bayes nets as well. So in graphical models, random variables are denoted with circles. So if we have a random variable X, we put X in a circle, we may also have a separate random variable Y. So these are just two different random variables. And dependencies between random variables are represented with arrows. And so, for example, if you have a distribution P of Y given X, this arrow kind of represents this dependency. And if, for example, this is equal to P of Y, that means that there is actually no real dependency on X and so you wouldn't draw an arrow here. So actually, I'll leave that up. So now, the other thing about Bayes nets is that it tells you a little bit-- you can read off kind of whether or not two variables are independent. And so if you have-- maybe you have a Bayesian network that looks something like this. You can look at this and figure out if two variables are independent. So, for example, A and Y are not independent from each other because they have a line that connects them. And A and B are actually also not independent because they have a path that kind of goes from A to B. 
And so any set of variables that have a path between them are not independent. They have some dependency. Whereas if there's no path between them, then those two random variables are independent. So for example, A and D are independent from one another. Cool. So that's kind of a very-- basically all you need to know about graphical models, at least for now. So now I want to draw what I'll call the graphical model for single task learning. So in particular, say we have some parameters, some labels, and some inputs that we could think of having a graphical model of basically to predict Y, it's going to be a function of X and our parameters because you can-- we have this kind of relationship of F of Y given X and phi. So you can think of this as a graphical model for single task learning. I should also note that if you are thinking about causality, sometimes people might-- this arrow-- or the kind of relationship between X and Y, you could possibly, like, flip these arrows to be in different directions. But for the standpoint of this lecture, it will be helpful to consider this direction and we won't really be considering things from the kind of causality standpoint. Cool. And you can think of phi as essentially the parameters of Y given X. I mean, these are the parameters that we're going to be trying to infer in machine learning or in single task learning. Now, actually, OK, I lied. There's one more thing that's helpful to know about graphical models, and this actually may not have been covered in 109, which is what's referred to as plate notation. And in particular, if you have multiple data points-- so say this is just one input and one output, then it'd be nice to have something that could denote the entire data set, not just one input and one output. And so that's called plate notation, which is that instead of drawing all of the different data points, which would take me a long time, we'll instead kind of draw a plate around here. We'll have an i here that will denote that this plate is over i and this means that basically this makes copies of everything inside the plate, indexed by i. Cool. Any questions on this graphical model? Does this make sense? Cool. So now if we get into multi task learning and think about the graphical model there, first that means that we're going to have multiple tasks. And so we're going to have-- can index the tasks by j. And then we will have another plate around here, which will just mean that we have multiple tasks that go, that are indexed by j. So phi j is the parameters for task one, task two, task three, and so forth. And these parameters have some shared structure. And in particular, there is some dependency on these parameters phi theta. Yeah. [INAUDIBLE] phi? Why does Y only depend on phi? So Y depends on both phi and X. So there's an arrow from phi to Y and X to Y. [INAUDIBLE] is Y a prediction or our label? Y is-- This is where it gets a little bit messy. I guess Y-- so Y is the label and phi is the true parameters. Oh, OK. Yeah. Yeah. What is the-- is there a relation between i and j because the plate on j, like, goes over the plate on i? Yeah. So you can think of i is over data points, j is over tasks. And so this means that for every task, there is another, there is a set of data points. OK. Yeah. Yeah? [INAUDIBLE] because if you wanted to, like, the same data point for-- Yeah. So you may actually have something where X is outside of this plate. It's more like-- actually, no. So, yeah. It may not be true. 
And so this is the more general case where they're different but in practice-- and I'm actually not sure how I would draw it if I were to-- it's a little bit tricky to draw. Yeah. Yeah? Is theta in this case just like the kind of like the unifying model that is able to do all of these different tasks? So theta, theta is kind of an interesting thing. So theta is-- you can think of theta as this shared information between the tasks and it's only the stuff that's shared between the tasks. And I use the word latent here because latent means unobserved. So in particular, the data points X and Y are observed. And so in graphical models, if you shade something in, which is a little hard to do on whiteboards, that means that they're observed, whereas we don't observe the true parameters and we don't observe what the shared structure is. And so it's worth mentioning that if there is no dependency here, then that means the tasks don't share any structure, whereas if there is some dependency on the shared structure, then they do actually have some shared information between them. Yeah. [INAUDIBLE] samples that are being sampled from [INAUDIBLE] why do you need a plate around them? I'm just using this to denote the kind of individual data points in our training data set. [INAUDIBLE] That's a good question. Yeah. So you could alternatively think of X and Y as the random variable, like, the random variable X and the random variable Y. And in that case, you wouldn't need the plate here. You would still need this plate, but you wouldn't need this plate. Yeah. Yeah. So is this like the equivalent of a discriminative model, since you're taking X, like the X to Y is like a given. And, like, a more general case, is there also something else like some noise affecting the X and Y as input? Yeah. So there could be cases where there is other things that affect X and other things that affect Y. I'm leaving those out for simplicity. But, for example, Y may not be perfectly-- like there could be some noise that affects Y-- if you have label noise, for example. There could be other things that affect X. But, yeah. For simplicity, I'm leaving those out. Cool. So here I just kind of drew this, like, for the training data points. It's helpful in some cases to actually write it out in a way that separately represents the test data points. And I mention that because the test data points are not fully observed. The labels are not fully observed. If you have a set of training tasks, then you can observe the labels for the training tasks, but we'll go into that more in a future lecture. But now, let's get into a little bit about theta. So if you condition on the shared information on theta, then it's worth noting that the task parameters-- well, so first, the task parameters are not independent right now because there's kind of a path between them. But if you condition on theta, the task parameters become independent. And so what that means is that if you condition on theta, you'll actually have a lower entropy distribution over your task specific parameters, phi I, compared to if-- compared to just your kind of prior over phi i. And so what that means is that-- well, actually no. Maybe I'll ask you this. So, so if you can identify-- if you know the shared structure, if you can identify the shared structure, theta, then when should learning phi I be faster than if you didn't know that shared structure? People have thoughts? I can also walk through the slide a little bit again. 
So I guess the first step is that if we condition on this variable, then these become independent. And if you have information about theta, that lowers your entropy estimate of phi because the-- essentially you can kind of narrow down what the value of phi is once you have information about theta. And so from there, if we can identify information about theta, then with this dependency, learning phi should be faster because we have fewer bits to uncover from our training data points. Yeah. As long as you really understand the relationship between theta [INAUDIBLE], right? Yeah. So if you have information about theta and this dependency exists and you understand that dependency, then learning phi should be faster than if you didn't have that information. Exactly. Basically, you need fewer, less information about the data points to infer phi once you have information about theta. One other thought exercise that builds on this a little bit more is-- so we talked about how, if you have information about theta that tells us-- that lowers our entropy, that gives us more information about phi, now what if the entropy of phi, given theta, is zero? [INAUDIBLE] Yeah. We already have a model which can perform all the tasks. Yeah. So if, basically if your entropy goes to zero, then you actually fully know phi. You, like, you have full information about phi. And at that point, you can actually just fully solve the tasks. Yeah. And that means that you don't even need any additional data to solve the task. Cool. So in general, I think that this sort of Bayesian perspective I think is a useful framework for thinking about what it means for some tasks to share structure and specifically in the kind of the form of this variable here. And you can think about these kind of different mathematical relations as when we might expect basically how much shared structure is versus how much data you need to learn a given task. Cool. Two other exercises-- or, well, one exercise with two examples. So say that we have a set of sinusoid tasks. So task one is a sinusoid with an amplitude of, like, 5 and kind of a phase of pi, and then maybe task two is an amplitude of 1 and a phase of pi over 2 or something like that. And all of your tasks have different amplitudes in different phases. Then in that scenario, what information does theta contain? Yeah? Different amplitudes and phases? Not quite. So the amplitude and phase-- this is for task one, this is for task two. The question is what is kind of-- what is theta? What is the shared structure between them? Yeah. Like the fact that they're all sinusoidal? Yeah. Exactly. So everything but the amplitude and phase. So it kind of corresponds to the family of sinusoid functions that once we have that shared structure, we just need to infer these two values using our data set. One more example. So say that our tasks are machine translation and our goal is to translate between two languages, and one task is to translate from French to English, another task is to translate from Japanese to Spanish, or something like that. In this case, what does theta correspond to? Yeah? Possible translation [INAUDIBLE]?? So you're saying it's possible translation between any pair of languages? Any family of two languages. Any family of two languages. So basically you're saying that the shared structure is kind of a universal translator? Yeah, that's close, although not quite. Yeah. Universal language rules [INAUDIBLE].. Universal language rules. Yeah. 
So you're saying things like adverbs and verbs and stuff like that? Yeah. So this is, I guess both of these are, like, mostly right. It's basically going to be everything-- and I guess, I mean, sort of the first one, too. It basically tells us information about the family of all language pairs, although it shouldn't contain all of the information needed to translate between one pair of languages because-- well, ideally the things that are-- only the things that are shared and not the things that are needed to actually solve the task. Yeah. [INAUDIBLE] parameters since languages are so diverse? Yeah. So this will be, this will be, like, relatively small. It won't include things like vocabulary of a given model. But it will contain things like adverbs, like the kind of grammatical structure that you often see in languages. [INAUDIBLE] across languages? Right. So the specifics of, like, what, like, what order do you put-- do you put the adjective before the noun or after the noun-- those sorts of specifics won't be contained in theta. But the general notion of those things is somewhat shared. This is also sort of a hard question because it's something that's rather vague and hard to put into words. Yeah. So can you, like, translate this sort of set up into something like a network structure? Would this, would theta be like the early parts that are all shared between [INAUDIBLE] task? Or-- like, yeah, like where does that sort of divide between theta [INAUDIBLE]? Yeah. So the question was, like, in practice, what is-- like, does theta correspond to earlier layers or, like, what is this theta thing? So here we're really, like, just thinking conceptually. In practice, this can, it can correspond to a number of different things. I chose the notation theta and-- I chose the notation theta in part because one thing it can represent is the initialization of fine tuning. You can think of it as kind of, the initialization, as kind of prior knowledge or kind of source shared structure. But we'll see this much more concretely when we get to the lecture on Bayesian meta learning. Yeah. [INAUDIBLE] Oh, is a meta learning task and a task equivalent? Or? [INAUDIBLE] Oh. Yeah. So this, this graphical model will be the same for both multi task learning and meta learning. Yeah. Anytime you-- this is basically the graphical model for a set of tasks. It doesn't really cover-- and both multi task learning and meta learning consider a set of tasks. [INAUDIBLE] entropy is zero, that means [INAUDIBLE]?? Yeah. So this last part-- so what does entropy of zero mean? So entropy of zero means that you, your distribution over phi is basically just deterministic. It just has a single value. Whereas if we had a non-zero entropy, that would mean that we have some uncertainty around phi. Our distribution would be a wider distribution. And so when the entropy is 0, that means that we know exactly-- there's only one value that the distribution covers. There's only one value that the random variable can take on. And so in this case when the entropy of phi given theta is zero, that means that phi can only take on one value once we have the information in theta. And that means that we kind of have already-- that means that once we have this information, we can fully recover the parameters for all of our tasks. Yeah. How can we formulate the goal for meta learning in this specific context? Because meta learning wants to maximize transferability, but would that translate into something of, like, the relationships between the phi's? 
Yeah. So you can think of meta learning as basically trying to infer theta. So try to maximize for the ability to learn a new task and when we have as-- well, ideally if we-- like, it'd be nice if we could just get phi directly. But if we're trying to optimize for the ability to learn a new task, given a set of tasks, the best that we can do is recover theta and once we have theta, then that will allow us to learn new tasks from that distribution more quickly. Yeah. So. Yeah. So basically, yeah. You can think of meta learning is trying to infer, infer theta. Yeah. [INAUDIBLE] as well? Like, are they already part of the different-- Yeah. So you can think of meta learning as trying to learn inductive biases or trying to learn, like, structure. And there's separately choices of structure that we build in ourselves as humans and those can be represented-- I guess the way that I might represent that is by having some other variable here, which is like the human built in inductive bias, and that maybe also has some prior over-- like, maybe we have some guess at what phi is and that's going to-- this will kind of denote that sort of guess. And so, yeah. Something like that. Yeah. Can you clarify what you mean when you say that phi is independent on conditions are in the past? Because I can imagine a situation where we're training where there's, like, three main tasks. Two of the tasks are very similar to each other, but the third one is, like, completely different. So just, like, so there's no shared knowledge in theta that's universal across all tasks? But still, like-- so in this case, would phi 1 and phi 2 still be, like, dependent on each other? But, like-- yeah. So you're saying that in the case-- so this isn't a problem with two tasks. You're saying in the case of three tasks, there may be scenarios where there may be scenarios where the-- like, there's a lot of shared structure between two of the tasks, a lot of shared structure between the third and then kind of the least common denominator is pretty small in that case. And so then when you condition on that, the first two tasks are not independent. Yeah. So I think that when you-- yeah. I think that cases like that end up getting more complicated. I think it's cleanest to think about in the case with two tasks and when you have more than two tasks-- yeah. It's, yeah, a little bit more complicated to think about. Happy to discuss that more, like, in office hours. Cool. We have about five more minutes. Trying to think about-- so let's talk a little bit about the mechanistic view. And, yeah. We may not quite finish it, but we're going to start, we're going to talk a lot more about meta learning next lecture, so it's OK if we don't finish. So we covered this probabilistic view, which is meta learning as basically trying to recover this shared structure or recover this theta. For the rest of the lecture, we're going to talk a little bit about the mechanistic view. And Bayes will come back later in the Bayesian meta learning lecture. Cool. So from the standpoint of the mechanistic view, say your goal is to classify images. And we have, in this case, a really tiny data set of five examples. And our goal is to take this training data set and classify new examples. So kind of going back to our conceptual view, these are going to be kind of examples for a new task for X and Y. Now if we want to solve this task from scratch, it won't work very well. And so we want to have-- we want to leverage previous information. 
And specifically, if we have data from other tasks, then we should be able to use that to help us solve this few-shot classification problem. And in particular, what we can do is we can take data from other image classes and construct them into tasks, each with their own train set and test set. And so in particular, here might be one task where instead of classifying things like lions and dogs and bulls, our task is to classify, like, birds, pianos, mushrooms, and a different breed of dog. Or, in this case, we can construct a different task with its own training set and test set where our goal is to classify between landscapes, gymnasts, and carousels, for example. And so on and so forth. And what we can do is we can construct all of these different tasks, ideally lots of different tasks. These will be used-- these will be using a set of training image classes and we want to construct them in a way that allows us to learn how to quickly learn each of these tasks such that when we're given examples of new image classes, we can also learn a classifier for that task. So you can think of this top process as the meta training process where we are kind of learning how to learn these tasks and the bottom part as the meta test, meta testing part where we're trying to learn a new task. This is an example with image classification where all the tasks are these image classification problems. But you can replace image classification with really any other kind of machine learning task. So it doesn't have to be an image task. It could be a regression task. It could be a language generation task. It could be trying to learn a skill. Really any of the kind of tasks that we saw before, and multi task learning could also be used to replace the tasks here. Now then kind more formally, the goal of meta learning is to try to, given data from a set of tasks, try to solve a new test task more quickly, more proficiently, or more stably. Oftentimes, in a lot of the use cases we'll see in this course are to try to learn more quickly with fewer examples, but in principle, all these ideas could also be trying to optimize for other aspects of the learning process, like performance and the stability of the learning process. Now one really key assumption here is that we have a set of training tasks. We're going to assume that our test task is drawn from the same distribution as our training tasks. And so in particular, we'll have some broader distribution over tasks. It can be a little bit hard to think about what a task distribution is, but there is some broader distribution over those tasks and we need to assume that the training task and the test task are both drawn from that distribution such that when we're given enough samples of our training tasks, we can naturally expect to generalize and learn a new test task from that distribution. So this is analogous to the standard assumption in machine learning where we assume that our train set and our test set are drawn from the same data distribution. And like before, we probably want these tasks to share structure. If this task distribution is a completely random distribution, then we won't expect to be able to learn a new task because these tasks are drawing completely from random. Cool. And then the task can compare to a number of-- can actually correspond to a number of different things, basically correspond to the same kind of task that we saw in multi task learning. 
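To make the task construction described above a bit more concrete, here is a rough sketch of how you might assemble N-way, K-shot classification episodes from a pool of training classes; the function name, arguments, and sampling details are illustrative assumptions rather than the exact construction used in the course assignments.

```python
import random

def sample_episode(examples_by_class, num_way=5, num_shot=1, num_query=5):
    """Build one few-shot task: a small train (support) set and test (query) set.

    examples_by_class: dict mapping a class id to a list of examples.
    Labels are re-indexed to 0..num_way-1, so every episode is its own task.
    """
    classes = random.sample(sorted(examples_by_class), num_way)
    support, query = [], []
    for new_label, cls in enumerate(classes):
        picked = random.sample(examples_by_class[cls], num_shot + num_query)
        support += [(x, new_label) for x in picked[:num_shot]]
        query += [(x, new_label) for x in picked[num_shot:]]
    return support, query

# Meta-training repeatedly samples episodes from the training classes and
# optimizes the learner to do well on each query set given only the support
# set; meta-testing builds episodes the same way from held-out classes.
```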
And one example that we'll see in homework one is to recognize handwritten digits from different languages where we might want to be able to recognize new digits that we haven't seen before. I'll skip through this for the sake of time. And I think I'll skip the terminology, too. I think that we can cover the terminology in Wednesday's lecture. Cool. So to start to recap, in this lecture, we talked about transfer learning and meta learning. We only got to the very beginning of meta learning, but in transfer learning the goal was to solve a target task after having solved a source task and you can think of meta learning as a subset of the transfer learning problem where we have a set of source tasks and we want to transfer information from that to a new test task. And so really it's basically the same problem, except we're going to assume that we have not just one source task but multiple source tasks. Generally in both of these cases, it's fairly impractical to access data from the source tasks. And in all of these settings, we want to have some sort of shared structure. Then we'll skip some of these two slides. Yeah. And then to provide a recap beyond just the problem settings, today we talked about transferring via fine tuning by initializing and then optimizing on the target task and trying to be careful not to destroy the features that were initialized in the network by using a smaller learning rate or by training the last layer first. We talked about this graphical model which can give us some conceptual intuition for what it means for tasks to share structure by having this statistical dependence on this shared latent information. And then lastly, we talked about how meta learning is aiming to try to actually infer what this shared structure is and use it to learn tasks more quickly. Cool. So that covers the plan for today. In terms of the next lectures, the next five lectures will be on really core methods for meta learning and unsupervised pretraining, and these will also be covered in homeworks one, two, and three. And then, yeah, lastly, a couple of reminders. Homework Zero is due tonight, so make sure you get that in.
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_Domain_Adaptation_l_2022_I_Lecture_13.txt
So today we're going to be talking about domain adaptation. And we'll get into what that means, as well as a few different algorithms for doing that. And the goal for the end of the lecture is to understand different domain adaptation methods and when you might use one method versus another method. Now, what is domain adaptation? So we've covered a few different problem settings in this course, starting with multi-task learning, where our goal was to solve multiple tasks, then looking at transfer learning, where we wanted to solve one task after having previously solved some source task. And then we also looked at the meta-learning problem statement, where our goal was to solve a new task more quickly or more proficiently after solving a set of previous tasks. And today and on Wednesday, we're going to be talking about domain adaptation and domain generalization. And these are two problems that end up looking a lot like transfer learning and meta-learning but are somewhat of a special case of them in some sense. And so in particular, the goal of domain adaptation is we want to be able to perform well on some target domain after training on data from a source domain or possibly multiple source domains. Although in this lecture, we'll mostly consider one source domain. Now, this ends up looking almost exactly the same as transfer learning, except a really key assumption that we're going to make is that we're going to assume that we can access some data from the target domain during training. And so this is referred to as transductive learning, where we have access to basically some test data. And there's a few different forms of domain adaptation which basically correspond to different kinds of access to this target domain data. In the first case, we assume access to unlabeled data from the target domain or from the target distribution. In the second case, we would assume access to unlabeled and labeled target domain data. Typically, the labeled target domain data would be much smaller than the unlabeled data that we have access to. And in the last case, we would assume access to a small amount of labeled target domain data. Now, we're going to focus the most on unsupervised domain adaptation. Problems like supervised domain adaptation, you could actually do pretty well on just by fine-tuning and with the transfer learning techniques that we talked about before. Whereas unsupervised domain adaptation, fine-tuning isn't applicable to them because you only have unlabeled data from your target domain. Now, there's a couple of different assumptions that we're going to make that are fairly common. And this is where we're going to see some differences from the transfer learning setup. The first assumption that we're going to make is that the source and target domain differ only in p of x or differ only in the domain of the function. So this is one way that you can think about what a domain means. And as a result of this, this means that the conditional distribution of y given x is going to be the same for the source domain and the target domain. And another assumption that we'll consider that's very related to this first assumption is that we're going to assume that there exists a single hypothesis or a single function that can achieve low error on both the source domain and the target domain. 
And so in many ways, you can think of a domain as a special case of a task: when we introduced this notion of a task, we thought of it as basically corresponding to a data generating distribution over x and over y given x, and separately a loss function for that task. And a domain is going to be something where only p of x differs between the different domains, and y given x and the loss function are going to be the same across the domains. So essentially, a domain is a special case of a task. And yeah, basically, it's going to correspond to different distributions over x. And by making this assumption that it only differs in x and by making the assumption that we could access some unlabeled target domain data during training, we're going to be able to do better than approaches that are designed for multi-task learning or transfer learning because we can assume that p of y given x is staying fixed. Now let's look at a few examples, and this might make this notion of domain adaptation a little bit more concrete. So one example is maybe our goal is to detect or classify tumors from slides of tissue cells. And we trained a classifier to be able to detect tumors from these images from one hospital. And now we also want to deploy that same model in a different hospital. But the other hospital might have different techniques for collecting these images, or maybe they have different demographics of patients. And as a result, the images might look a little bit different between the source hospital and the target hospital. Now, doctors can still look at these images and figure out whether there's a tumor and so forth. And so there still exists a single function that can predict whether or not there's a tumor from the image. But there's a distribution shift between these two different domains. And in some ways, you can sort of think of these as different tasks, but you're basically still doing the same task. It's only p of x that's changed. We'll refer to these as different domains. As another example, maybe we want to be able to classify how land is being used. And you trained a really good model that works well in North America, and you want to be able to deploy that model on another continent such as South America. The appearances of buildings, the plants in that region, weather conditions, and pollution may be different between the source region and the target region. And so again, this is really the same task, but you have a different distribution over the images. And so p of x is changing. And then there's one nonimage example. Maybe we want to classify or generate text. And we trained a model on Wikipedia, and then we want to deploy our model on papers on arXiv or on PubMed. Because of the differing vocabulary use and different sentence structure, a model trained on the source domain may not translate well or transfer well to the target domain. And our goal is to be able to basically use unlabeled data from the target domain in order to improve performance on that target domain. And so here are basically three examples, but domains could also be different people or different users. It could correspond to different points in time or different institutions like data from different schools, different companies, or different universities. Yeah?
So there exists some function, f of x, that achieves a low error on both the source data set and the target data set. Yeah? [INAUDIBLE] Yeah. So the key difference-- or one key difference between domain adaptation and a typical transfer learning problem is that we have access to target domain data during training. And if we kind of revisit these assumptions, this means that-- for example, if we want to translate our text classifier to arXiv, we might have access-- we have access to unlabeled data on arXiv already. And so this is going to assume that we don't have to kind of directly deploy our model immediately. We can take that unlabeled data from arXiv, incorporate that with the training data from Wikipedia that might be labeled, and kind of train a model that we think should be better at arXiv in comparison to if we had only trained on Wikipedia and deployed it directly. And so there are going to be some scenarios where it's this assumption is unrealistic, but there's also going to be a number of scenarios where you do already have access to your unlabeled data, such as on arXiv. Or maybe when you are trying to translate it to a new hospital, you do already have data from that hospital, and you don't need to deploy it immediately without any additional training. Yeah? Assume we can access and label the data on the target [INAUDIBLE] on the target domain? Yeah. So if you have access to a lot of labelled data from the target domain, then you probably don't need things like domain adaptation. You don't necessarily need to use any of the source data because you can just train directly on your target data. Whereas if you only have a small amount of label data or you have a small amount of label data, a lot of unlabeled data, or if you only have unlabeled data, that's where things like domain adaptation can be very helpful. And then again, if we kind of revisit this hypothesis of their existing-- or this assumption of there existing a single hypothesis, I think that this sort of assumption is very realistic in all of these scenarios, where you can probably kind of just look at an image and figure out the land use if you're kind of an expert on what different buildings look like and likewise be able to predict certain things from text regardless of where it's coming from. OK. Any questions on the domain adaptation problem statement before we move on? Yeah? [INAUDIBLE] And that function [INAUDIBLE] Does it have any special meaning? Because you can already combine multiple xs into one [INAUDIBLE]. Yeah. So maybe one thing that you're pointing out here is that you can trivially say that there exists a single hypothesis, given the identity of the domain. So yeah. So if you can identify the domain x from-- or the domain d from x and assuming that you can get low error on each of the individual domains, then that assumption trivially holds. There are often scenarios where it is difficult to identify the domain from the input. Certainly, images are something where it may not be that hard. Although if you take an image from anywhere on the globe, I'm not sure if a classifier would be able to tell exactly what continent it's from. So there's certainly scenarios where you can't predict d from x. And the stronger assumption beyond this one is that basically the distribution over y given x for your source data is equal to the distribution over y given x for your target data. This is a stronger assumption. And most of the approaches that we'll be looking at today are also going to make this assumption. 
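Stated compactly in notation (a sketch, with $p_S$ and $p_T$ denoting the source and target distributions and $\ell$ the loss), the two assumptions above are:

```latex
% Covariate shift: the domains differ only in the input distribution.
p_S(x) \neq p_T(x), \qquad p_S(y \mid x) = p_T(y \mid x).

% Shared hypothesis: a single predictor can do well on both domains.
\exists\, f \;\text{such that}\;
\mathbb{E}_{(x,y)\sim p_S}\big[\ell(f(x), y)\big] \;\text{and}\;
\mathbb{E}_{(x,y)\sim p_T}\big[\ell(f(x), y)\big] \;\text{are both small.}
```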
Cool. So we're going to look at three different algorithms for unsupervised domain adaptation. And I guess maybe before I touch on the algorithms, I guess one other thing that may be worth touching on is-- on Wednesday, we're going to also consider a variant of this problem where we assume that we have data from multiple different domains during training. And our goal is to generalize zero-shot to a new domain. That's going to-- and basically, a lot of the algorithms that we're going to build up today are also going to be applicable in that domain generalization problem setting as well. Cool. So now let's first start by considering a toy domain adaptation problem. And in particular, our source domain will be this blue distribution, and our target domain will be the green distribution. And it's just going to be a binary classification problem. This could correspond to something like sample selection bias, where when you collected a data set, it wasn't actually representative of the true population. And if we have a binary classification problem, maybe it looks something like this, where these are examples drawn from our source distribution. And the bulk of the samples are kind of in the region where we have high probability under p of s. And unfortunately, one thing that might come up here is if you have a-- if a lot of your data is coming from the high probability regions of that space, then the classifier trained on that source data set may pay very little attention to examples that have low probability under the source data set and high probability under the target data set. So these data points right here. And if it pays very little attention to those two data points, then it may just learn a very simple classifier that just learns like basically a single linear decision boundary that's kind of able to accurately classify almost all of the data points. And unfortunately, if it does this, then that classifier, if it was actually evaluated on data from the target distribution, would actually perform quite poorly when evaluated on that new target distribution. And so there's a question of, well, OK, if we have labeled data from the source data set and unlabeled data from the target data set, is there anything that we can do to learn a classifier that does well on the green target distribution? Does anyone have any thoughts on what we might do in this particular example? Yeah? [INAUDIBLE] three examples from the source distribution [INAUDIBLE] the targeted distribution examples? So you're suggesting that we shift the source examples in some way? So how exactly would you do that? So you've got like your target examples which are unlabeled but no [INAUDIBLE]. And you've also got your source examples. Well, I guess [INAUDIBLE] Estimate the mean of the-- so we know where the target examples are. We know that they're kind of on the left-hand side. But we don't know what their labels are. And we have access to labeled source examples as well. Yeah? [INAUDIBLE] what you could do is shift the weights of the different examples. So you can do importance sampling or importance free weighting. Yeah. So maybe this is-- maybe this is what you're describing. I'm not sure. You can basically kind of change the weight of the examples such that you kind of upweight the examples that have high probability under the target distribution, especially if they have low probability under the source distribution. 
And so kind of the intuition here is if we upweight these data points and downweight some of the other data points, then we should learn a decision boundary that's more accurate for the target distribution. Cool. So now, this intuitively, I think, makes sense kind of visually. You can see that if you upweight those examples, you'll probably learn a classifier that's more accurate on the target distribution. But there's also a question of why this makes sense mathematically. And so our goal is to kind of minimize the loss on the target distribution. So our goal is to minimize the loss of our function f of x on the target distribution with respect to our model parameters. Now, of course, we can't sample from this distribution directly. But we can sample from our source distribution. And so if we just did standard empirical risk minimization, that would correspond to minimizing the loss function with respect to our source distribution. Now, we know that we can sample from the source distribution, but our goal is to minimize this term right here. And what we can do is expand out this expectation as an integral over the target distribution of our loss function, and then, because we want to be able to sample from p of s, we can do a somewhat similar trick to what we did in the variational inference lecture and basically multiply this by p source of x, y divided by p source of x, y. This is just equal to 1. And so we're just multiplying everything by 1. And then we can basically fold this into the expectation, so that we're taking the expectation under the source distribution rather than the target distribution. And so if we do this, we see that this is equal to the expectation under the source distribution, which is what we have access to, of p target of x, y divided by p source of x, y times our loss function. And so this is pretty nice because this means that now we can sample from our source distribution and evaluate our loss on those data points. But we're basically going to be weighting our loss by this ratio right here. And so the data points that have a high target likelihood and a low source likelihood are going to be upweighted. And if they have a high likelihood under the source and a low likelihood under the target, then they're going to be downweighted. There's one more step here, which is that if we assume this equality, the assumption that y given x is the same for the two distributions, then we can now replace this with just the ratio of p of x. And the reason why we can do that is we know that p of x, y equals p of x times p of y given x. And if this is the same, then this is going to cancel out between the two, and we'll be left with p target of x divided by p source of x. Cool. And so this is just written out on the slides. Our goal is to basically be able to minimize the risk on the target distribution. We can write out that equation as an integral and then multiply the inside by 1, which is the source likelihood divided by the source likelihood, and then evaluate that. And then that ends up being the expectation under the source data set of this kind of importance weight here. Yeah. What's the intuition behind this very strong assumption? Oh, the intuition behind this. I guess I don't think that there's any particular-- I guess there are some problems where this is going to hold, and there are some problems where this is not going to hold.
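Written out, the derivation sketched above is roughly the following, with the covariate-shift assumption $p_S(y \mid x) = p_T(y \mid x)$ used in the last step:

```latex
\mathbb{E}_{(x,y)\sim p_T}\big[\ell(f_\theta(x), y)\big]
  = \int p_T(x,y)\,\ell(f_\theta(x), y)\,dx\,dy
  = \int p_S(x,y)\,\frac{p_T(x,y)}{p_S(x,y)}\,\ell(f_\theta(x), y)\,dx\,dy

  = \mathbb{E}_{(x,y)\sim p_S}\!\left[\frac{p_T(x,y)}{p_S(x,y)}\,\ell(f_\theta(x), y)\right]
  = \mathbb{E}_{(x,y)\sim p_S}\!\left[\frac{p_T(x)}{p_S(x)}\,\ell(f_\theta(x), y)\right].
```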
[INAUDIBLE] would first [INAUDIBLE] probability be the same step? So it really depends on the problem. If for example, you are-- I don't know-- polling people about their political preferences and you kind of don't sample very uniformly-- you kind of get a biased sample-- then typically, if you have good features of a person like they're-- regardless of the person that you sampled, this should stay fixed. But it may be that you just kind of sampled. You didn't kind of uniformly sample from the population you wanted to sample from. And so I guess in the problem that we're looking at here, this is really-- we'll talk about the kind of the limitations of this approach in a minute, but this is really focusing on sort of sample selection bias. But there's also lots of other examples where this will also hold like kind of in the medical imaging scenario, where you should be able to recognize a tumor from the image. And you won't ever have a scenario where an image from one hospital has a tumor and that same exact image would be generated but doesn't have a tumor in another hospital. And so there are a lot of scenarios where this can hold if you're kind of collecting data in certain scenarios. And then there are also multi-task problems where it won't hold as well. And you want to-- these sorts of approaches are only applicable or primarily applicable when that does hold. Cool. So this equation agrees with the intuition that we saw on the previous slide, where we're going to be upweighting examples with a high likelihood under the target distribution and a low likelihood under the source distribution. Now a key question that comes up is, how do we actually compute this weight right here? How do we compute the importance weight? One thing that we could do is just estimate these likelihoods, like fit a generative model to estimate the likelihood of example under the target distribution and separately fit a generative model to our source distribution. But unfortunately, it can be fairly difficult to estimate these likelihoods accurately and in a way that is calibrated and consistent with-- kind of consistent across the target and source distribution. And so we'd like to be able to estimate these weights without training a density model on our target distribution and our source distribution. And so it turns out there's actually a way that you can do that. And really the key reason why we can get away with doing that is our goal isn't just to estimate the target, the kind of the density. But our goal is specifically to estimate this ratio. And the reason why we can-- if we kind of-- there's a way that we can basically manipulate this ratio to get something that looks more like a discriminative, something that we can estimate with discriminative models rather than with generative models. And so specifically, first, we can write out that the likelihood of x under the target distribution, this is equal to the likelihood of x given that the domain is equal to the target domain. And writing it out this way is going to just make things a little bit more clear. And we can use Bayes' rule to rewrite this as the probability that the domain is equal to the target given x times p of x divided by the probability that the domain is the target domain. And we can likewise kind of write out the same exact thing if the domain is the source domain. 
And now our goal is we want to be able to estimate the probability that an example is p of x given the domain is target divided by the probability of x given that the domain is equal to the source domain. This is our importance weight. And so if we want to do this, we can basically just take this equation for the target and divide it by this equation for the source. And if we do that-- first, we'll divide the first term. So this is going to be p of d equals target given x divided by p of d equals source given x. Then we'll divide the next term, which would be p of x divided by p of x. And then we'll divide the last term, which will give us p of domain equals source and p of domain equals target. And this cancels out, of course. This is just a constant. And so multiplying our loss function by a constant doesn't really change anything. It doesn't depend on theta in any way. And this term right here, we've now kind of basically flipped x given target to now be target given x and source given x. And this is something that we can estimate with the discriminative model. And in particular, we can just train a classifier to be able to predict if an example came from the target domain or the source domain. So we'll basically train a classifier to take as input x. And then it could be whatever sort of function you want. And it's going to tell you whether or not it thinks it came from the target domain or the source domain. And this will basically give you an estimate of p of target given x, which is going to be equal to 1 minus p of source given x. And so then you can use this classifier to estimate the importance weight right here. And so specifically, what this looks like is we first used Bayes' rule to kind of write out what p of x given the domain is equal to. And then once we divide our importance weight out, we get this-- we get the product of a constant and this term right here which we can estimate with a binary classifier. Oh, so then specifically if you kind of walk through the algorithm and what it looks like, the first thing that we'll do is we'll train this binary classifier. And its goal will be to estimate if an example comes from the target distribution or the source distribution. And this is going to operate only on kind of the input. It's not going to look at the label. Then we will resample or reweight the data according to the kind of importance weight that we derive right here, which is just-- you can think of it as kind of-- if your classifier is estimating the probability of a source that the x came from the source distribution, then it'll just be 1 minus that classifier value divided by the classifier probability. And then once we either reweight or resample our data, then we'll optimize our loss function on the reweighted or resampled data. Yeah? It makes sense when there's a well-defined source and target distribution. But maybe North and South America don't turn out to be the best two categories. Maybe there's some-- maybe it's nice to think about it in terms of just weighted variables that apply. And I'm wondering if you're-- are you going to get to generalizing to that scenario? Yeah. So the question is-- we've been talking about source and target distributions, and it may be that we don't have these kind of too clear-cut things, like two continents-- it may be that there are some countries in North and South America that are actually more like each other than-- because they're actually very close to one another than they are in different domains. 
And so perhaps we could generalize this by thinking about things like continuously, variables and so forth. So it's a good question. We're not going to generalize it in this lecture. One thing that I'll mention, though, is that one of the reasons why we are defining this in a very clear-cut way is that we're really defining our source distribution to be our training data and our target distribution to be the distribution in our test data. And so it could be that your source distribution looks something like this, for example, and your target distribution-- maybe your target distribution looks something like this, for example. And it seems a little bit weird to call these two different domains because they actually have a lot of overlap right here. But if you think instead of this as basically kind of our training distribution and this as our test distribution-- that's why we're going to basically define it as so clear-cut. And the algorithms that we're talking about here, they can take into account that there is this overlap. And they'll still work well in that scenario. Cool. I'm also thinking that they're-- I mean, there may also be algorithms that you can derive that to try to kind of take into account how close together two data points are. We've actually been thinking about developing algorithms like that in some of our research, but there aren't really that many mainstream algorithms, I think, that take that into account. Yeah? I want to see more of the work if we could model the-- if we could go [INAUDIBLE] and model the-- [INAUDIBLE] data set [INAUDIBLE] So the question is, how would this look like if we could estimate the data distribution? [INAUDIBLE] For target, or for source? For both if we could approximate. So typically, I think that if you can estimate-- if you can approximate the densities, then you can just use that directly rather than training a classifier. I don't know of any way to combine the two. I mean, you could combine the two. If you think that you have a rough estimate of one and a rough estimate of the other, then you could try to average them or combine them in some other way. But I don't know of-- typically, you do one or the other. Which one works better? Typically, this one works. Using a classifier typically works better because getting good likelihood estimates is-- especially for high dimensional data is very difficult. Yeah? [INAUDIBLE] trying to minimize the loss on the target distribution. But in general, you want to minimize over just p of x, not p of x given [INAUDIBLE] on all the samples. But I'm just thinking that even this is OK because you are assuming that p of y given x is the same for both [INAUDIBLE] So isn't this a bit of like a problem because in general, you're only targeting the target distribution right, in this loss function. And you don't care about the performance on [INAUDIBLE] at all in this formulation. Yeah. So in this formulation, we're really optimizing for how well we do on the target data distribution. And this is operating under the assumption that the unlabeled data that we have from the target distribution is representative of our test data. And if you optimize for this and then evaluate it on the source data distribution and you kind of broke that assumption, then this wouldn't work as well as if you just trained on the source distribution directly. 
And so this is really assuming that you are getting an accurate estimate, basically that you really have unlabeled data from your target distribution rather than some other distribution. And so you can think of this as sort of like optimizing for a particular test distribution and really trying to specialize the model for that test distribution rather than trying to learn a general purpose model that will work well for any domain. And some of the things that we'll cover on Wednesday will be actually optimizing for doing well on basically all the domains that you've seen so far, not just the test domain. Cool. Now, one thing I'm actually somewhat surprised that hasn't come up yet is that this is making a pretty important assumption when we optimize for this and generally when we take this approach, which is that if you optimize for this quantity and there's something that has like zero likelihood under your source distribution, that's going to be a little bit of a problem. You won't be able to actually optimize this effectively. And essentially, the assumption that this is making is that your source distribution really needs to cover the target distribution because if it doesn't cover the target distribution, especially the parts with high likelihood, then you won't be able to upweight the parts with high likelihood. And so more formally, if the likelihood under the target distribution is nonzero for an input, then you need the likelihood under the source distribution to also be nonzero. You need to have kind of data from that region. And if you can satisfy this assumption, then there's actually some kind of theoretical guarantees that you can show for this kind of method. And so there are going to be some scenarios where this might hold and other scenarios where this may not hold. So if you, for example, train on Wikipedia and your target distribution is arXiv or PubMed, that might be a scenario where you may have enough coverage in your source data set because Wikipedia does actually have some pretty technical content in it. But on the other hand, if you have data only from one source hospital and you're trying to generalize to be able to make predictions for a new hospital with pretty different-looking images, in scenarios like that, the source probably wouldn't cover the target distribution. And as a result, approaches like this wouldn't perform well. And so this gets into the next two classes of algorithms, which actually can handle scenarios where the source and target distribution don't overlap. Cool. So let's look at another toy example. And in particular, again, we're going to be trying to learn a binary classifier. And our source distribution will look something like this. And let's say our target distribution looks something like this. And we're again going to operate in this setting where we only have unlabeled data from our target distribution. But some of the methods here can actually also be applied to the setting where you have labeled data from the target distribution. Now, unfortunately, if you just train a classifier on your source distribution, this wouldn't give you a very accurate classifier on the target domain. And as an example problem, you could imagine that maybe your source domain corresponds to the MNIST data set, where you want to be able to classify different digits. And your target domain corresponds to a different digit classification task. In particular, these are images taken from Street View of different house numbers.
And you will basically want to take the classifier that you trained on MNIST maybe through a PyTorch tutorial and also apply it to images of house numbers. In order to do this-- this next class of approach is-- because there isn't kind of a direct overlap in the support, we can't just upweight data points in MNIST that probably wouldn't work very well on this problem. But what we could try to do is we could try to align the features that it has learned from the two domains in order to encourage it to have a similar representation of the two distributions. And in particular, what this might look like is something where we try to encourage the feature spaces to overlap as much as possible. And then if they do overlap as much as possible and they have the same distribution, then if we train a classifier only on the source data, then that classifier should perform much better on the target distribution. Then there's the question of, how do we kind of go about trying to align these different feature spaces? And I should also mention that something like this should work the best when there is kind of a clear-- when there is some sort of alignment between the two distributions. If you're two distributions are kind of both look like circles and you have just some kind of two-- some data points that look like this, if you try to align these two circles, you may not necessarily get a good classifier by trying to align your features. Whereas if your distribution looks something like this and your target distribution looks something like this, there's more of a clear alignment between these two distributions. And in that case, if you try to kind of align these two shapes, you're more likely to have something work out. And so you could imagine that something like MNIST digits maybe has kind of the zeros are over here. The ones are over here. Maybe the nines are over here. And you could imagine that with something that has a little bit more of an interesting manifold to the data distribution. It might align more readily than if you just have features that are drawn from like a Gaussian space. Cool. So in terms of aligning the features, we're going to assume that we have some encoder that encodes our inputs into some feature space. These encoders could be just the same function. Or they could be separate functions, one for the source and one for the target. And really, our goal is to try to match the features that's coming out of these two encoders. But we don't just want to match them in terms of having them all be the same. We want to match them at the population level. We want basically these to be mapping rather than trying to just collapse these features into the same spot. And so we can't just apply like a kind of an L1 or L2 loss on the individual features. Instead, we need to try to figure out how to match the distributions. And essentially, what we want is we want samples from this distribution to be indistinguishable from samples from the other distribution of features. And if those samples are indistinguishable, then the distribution should be the same. Yeah? [INAUDIBLE] require that both p(s) and p(t) have the same kind of distribution, like they're both [INAUDIBLE] distributed. I suppose if one were for example gaussian but the other one in target domain followed a different kind of distribution that would be kind of hard to make the alignment work? So the question was, does it mean that p of s and p of t have to be the same distribution, such as Gaussian distributions? 
So they don't have to be identical distributions. And I guess the-- but you do need them to be-- you do want to have them have a similar shape. And I guess, in the MNIST example, p of s corresponds to the distribution over these MNIST digits. And p target corresponds to the distribution of Street View house numbers. The one scenario in which-- one example where that kind of agrees with your intuition here is that if, for example, in MNIST, you had like 90% of your data set was zeros and in Street View house numbers, 90% of your data set was like fives or something like that. If you have a mismatch in the label distribution like that, then something like this probably wouldn't work well because it wouldn't be able to kind of find an alignment between the two distributions. Whereas if the distribution over digits, for example, is much more even, then you should be able to align it much more easily. And so roughly, you could sort of think of this as the shape being-- the shape of the distribution being somewhat similar. But it's OK if the distribution is rotated or if it has a different mean or something like that. Yeah, and I'm not sure if there's actually a way to formally describe this constraint or this assumption. But it is something that is pretty important for these approaches to work. Cool. And so the key idea in order to basically try to encourage these samples to be indistinguishable from each other is to basically-- we're again going to train a classifier that tries to predict whether or not an example is from a source domain or a target domain. But this time, the classifier is going to operate on the features rather than on the inputs. And our goal is going to be to try to learn features that fool that classifier such that-- basically, if the classifier cannot accurately predict which domain the features came from, that means that the two distributions over the encoded samples are identical. Yeah? [INAUDIBLE] possible to rotate or align the two circles, I can't understand why it's not possible to find a rotation. So the question was, in the circle case, why is it not possible to kind of kind of rotate and align the two distributions? In this scenario, I-- if you only have unlabeled data from your target domain, it's ambiguous how you should rotate it. And so that was the main point that I was trying to make there. Whereas if you have a distribution that has more features to it, it will be more clear how you're supposed to rotate and align the two. And so in this example, what it would probably do is it would just try to find the easiest or simplest way to align the two. That would be, for example, without any rotation. And if your positive examples look like this, then that might end up leading to a poor classifier. Yeah? [INAUDIBLE] population that will be [INAUDIBLE] What does population level mean? Yeah, it does support [INAUDIBLE] Right. So by support, I'm referring to the assumption that we made on the previous-- in the previous case, when we're doing the importance weights, we assume that the support of the distribution-- of the source distribution covered the target distribution. And so support means basically the region of the probability distribution for which the density is nonzero. And then population level, I basically mean-- instead of looking at individual trying to match individual examples, we're trying to match kind of the entire population of examples or population of features, in this case. Yeah? 
What ensures that the encoder for the target domain does something useful? It seems like you could satisfy this objective by just generating random samples that have nothing to do with the numbers advantage. Yeah. So if your goal was only to fool the domain classifier, what it could do is it could just output random features or output all zero features or something like that. And there's nothing encouraging it to actually give you good features. And so what we're going to do is we're not only going to have this loss, but we're also going to try to be able to classify the source examples using the features. [INAUDIBLE] You could do it right for the source examples and wrong for the target examples, right? So even if you try to classify on the features, yeah, it could learn a classifier that works well for the source but not necessarily for the target. I-- [INAUDIBLE] target encoder could be a bad one. Right. So there is a question of whether or not to actually learn-- have these encoders separate. And the target encoder could learn something that is pretty different from the source encoder. And so there is a little bit of a trade-off. You could learn a single encoder. You could also-- which would kind of prevent this issue. And if you learn a target encoder, you're basically hoping that the solution that is kind of the simplest is the one that maps them into a consistent space. And if you have a similar architecture and randomly initialize them in a similar way, my expectation is it would give you something reasonable. And there's kind of empirical evidence that supports that as well. But you may also-- it may also be that you want to actually share some of the weights between these two encoders. Cool. So concretely, what does this look like? So we're going to be training a feature encoder that takes as input, our example, and gives us features. We're going to be also training a classifier to predict labels from these features and backpropagating the kind of cross-entropy loss with respect to our label predictions into both the feature encoder and the label classifier. This corresponds to the standard supervised learning. What's new is we're also going to be training a classifier that estimates the domain that the input came from from the features of that input. And what we can do is then try to train the features such that we cannot predict the domain accurately. And so what this means is that the domain classifier, its goal will be to-- its goal will be to kind of maximize the accuracy of this classifier. Whereas the goal of the features are to minimize the accuracy of this classifier. And so to do that, what we can do is we can just take the gradient coming from the loss for the classifier and negate it and then pass the negative gradients from the domain classifier into the feature encoder. This is called a kind of a gradient reversal layer, where you're basically just going to be reversing the gradient before backpropagating into the feature encoder. And so here we're going to be minimizing label prediction error and trying to maximize domain confusion. And if we write out this algorithm kind of more completely, what this looks like is first we're going to update the domain classifier C or C phi. And this is going to be with respect to basically how accurate is-- how accurate that classifier is, which is just denoted by Lc. And then we're going to update our features or the encoder, f of x, as well as the label classifier, g. 
And we're updating these with respect to basically how well we're predicting y minus the loss function of the classifier. And we have a kind of a coefficient here to control how much you weight the classifier loss versus the label prediction loss. Yeah? How exactly do you send a negative example in this case for the domain classifier since-- how do you get negative examples for the-- Domain classifier-- so we're assuming like before that we have access to unlabeled data from the target domain. [INAUDIBLE] So the label classifier, basically, Ly, is only evaluated on data from our source data set. And Lc is evaluated using both the source data set and the target data set. Yeah? [INAUDIBLE] space issue. So is it possible to get around like this resolution thing by projecting them into common corners and then in that space, like the shapes would align? So could we think about it that way, a space where the shapes should align? Is there a common space where the shape should align? I'm not sure how you would get the projection to project it into that space. [INAUDIBLE] I don't remember [INAUDIBLE] be able to do that like that, I think. Yeah, sounds good. And one thing that I'll mention here is it's important to do this iteratively. So if you first kind of just train your domain classifier and then you just kind of fix it and then train your features to try to fool it, at some point, it may-- basically, we'll just change our features such that they're out of distribution for your domain classifier and kind of fool the domain classifier without actually having features that are indistinguishable. And so in practice, you need to kind of iterate this process between updating your domain classifier and updating your features in your label classifier. And this will ensure that your domain classifier is always kind of up-to-date on the latest version of the features. Cool. So this is written out here. We've randomly initialized our encoder, label classifier, and domain classifier. We then update our domain classifier, which basically corresponds to binary classification between the source examples and the target examples. And so this is just writing out the cross-entropy loss. And then we will update the parameters of our features and label classifier with respect to how well it's predicting the labels, as well as this auxiliary term that corresponds to domain confusion. Yeah? Is there any problem that if we feed the target or label it out into gradient descent on LC? You're saying is there any problem with passing in target data into the domain classifier? [INAUDIBLE] So the domain classifier will be trained on both source data and target data. And so you can see that right here in step two, where it's trained on both source data and target data. And so the target examples-- they'll not be out of distribution for the domain classifier. The domain classifier will be trained on them. And so that will give an accurate estimate for whether or not those examples came from the source domain or the target domain. And then you'll kind of reverse the gradient to encourage the features to produce features that the domain classifier can't accurately predict. But the first step is polling the features of source data close to target data but why don't we also poll the features on target data close to the source data? So this loss will be evaluated on both the source and target. And so it will be like encouraging it to bring the two together. And so it's not going to bring one to another. 
It will be bringing kind of the two together. It would basically just try to find what features will make basically the domain classifier's job hard. The first term in the third step, this just corresponds to how accurately we're able to predict the labels. And this is only done on the source data set because we only have labels for the source data set. Cool. Now in terms of a couple of design choices, I mentioned that you can learn a separate source and target encoder. This can give the model a little bit more flexibility because if the source and target images look very different, you might need different filters or different weights to process them. But it can also possibly give the model too much flexibility, which could lead to some of the issues that we discussed. There's also a couple of different forms of the loss function for trying to confuse the domain classifier. And this is often referred to as domain-adversarial training. . One of them is what we talked about with this gradient reversal layer. And this is the same as how generative adversarial networks are implemented. But another option is instead of trying to maximize the loss of the domain classifier, you could also try to optimize for basically for the classifier to be outputting a 50/50 guess between the two domains. And this will be something-- this will be kind of optimizing for kind of-- essentially, for it to be-- to have kind of basically no idea what the correct domain is. And kind of the intuition behind the second option is that if you maximize the domain classifier loss, then that corresponds to predicting source confidently when it's actually target and predicting target confidently when it's the source. And if you can actually-- if you can do that and get the worst loss of the two, that actually would give you features that can still distinguish between source and target. And so the second option will somewhat prevent that, although in practice with option one, if you're updating your domain classifier enough, you should be able to avoid that issue as well. Cool. And this is a question of how well does this work. So we'll look at two different examples. This is a toy example where the source domain data is shown as the red and green data points. And the target domain is the black data points. And if you train a neural network in a standard way only on the source data, you get a decision boundary in the black line. And we see it this different-- at these few different points. Like, here, for example, it's incorrectly classifying these data points as green. It's likewise incorrectly classifying these data points as red. In contrast, if you use the approach that we just talked about, this will shift the decision boundary to something that looks like this, where it is actually accurately classifying these data points and also has kind of shifted the decision boundary here as well so that you're actually much more accurately making predictions on the target domain data. They also evaluated on the digit examples that we looked at before. And so they looked at MNIST to this kind of synthetic, colored MNIST version. These synthetic numbers to Street View house numbers, Street View house numbers to MNIST and these signs to these German traffic signs. And if you compare training only on the source data set to this approach, you see that you can get a really substantial improvement in performance often, kind of as much as almost 20% in some cases. 
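Since the two-step update above is easy to get turned around on, here is a rough PyTorch-style sketch of a single round of domain-adversarial training. It is illustrative only: the module names f, g, and c, the optimizers, and the weight lam are all made up, and the explicit minus sign plays the role of the gradient reversal layer.

```python
# Rough sketch of one round of domain-adversarial training (illustrative only).
# f: feature encoder, g: label classifier, c: domain classifier (all torch.nn.Modules);
# x_s, y_s is a labeled source batch, x_t an unlabeled target batch.
import torch
import torch.nn.functional as F

def dann_step(f, g, c, x_s, y_s, x_t, opt_c, opt_fg, lam=0.1):
    d_true = torch.cat([torch.zeros(len(x_s)), torch.ones(len(x_t))])

    # Step 1: update the domain classifier to tell source features from target features.
    # The features are detached so that only c is updated in this step.
    z = torch.cat([f(x_s), f(x_t)]).detach()
    loss_c = F.binary_cross_entropy_with_logits(c(z).squeeze(-1), d_true)
    opt_c.zero_grad()
    loss_c.backward()
    opt_c.step()

    # Step 2: update the encoder and label classifier. Minimize the label loss on the
    # source batch while maximizing the domain classifier's loss (the minus sign here
    # is equivalent to backpropagating through a gradient reversal layer).
    z_s, z_t = f(x_s), f(x_t)
    loss_y = F.cross_entropy(g(z_s), y_s)
    loss_d = F.binary_cross_entropy_with_logits(c(torch.cat([z_s, z_t])).squeeze(-1), d_true)
    opt_fg.zero_grad()
    (loss_y - lam * loss_d).backward()
    opt_fg.step()
    return loss_y.item(), loss_c.item()
```

As discussed above, you have to keep alternating these two steps so the domain classifier stays up to date with the current features.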
Coming back to the results: it doesn't do as well as if you were to train on labeled data from the target distribution, but it is able to bridge the gap fairly significantly in a number of different cases. Cool. So to summarize this part, these sorts of methods are fairly simple to implement, and they can work pretty well, like we saw on the previous slide. They don't require the source data to cover the target distribution, which is pretty nice. It does involve an adversarial optimization, which sometimes can be a little bit tricky. And you really need to tune this weight right here to trade off the indistinguishability of the features against your accuracy on the source domain. It also, like we discussed, requires some degree of clear alignment in the distributions of the data. And if those distributions are very different, it may be difficult for the algorithm to figure out how the two domains align in practice. Cool. And then the last class of methods that we'll look at is also going to be trying to find an alignment. But instead of trying to learn a feature space that is perfectly aligned, it's instead going to try to learn a mapping from one domain to the other domain. And really, the key idea here is that if we could translate examples from one domain to another domain, then we would be able to do pretty well on the target data set. So if you could translate source examples to target examples, then you could basically translate your labeled source data set into your target domain, train a predictor on the translated data set, and then deploy your predictor on the target examples. Likewise, if you were able to translate from target to source, then what you could do is train a predictor on your source data set, translate your test example, your target example, to the source domain with your translator, and then evaluate the predictor on the translated examples. And one key difference between these approaches and the previous approaches is that we're actually going to be operating in the original input space x rather than operating on features. Then, of course, the question that comes up is, how do we actually go about learning to translate between these different domains? So the first thing that you could do is train your model F, translating from source to target, to generate images from your target distribution, and likewise train a function G to be able to generate images from your source distribution. And you can do this with a generative adversarial network, where you'll be training a classifier that is able to predict whether something came from the source domain or the target domain. And then your goal is for your generative model to be able to fool that classifier into thinking that the data that it's generating came from the domain that you're trying to generate from. And so kind of what this looks like is, if you have some kind of source distribution that looks like this, and you have another target distribution that looks like this, essentially what you're going to be trying to do is train something that takes an example from your source distribution and translates it into your target distribution. This is going to be a function F. And this function F will take as input the source example, and it will be trained with a GAN to generate examples that look like the target distribution. And likewise, you'll also be training a function G to take as input an example and map it to an example from the source data set.
And one of the nice things about this objective is it doesn't require you to have any paired data. You don't need to know kind of what example specifically corresponds to the other example. We're just going to be training this generative model to generate samples that look like they came from the target set. That said, if you only do this objective, you'll run into a bit of a problem, which is that it won't necessarily learn to map in a way that's somewhat consistent between the different domains. In particular, if F maps from here to here, there's nothing that's stopping G from mapping from here to over here to mapping to something completely different. And so there's one additional objective that we can incorporate into this approach that tries to actually optimize for the consistency of these two kind of domain translators. And in particular, what we're going to try to do is we can take a data point from our source data set, map it from F, and then map it back with G and basically try to encourage the distance between the original data point and the data point after going through this cycle to be very small. And so we're basically going to kind of minimize this distance right here. And so this is trying to address the fact that the mapping is under constraint, and it can be kind of an arbitrary mapping. And we're basically going to be encouraging the models to learn this kind of consistent bijective mapping by training them to be cyclically consistent such that if you map from one domain and back, it gives you a data point that's very similar to the original data point. And likewise, if you go from target to source and back to target, you want to get back the same data point as before. And so the way that you can implement this is basically just with a kind of a standard L1 or L2 objective, where you sample a data point from your source data set, map it to target, and then map it back to source, and then compare that example after the cycle to the original example and encourage them to be similar to one another. And so that's how we get this loss function right here. And so then the full objective for this approach will be we're going to be training F and G. F will be kind of trained to generate examples that look like target examples. G will be trained to generate examples that look like the source, and then we'll have this additional regularizer that encourages cycle consistency, which says that when you kind of form a cycle, you should get back to where you came from. Any questions on how this works? So if you take this approach and apply it to data sets from different domains-- and so for example, if you take a data set of Monet photos or Monet paintings and a data set of photos, you can get something that maps from-- basically can take a kind of a painting from Monet and translate it into a photo and likewise take a photo and translate it into something that looks like a painting from Monet. Likewise, you can take a data set of pictures from the summer and translate it to something that looks more like winter and the reverse of that. There's also something more abstract like edges to shoes and shoes to edges or translate between zebras and horses. One thing that's different about this approach compared to the other approach, the kind of domain-adversarial training is that here we actually don't even need any labeled examples from our source data set in order to train for these mappings. This is kind of a purely unsupervised approach for mapping between two domains. 
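As a rough sketch of how these pieces fit together, here is what the translators' objective could look like in Python. This is illustrative only, not the exact losses from the original paper: F and G are the two translators, D_s and D_t are made-up discriminators for the source and target domains, lam_cyc is the cycle-consistency weight, and the separate updates for the discriminators themselves are omitted.

```python
# Rough sketch of a CycleGAN-style translation objective (illustrative only).
# F translates source -> target, G translates target -> source;
# D_s and D_t are discriminators that score how "real" an example looks in each domain.
import torch
import torch.nn.functional as F_nn

def translator_loss(F, G, D_s, D_t, x_s, x_t, lam_cyc=10.0):
    fake_t = F(x_s)     # source example pushed into the target domain
    fake_s = G(x_t)     # target example pushed into the source domain

    # Adversarial terms: translated samples should be scored as real by the discriminators.
    logits_t, logits_s = D_t(fake_t), D_s(fake_s)
    adv = (F_nn.binary_cross_entropy_with_logits(logits_t, torch.ones_like(logits_t))
           + F_nn.binary_cross_entropy_with_logits(logits_s, torch.ones_like(logits_s)))

    # Cycle-consistency terms: going source -> target -> source (and the reverse)
    # should land you back close to where you started.
    cyc = (G(fake_t) - x_s).abs().mean() + (F(fake_s) - x_t).abs().mean()

    return adv + lam_cyc * cyc
```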
So the original paper for this didn't actually use it for domain adaptation. They just used it to kind of generate pictures like this. But you can actually use it for domain adaptation. So one place where it was used was it was used to translate between simulated robots and real robots. And so the simulation is kind of shown on the left. The real image is shown on the right and it's able to basically kind of generate real-looking images from simulated images and vice versa. And it turns out that if you basically train with reinforcement learning in the simulator and actually evaluate that policy on the real robot, you can get a success rate-- a grasp success rate that's much higher than if you only use some data and if you kind of used something called domain randomization, where you try to randomize the simulator as much as possible but don't actually try to use real data to kind of translate between simulation and real. You can also use these to kind of translate between humans and robots. So this is the approach that wanted to use data from humans in order to improve a robot policy. And so the top row here are real images, and the bottom row are images generated by the generative model. And once you kind of generate it in the robot domain, it's a lot easier to kind of use that data directly. Yeah,? I still have question about CycleGAN. How do you ensure that something from the source domain doesn't map arbitrarily to something else in the target domain? So for example, you're first domain is, say, synthetic dogs and foxes. And the target domain is real dogs and foxes. What's to stop the model from mapping synthetic dogs with real foxes and vice versa? Yeah. So that's a great question. So one thing it could do is it could map-- maybe it does kind of obey the cycle consistency because that's what we trained it for. And so if, for example-- maybe it does actually map from here to here. And it also maps like from here to here. But maybe this is kind of real dogs, and this is real foxes, I think, you said. And this is what? Synthetic foxes. And this is synthetic dogs. And so first, it is possible for it to learn this. There are two things that can encourage it not to learn this. The first is that oftentimes, when people design architectures for this, they'll encourage the architectures to only change the local fea-- [CLEARS THROAT] the local features of the image. And if you encourage it to only change the local features, then it's somewhat hard to create a dog out of a fox, like maybe the ears look different or something like that. And the second thing is that if you have a data set that has maybe it's like 80% real dogs and 20% real foxes and 80% synthetic dogs and 20% synthetic foxes, then these sorts of data set statistics will encourage it to actually get the right mapping because it will be pretty difficult-- basically, if you need to generate things that look like the target distribution and you are mapping 80% of your dogs to 80% foxes, then that won't actually match the target distribution. So having these sorts of statistics and the kind of frequency of objects in your data set be consistent between the source and target can really help the mapping. But if you do have 80% synthetic foxes and 80% real dogs, then it may actually learn a mapping between foxes and dogs. And so this is kind of getting back to the assumption that we made with the previous approach, which is that the two domains do need to have a similar distribution in some sense. 
So then most data sets are choices and design choices more than the mathematical-- Yeah, exactly. And so the-- when I worked with these kinds of methods before, typically, we actually are trying to-- when we tune the method, we actually tune the data set more than the method. Cool. So the kind of pros of this approach is conceptually pretty cool, although maybe that's not a reason to use it. And it can actually work pretty well. It's also quite interpretable. This means that it gives you cool pictures, but it also means that it can be easier to debug because if you actually generate, like run F with your model and you see it's mapping dogs to foxes, then you know what's going wrong. Whereas if you just have this feature space that's a little bit difficult to interpret because it's very high dimensional, then it can be a little bit difficult to understand what's going wrong with your approach. The downside is that it does involve an adversarial optimization just like before, and it also involves generative modeling now as well. And those can require larger models. And like feature alignment, it does require this sort of clear alignment in the distributions in order to work well. Cool. And then the last thing I'd like to mention is you can actually combine the two approaches that we just talked about, the CycleGAN approach and domain-adversarial neural networks. There's an approach that basically incorporates both of these into a single approach. And on things like the character recognition task we looked at, you're able to do much better. So domain-adversarial neural networks alone get 73% accuracy, whereas this approach gets a 90% accuracy when translating from Street View house numbers to MNIST. And it could also work on more complex data. So this is something where they're trying to translate a classifier trained on synthetic driving data to real driving data from the Cityscapes data set. And they were able to much more accurately segment objects in the scene in comparison to prior domain adaptation approaches. So that's it on domain adaptation. Those are really the kind of three classes of domain adaptation methods that have been most successful and most popular.
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_20_Spectral_clustering.txt
OK. I guess let's get started. This is the last lecture of this course. I guess we're going to continue with the spectral approach for clustering. So I'll provide some review of the last lecture. So last lecture, I think we did the stochastic block model, and one of the main findings is that if you do eigendecomposition -- so our goal was to do eigendecomposition on the graph G from the stochastic block model. And we have shown that if you do eigendecomposition on the average graph G, the expectation of G, then it does give you the hidden community S and S bar, right? I think last time, we showed that the second eigenvector is something called u, which will look like 1 1 1 and minus 1 minus 1. And this is S and this is S bar. So basically, if you just take the second eigenvector of the expected graph G, then you get the hidden community. And we have argued that it suffices to show that the graph G and the expected graph, expectation of G, are close in operator norm. And this is because if you consider this equation, right, so you subtract the first eigen component from G, then what you get is that G minus the first eigen component is equal to this perturbation matrix plus the contribution of the second eigenvector. And if you take the eigendecomposition of this matrix, which is something you can compute easily, and you take the top eigenvector of the left hand side of this equation, then you expect to find something close to u, as long as G minus expectation of G is something small. Now how small is it? I didn't really formally do this, but essentially, you need this perturbation to be much smaller than the signal, right? So you need the perturbation in operator norm to be much smaller than the rank 1 signal in operator norm. And you can compute the operator norm of the rank 1 signal very easily, which is something like p minus q over 2 times n. So basically, we are trying to show a concentration, right? This is a concentration inequality, because you are trying to prove that G concentrates around the expectation of G in this operator norm sense. I'd love to show this proof. It's a little technical, but the proof is not very long, and it also relates back to what we discussed in lecture 3 or 4, where I guess you probably remember that I said that concentration inequalities are probably one of the most important things for this course, because if you had to pick one technical tool in statistical machine learning, I think it's probably concentration inequalities, in my own opinion. So it's probably useful to just review why the concentration inequality can help us to do something like this. So I'll give a proof for this. So the proof looks like the following. We're going to prove -- so our lemma is -- that with high probability, G minus expectation of G in operator norm is less than square root n log n, up to a constant factor. And the first thing is that this is not exactly the type of concentration inequality we have talked about before, because before, we were talking about scalars, right? So we were saying that if you have some random variables, some empirical samples, then the empirical average concentrates around the population average. So here it's a little bit different, because G is a matrix and the expectation of G is also a matrix. So you are doing some kind of matrix concentration to some extent.
And your measure of the similarity is not just the absolute value of the difference, but something like the operator norm of the difference of the matrices. However, you can actually turn this into something that we are familiar with very easily. So what you do is the following. This is still uniform convergence, as you will see. That's the main idea. And why is this the case? This is because you can easily interpret operator norms as follows. So G minus expectation of G, in operator norm, this is equal to the max over v. Let me write it down and explain. This is just because of the operator norm of a symmetric matrix. I think the definition is that if you have a symmetric matrix A, then the operator norm -- I guess there's an absolute value here -- the operator norm of matrix A is exactly equal to the maximum quadratic form that you can achieve by hitting it with a norm 1 vector. And once you do this, you see that this becomes a scalar now, because this quantity is a scalar. And you can decompose this into the max over v with 2-norm 1, and then you get v transpose G v minus v transpose expectation of G v. And this is a sum. So what is this? Maybe let me write it down more explicitly. This is the max over v of the sum over i and j of vi vj Gij, with both i and j from 1 to n, minus the expectation of this random variable. And now this becomes a sum of independent random variables, and this becomes the expectation of this sum of independent random variables. So now you can use concentration. If you don't have the max, you can use a concentration inequality directly. This is exactly what Hoeffding's inequality is for. And how do you deal with the max? The max -- this will be the part about uniform convergence. Recall that the whole point of uniform convergence is that if you fix the parameter -- suppose you think of v as the parameter -- then you can use Hoeffding's inequality to prove the concentration, to prove that the empirical is not very far away from the population. And the challenge of uniform convergence is about how you take the max, and here you still have a max. So I guess there are multiple ways to deal with this concentration. Of course, the easiest way is probably to just invoke some existing theorem. There are some theorems in the literature as well. But if you want to do it yourself, I guess there are two ways. So one way is that you can use the Rademacher complexity machinery. I guess it was probably a while back -- we discussed this probably five weeks ago. And I think one of the techniques is that you do symmetrization. So far, this is not in a symmetrized form, and you introduce some Rademacher variables and symmetrize it, and then you can proceed with the rest of the machinery. You can essentially view this as the Rademacher complexity of some function class. And with that, I think that's actually a pretty clean and nice way. I'm going to leave this -- if you're interested, you can do it yourself. I believe it's not very difficult. What I will show here is an even more brute-force method, which actually uses the first technique we introduced in our class, the brute-force discretization. Recall that before we talked about Rademacher complexity, we said that in many cases you can actually just deal with uniform convergence for a continuous function class with a very simple discretization. So what we do here is just the following: fix a v with 2-norm 1.
We can use Hoeffding's inequality. So what you get is that with probability at most exponential of minus epsilon squared over 2 -- I'm not expecting you to check it on the fly, but you can basically just plug in Hoeffding's inequality without any modification -- the sum of vi vj Gij deviates from its expectation by more than epsilon. So the probability that it deviates from the expectation is at most exponential of minus epsilon squared over 2. And then you take epsilon to be something like O of square root n log n. So the failure probability, exponential of minus epsilon squared over 2, is something like exponential of minus O of n log n. This is a pretty small failure probability. And then you take a discretization of the unit ball with granularity something like 1 over poly n. This is what we did -- it's a long time ago, I know, but I think this is what we did in lecture 3. You use a very, very small granularity. But it doesn't really matter, because at the end of the day, the dependency on the granularity is only logarithmic. So the size of this cover is exponential in O of n log n. And then you can take a union bound over this discretized set. And because your granularity is very small, it's only an inverse poly, you only lose an inverse poly term, which is smaller than everything else in the inequalities. So then basically, eventually, you get by the union bound that with high probability this is less than epsilon, which is chosen to be square root n log n. I'm skipping a lot of details, because I think today we don't have a lot of time to complete all the material, so I'm making it a little brief. But I think you kind of get the rough point. It would take too much time to work out the details. And I kind of like this method 2. If I were to state my preference between these methods, sometimes I like method 2 because you can do this very quickly yourself and you know exactly where the dependency comes from. If you do the Rademacher complexity, it will be much cleaner -- you will get better constants, you'll get cleaner proofs -- but sometimes it's a little bit less transparent, because you have to go through this whole machinery. And why is this useful? This is useful because now we've got this lemma, right? So the lemma is that G and the expectation of G only differ by something on the order of square root n log n in operator norm. And you can compare that with the signal. So now compare the noise level, which is O of square root n log n, versus the signal level, which is p minus q over 2 times n. So then this means that if p minus q is much bigger than 1 over square root of n, then I recover the vector u approximately. So we can see that you only need p and q to have some separation, but not a lot of separation. And the separation depends on the size of the graph, which also makes some sense, because the more vertices you see, the clearer the structure is in some sense. Suppose you just see two users -- everything looks kind of random and you couldn't tell which one is from which community. But if you see a million users, you can use a lot of different users to cross-validate in some sense [INAUDIBLE] the two communities. All right. So I guess this concludes the stochastic block model part. I guess there are some other small remarks which are not super important. So you can also actually recover the exact community by some post-processing.
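Before moving on, here is a small simulation sketch of the recovery we just analyzed, with made-up parameter values: sample a two-community stochastic block model, take the second eigenvector of the adjacency matrix, and read the communities off its signs.

```python
# Small simulation sketch of spectral recovery in the stochastic block model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 1000, 0.10, 0.05                        # p - q is well above 1 / sqrt(n)
z = np.array([1] * (n // 2) + [-1] * (n // 2))    # hidden communities S and S bar

# Edge (i, j) appears with probability p within a community and q across communities.
probs = np.where(np.outer(z, z) > 0, p, q)
upper = np.triu(rng.random((n, n)) < probs, k=1)
G = (upper | upper.T).astype(float)               # symmetric 0/1 adjacency, no self-loops

vals, vecs = np.linalg.eigh(G)                    # eigenvalues in increasing order
u2 = vecs[:, -2]                                  # eigenvector of the second-largest eigenvalue

pred = np.sign(u2)
acc = max(np.mean(pred == z), np.mean(pred == -z))  # the sign of u2 is arbitrary
print(f"fraction of nodes assigned to the correct community: {acc:.3f}")
```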
So here, what I showed is that you only can recover the vector u approximately, but actually you can post-process to get the exact community and their setting conditions. I think under the conditions that I'm giving here, you can do it. And actually because this is a very precise mathematical structure here-- so there are a lot of works in the literature on this. And you can actually get even the exact constant here. So here I am writing p minus q is larger than 1 over square root n. So it's definitely very loose. You can get the precise dependencies that you need to recover and you can have the precise threshold. Below that threshold you cannot recover anything, above that threshold you can recover something, and above another threshold you can recover exactly. So all of these are in the literature if you are interested. And you can extend this to multiple blocks and so forth. OK. So this concludes with the stochastic block model. And now I'm going to move on to another kind of, in my opinion, pretty important literature, which is about clustering the worst-case graph. And still the thing is that if you do eigendecomposition, you are going to recover some approximate structures in the graph. So we are still going to use eigendecomposition, but the analysis will be different because here we don't have the stochasticity from the graph. And because you have a worst-case graph, you have to also somehow define what you mean by the hidden community, right, because before, in the stochastic graph, you start with community and you generate a graph. And now you are just scaling the graph. The graph is just some aggregates. You have to say what you are trying to recover. So let's start with that, what's our goal? So this requires us to offer definitions. So let's say given a graph G and the vertices is called E and edges is called E-- sorry, vertices is called V and edges is called E. So let's define this so-called conductance. This is actually a pretty important notion which shows up in many different areas of math. So of course it's a different form. So here it's a vertical of a graph and edges. In other cases, you can define conductance in high-dimensional space as well, which are essentially the same definition, but it could look a little bit different. So the conductance for graph-- so suppos you have a cut, let's call it S and S bar. You cut the graph into two parts, S and S bar. And the conductance of S is defined to be the following. So you have the number of edges, which are S and S bar, over the volume of S. Let's define both of this more. Clearly so E S S bar, this is the total number of edges from S to S bar. But this is an undirected graph. Maybe I should call it between S and S bar to be precise. Mathematically, this is really the sum of i over iSj and S bar Gij. If I use Gij as adjacency matrix-- I'm overusing the allocation a little bit. Both are given on the graph, and also this is matrix of the graph. And the volume of S, this is the total number of edges connecting to S. Which means that you look at how many edges satisfies that one endpoint is in S. So i needs to be in S and j can be anything. And you have Gij. So if you draw a graph, something like this-- suppose you draw a graph, like this and this and this, and you define this cut-- suppose this is S, then what is ESS bar. So ESS bar will be counting these two right edges because this is from S to S bar. And the volume of that will be counting all the edges connected to S, which means basically all the edges drawn here. 
All the green edges are counted. And you can see that by the definition, it's true that the volume of S is always-- so what this definition is for. This is trying to characterize how-- I guess the word conductance in the case it's kind of trying to characterize how good the cut is in some sense, like how separated S and S bar are. The smaller it is, the more separated S and S bar is. But you do have to normalize by the volume. So in some sense, the number of edges between S and S bar is already capturing how separate S and S bar are, but you normalize with the volume to make it more meaningful. I guess that's what I'm going to argue in the next. So I guess before that, let me just get some basic information. So the volume of S is bigger than the number of edges between S and S bar. That's trivial. So this means that the conductance is always less than 1. And you are trying to make the conductance as small as possible. And another thing is that the volume of S plus the volume of S bar is equal to the volume of V. This is a total of edges. So this means that if the volume of S is less than the volume of V over 2, then the volume of S is also less than the volume of S bar, and this means that the conductance of S is bigger than the conductance of S bar. So you should have a definition that somehow doesn't depend on how you name S and S bar. S and S bar is symmetric, but here the conductance of S and S bar are different, right? So that's how to remove this confusion between a symmetry, you just insist that you're always talking about-- so we will insist that we always only talk about S such that the conductance of S-- sorry, the volume of S is less than the volume of v over 2. So you're only taking a smaller part of S and use that to define the conductance of the cut. Why don't we just define conductance so that normalizes phi volume of V? Yes. So if you normalize by volume of V, first of all, the problem is that it means that you need to normalize because V is a constant. You have to normalize against something. I'm going to tell you why you have to normalize, but if you want to normalize you have to normalize something that changes as S changes. So here I'm only trying to deal with the symmetry so far. You only need kind of conductance on the smaller set. This is not that much because you don't want to cheat by saying I have a very, very large set and I only have one point in S bar. And it sounds like my conductance is very small, but actually it should measure the other side. Maybe before proceeding, answer the question why we have to normalize. So we can also define the v of G. This is the conductance of-- this is the so-called sparsest cut of G. A sparsest cut variable of G is defined to be the minimum possible conductance. But again, you require that S is the smaller side of the two cuts. So you minimize over the conductance. So first, you minimize the conductance first with the constraint that the volume of S is less than the volume of V. So basically, you just want to find a cut that has smallest conductance. Now let's talk about normalization, so why we have to normalize. I think the reason is pretty much just because if you don't normalize, then if you just minimize-- if you just look at ESS bar, it's typically minimized when S is small. So suppose you draw a graph, for example, I guess-- if you don't normalize, basically you prefer to pick a set S that itself is very small so that it doesn't connect to the other part. So for example, let's see. Suppose you have a graph like this. 
What I'm doing here is I have a-- suppose you have a completely connected subgraph. So do you have n over 2 nodes, n over 2 nodes. And within each of the subgraph, you have complete connection with each other. And then you have some very small number of connections between them, maybe every node is connected to all of them like this. OK. So it sounds pretty clear that you should just really-- the best cut you should get is this bar graph because within the cluster, you have full connection and across the two clusters, we have some number of-- let's say two edges per node. So it sounds pretty clear we should do this. But if you use the matrix ESS bar, then you see that some other cuts will have smaller number of edges across the thing because you can just take this to be S1 because S1 just consists one node. So then E of S1, ES1, S1 bar is basically how many edges comes from S1 to S1 bar, basically the number of edges connected to S1. This is n over 2. Let's say the good cut is S2. So ES2, S2 bar is definitely something bigger than n over 2 because you have n over 2 probably times the number of blue edges, something like two here. I'm joining basically two edges per node. So basically, it sounds like you should get S2, but if you use the unnormalized version, you would get S1. However, if you normalize, then it's a different game. So if you normalize, if you look at the conductance of S1, then this is E of S1 S1 bar over the volume of S1. This is n over 2 times n over n over 2. I think the volume on S minus n over 2. So this is 1. So if you look at phi of S2, then this is n over 2 times 2, something like this. And then you have the total number of edges connected with 2. That's actually a big number. That's probably something like n over 2 times n over 2 minus 1. This is the number of edges within the S2, and there are some edges between S2 and S2 bar, something like this. And this would be something like roughly I think 2 over n. So the conductance of S2 is much smaller than conductance of S1 if you normalize. Questions so far? OK, cool. So now we have to kind of define the goal. Because you have a worst-case graph your goal is to-- so we have said that the goal is to find approximate sparsest cut, S hat, meaning that you want S hat to satisfy that the phi of S hat is close to the sparsest possible cut, phi of G. And the approach we're going to describe is still eigendecomposition. So how do I do this? There's [AUDIO OUT] to even state what we mean exactly by eigendecomposition and what kind of results we can have. So first of all, let's di to be the volume of the known i. You take a single node, you take the volume, this di. And this is really just the degree of node i, right? The volume of the node is really the degree of the node. And lets take d to be the diagonal matrix that contains di of x entry. And let's define this. So a normalized adjacency matrix is called A bar, which is D minus 1/2, G times minus 1/2, where G is the adjacency matrix. Recall this is our notation, with a little bit of notation. So what does this really mean? This really just means that 1 over square root d1 up to 1 over square dn times G times 1 over square root d1 up to 1 over square root dn. And a diagonal matrix multiplied on the left means that you will scale all of the rows and the diagonal matrix at the right hand side multiplication means you'll scale all the columns. So basically you'll scale the columns and rows simultaneously with these numbers. 
If you do the [INAUDIBLE] what it really means is that the Aij, the ij of the normalized adjacency matrix is really just the adjacency matrix over square root di times square root dj. So this sounds a little complicated, but in most of the cases-- I'm only just stating this mostly for formality because sometimes sometimes the key thing can be seen by assuming the graph is regular. So in most cases, suffice it to think of G as a regular graph. A regular graph means that all the degrees are the same. So let's say suppose that G is a kappa regular graph, meaning di is equal to kappa for every i, then its adjacency matrix is really just a 1 over kappa-- normalized adjacency matrix is just 1 over kappa times Gij. So in some sense, we really didn't do much except for just changing the scaling of this. But this scaling is kind of important in the formal sense because it can make them formally very clean. But it's not fundamentally super important. So this is pretty much-- if you don't want to think about the di and djs, you pretty much can think of this simple case where you have a regular graph. And once we define a normalized adjacency matrix, you can also define the so-called Laplacian matrix, which is i minus the normalized adjacency matrix. I think you'll probably see that one of the reason why we have to normalize is that if you don't normalize, it doesn't makes sense to subtract, take the differences between it and an identity. Identity is something that doesn't have a scale. So you have to normalize it so that you can kind of take the dif with identity. And this Laplacian matrix is really not doing that much. It's not that different from normalized adjacency matrix anyway because they are-- pretty much everything corresponds to each other. So the eigenvector of L is the same as the eigenvector on A bar. And the spectrums are just flipped with each other. So let's say suppose L has eigenvalue lambda 1 up to lambda n, let's say suppose-- I think in this literature, you always want to order them. And then with the eigenvector u1 up to un. Then this means that this is equivalent to A bar as eigenvalue 1 minus lambda 1 up to 1 minus lambda n. Now I'm searching in a decreasing order and with the same eigenvectors. So you don't even have to think about the Laplacian. The Laplacian will come into play at some later places, but so far you can just think of Laplacian is a flipped version of normalized adjacency matrix. Nothing really different. So these are some little bit abstract preparations. And now let's see what we can do with this. So this is the in my opinion, pretty important theorem. It's called Cheegers inequality. Actually, this dates back to 1969 by Jeff Cheeger. So it says the following. It says that lambda 2, this is the second eigenvalue, over 2 is less than the conductance of G, which is less than square root 2 lambda 2. So why this is a very important thing, it connects the conductance, the sparsest cut to something linear algebra to the eigenvectors. So the sparsest cut is a very combinatorial stuff where if you really want to find the sparsest cut, you'll probably want to enumerate all the possible cuts and vice versa. At least the definition is a combinatorial thing. But this inequality is saying that somehow, the sparsest cut value has a lot to do with the eigenvalues of the Laplacian or the adjacency matrix. And in particular, it's very close to the second eigenvalue of the Laplacian matrix. 
And moreover, you can find an approximate cut S hat -- such that the conductance of this cut S hat is less than square root 2 lambda 2, which is less than 2 times square root phi of G -- computationally efficiently. And not only computationally efficiently, but actually pretty explicitly. What you can do is the following: round the eigenvector. Rounding here is in the sense of approximation algorithms; if you don't know where the term comes from, it doesn't matter. So here is the procedure to find such a set S hat. You take u2, the second eigenvector, and suppose its coordinates are beta 1 up to beta n. You can take a threshold tau, equal to one of the beta i's, and consider S hat to be all the coordinates j such that beta j is less than tau. You take a threshold, but you don't have to consider all possible thresholds; it suffices that the threshold is chosen from one of the coordinates. So you take a threshold tau, you look at all the coordinates that are smaller than the threshold, and that's your S hat. So you have basically all of these sets, S1 hat, S2 hat, S3 hat, and so forth. And one of these S hats satisfies that phi of S hat i is less than 2 times square root of phi of G. So one of these sets will be a good cut. I guess I'm stating this in a formal way, so it seems a little bit confusing. What you really are doing is the following. In plain language, or in more informal language, you first sort the coordinates, so that you get beta 1 less than beta 2, up to less than beta n. And then S hat i will be the first i coordinates, and one of these S hats will be a good cut. So you can try one cut, which is just beta 1; you can try another cut, which is beta 1, beta 2; and you can try another cut, which is beta 1, beta 2, up to beta i. And one of these cuts will be a good cut of the graph with a small conductance. And of course, you have to remap the coordinates back to the original coordinate system, because you have sorted the coordinates. But this is the idea. Any questions? Another way to think about it is that in the stochastic block model case, the second eigenvector was something like this. And pretty much in that case, if you take a threshold, the smaller values correspond to one cut and the larger values correspond to another cut. But here you don't know where the exact threshold should be, so you should try all the thresholds, beta 1 up to beta n, all of them. OK, cool. So this is a pretty magical theorem, in my opinion. I'm not going to prove it. If you are interested, I think there are a lot of lecture notes that prove this. I guess what I'm going to do is I'm going to-- [INAUDIBLE] is it exactly one of these S hat i that satisfies that, or at least one? At least one. And you can enumerate all of them. Just try all of them and see which one is better than the others. So the proof is pretty nontrivial. It's not very long, but it's kind of nontrivial. So I'm going to skip the proof, and I'm going to link-- I see some questions here. So the question online here is that the S hat found this way isn't necessarily the best possible cut, right? Yes. So you are not guaranteed to find the best possible cut.
I see a question online about whether the S hat found this way is the best possible cut. Right—you are not guaranteed to find the best possible cut. You're only guaranteed to find a cut whose conductance phi(S hat_i) is at most 2 times the square root of phi(G). If you could magically replace that bound by phi(G) itself, then you would have found a best cut, because phi(G) is by definition the value of the best cut (there may be multiple best cuts, but you would find one of them). We don't have that strong a theorem; we only show 2 times square root of phi(G), so you lose something—note that square root of phi(G) is bigger than phi(G), by the way, because phi(G) is less than 1. So you lose some factor relative to the best possible conductance. I hope that answers the question. In retrospect, you kind of have to lose a little bit: one of these quantities, the sparsest cut, is very combinatorial, and the other is very linear-algebraic, so it sounds unlikely that they could be exactly equal. It's already somewhat fortunate, in my opinion, that they're related at all. I'll discuss some of the intuition for why this can be true, but I won't give the full proof.

Someone asked: the statement says we can find an S hat with conductance at most square root of 2 lambda_2—so the bound of 2 times square root of phi(G) just follows by the transitive property from the other side of the inequality, right? And do we only care about the comparison to phi(G), with square root of 2 lambda_2 just an intermediate quantity? Yes on both counts: the final bound is obtained by chaining the two inequalities, and ultimately the thing you care about is the comparison with phi(G); the intermediate quantity is just intermediate. But if you look at the proof, the eigenvalues do have to show up somewhere. As for whether the bound could be much better in cases where lambda_2 over 2 is small—it's possible in particular instances, but we don't really know how to do better in general. It's hard: there are hard instances on both sides, where phi(G) is close to lambda_2 over 2, and where it is very close to square root of 2 lambda_2.

Cool. So let me focus on some intuition for why this holds. The first thing I want to discuss is, again, about the scaling to some extent—and about why we take the second eigenvector rather than the first. That always seemed a little magical to me at first sight. After spending some time with it, I realized the top eigenvector—as we said last time—is kind of a background. The top eigenvector of A bar, equivalently the smallest eigenvector of L, is not that interesting: it's pretty much only capturing what I'd call the background density of the graph.
What I really mean is this. Suppose G is kappa-regular—I think we actually stated this in a previous lecture. Then the all-ones vector is the top eigenvector of the adjacency matrix G, and hence also the top eigenvector of A bar, which is just 1 over kappa times G. So when G is regular, the top eigenvector is just the all-ones vector, and in the general case it only additionally involves a scaling based on density: for general G, the top eigenvector is the vector (square root d_1, ..., square root d_n). The scale doesn't matter here—any scalar multiple of an eigenvector is still an eigenvector—so I won't worry about the normalization. This is the top eigenvector of A bar, equivalently the smallest eigenvector of the Laplacian. You can verify it fairly easily: take A bar times u_1, which is a matrix-vector multiplication, and look at the i-th coordinate. That's the sum over j of A bar_ij times (u_1)_j, which is the sum over j of G_ij over (square root d_i times square root d_j) times square root d_j. The square root d_j's cancel, and you get 1 over square root d_i times the sum over j of G_ij. That sum is precisely the definition of the degree—the total number of edges connected to vertex i—so you get 1 over square root d_i times d_i, which is square root d_i. That verifies that u_1 is an eigenvector, with A bar u_1 equal to u_1. So, as before, the top eigenvector isn't doing much: it's really just capturing the degrees of the graph. It's the second eigenvector that starts to talk about the interconnections; it says more about the relationships between edges and hidden communities.

Now let's look at some intuition for why this eigenvector is related to the cut. Here is another way to think about it: look at the quadratic form of the Laplacian, v transpose L v. This is v transpose I v minus v transpose A bar v, which is the sum over i of v_i squared minus the sum over i, j of v_i v_j A bar_ij—that is, the sum of v_i squared minus the sum over i, j of v_i v_j G_ij over (square root d_i times square root d_j). Now G_ij is 1 exactly when (i, j) is an edge, and each edge appears as both ij and ji in the double sum, which is where a factor of 2 comes from: you get the sum of v_i squared minus 2 times the sum over edges (i, j) of v_i v_j over (square root d_i square root d_j). And I claim this equals the sum over edges (i, j) of (v_i over square root d_i minus v_j over square root d_j) squared. Why is that true? Expand each square into its terms: the cross terms match the term we just wrote, so the only thing to check is that the square terms add up to the sum of v_i squared. You can verify that by looking at the sum over edges (i, j) of v_i squared over d_i: if you sum over j first and then over i, the number of edges connected to i is d_i, so you get v_i squared over d_i times d_i, which is v_i squared, summed over i. OK, sounds good—hmm, am I somehow missing a constant somewhere? Did I miss a constant somewhere?
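(Writing the expansion out in one display shows that nothing is in fact missing: the factor of 2 produced by expanding each square matches the factor of 2 from counting each edge in both orders.)

```latex
v^\top L v \;=\; \sum_i v_i^2 \;-\; 2\!\!\sum_{\{i,j\}\in E}\!\frac{v_i v_j}{\sqrt{d_i}\sqrt{d_j}}
\;=\; \sum_{\{i,j\}\in E}\Big(\frac{v_i}{\sqrt{d_i}}-\frac{v_j}{\sqrt{d_j}}\Big)^{\!2},
\quad\text{since}\quad
\sum_{\{i,j\}\in E}\Big(\frac{v_i^2}{d_i}+\frac{v_j^2}{d_j}\Big) \;=\; \sum_i \frac{v_i^2}{d_i}\cdot d_i \;=\; \sum_i v_i^2 .
```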
So the constant actually works out exactly—nothing is missing—and in any case you get the idea. If G is a regular graph, say kappa-regular, you can ignore the d_i's: v transpose L v is just 1 over kappa times the sum over edges (i, j) of (v_i minus v_j) squared.

OK. Why did I do all this work to derive this equation? Because this equation is important: it's how the algebraic quantity links to the conductance. The quadratic form is a linear-algebraic object. But now suppose you restrict v to be binary—a 0/1 vector—and take S to be the support of v, the indices where the entry of v is 1. Then from this formula, v transpose L v is 1 over kappa times the sum over edges of (v_i minus v_j) squared, and that square is 1 exactly when i and j are on different sides—when i is in S and j is in S bar, or i is in S bar and j is in S—and 0 otherwise. So the sum counts the number of edges between S and S bar, because only the edges that cross between the two groups contribute. In other words, v transpose L v equals 1 over kappa times the number of edges between S and S bar, where S is the support of v. So the quadratic form counts the edges across the two groups—when v is binary. If v is not binary, of course, this isn't literally true.

Now suppose additionally that the support of v has size at most n over 2, which means the volume of S is at most half the volume of V—for a regular graph, the volume is just proportional to the size of the set. Then consider the ratio v transpose L v over the norm of v squared. The numerator is 1 over kappa times the number of edges between S and S bar. The norm of v squared is just the size of S, and the size of S is the volume of S over kappa—the volume is the number of edge endpoints in S, which for a kappa-regular graph is kappa times the size of S. The kappas cancel, and you get the number of edges between S and S bar divided by the volume of S, which is exactly the conductance of S. So the conductance of S can be written in this form, v transpose L v over the norm of v squared, which is a linear-algebraic quantity—it's called the Rayleigh quotient.

The point is that the Rayleigh quotient connects to the conductance—but not exactly, because it requires v to be binary. Computing eigenvectors means minimizing the Rayleigh quotient without any constraint on v (other than orthogonality to the top eigenvector), whereas the sparsest cut means minimizing the Rayleigh quotient with the binary constraint. And what Cheeger's inequality is really saying is that the minimum with the constraint and the minimum without the constraint don't differ by that much.
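Putting that binary-vector computation together in one display (same setting as above: G is kappa-regular, v is the 0/1 indicator of S, and vol(S) is at most half of vol(V)):

```latex
v^\top L v = \frac{1}{\kappa}\,\bigl|E(S,\bar S)\bigr|, \qquad
\|v\|^2 = |S| = \frac{\mathrm{vol}(S)}{\kappa}, \qquad
\frac{v^\top L v}{\|v\|^2} = \frac{|E(S,\bar S)|}{\mathrm{vol}(S)} = \phi(S).
```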
So the proof roughly works like this: you first find the eigenvector, which has real-valued entries, and then you round it into a binary vector, and you argue that by rounding you don't lose too much of the Rayleigh quotient. That's how the proof, roughly speaking, works, and that's the intuition. All of this can be extended to weighted graphs, and the intuition is the same for graphs that are not regular as well—here I've assumed edges are just 0/1 and unweighted, but the weighted case works too.

Great. So I hope I've convinced you, through these two examples—the stochastic block model and this worst-case setting—that eigenvectors are very related to graph clustering. And this kind of algorithm has been used in practice. The material I've presented so far mostly comes from the theoretical computer science community, where it doesn't have much to do with machine learning: there, you're simply given a graph and you want to partition it into two clusters. People then brought these ideas to machine-learning problems, in the so-called spectral clustering approach. This was brought to the machine-learning community around 2000, by the papers of Shi and Malik and of Ng, Jordan, and Weiss. The way you use it is: you build a graph from your machine-learning data, and then you apply this algorithm. That raises the question of how to choose or design the graph—in TCS the graph is given to you, maybe by somebody else, but in machine learning you have to construct it yourself.

In Andrew's paper, the graph is defined roughly like this. You're given raw data, say x_1 up to x_n—these are your n data points—and you define a weighted graph G (I didn't really discuss weighted graphs, but there's a natural extension) where the weight between i and j is something like exp(minus the squared distance between x_i and x_j over 2 sigma squared). This is probably very familiar—it's just the RBF kernel, the Gaussian kernel—with sigma as a tuning parameter; there are other variants as well. So you define a graph based on the distances between your examples.

Then you do spectral clustering: you define the graph G and compute the eigenvectors of its Laplacian, or of the normalized adjacency matrix. And here it's not only two clusters—you can do multiple clusters. For k clusters, you take eigenvectors u_1, u_2, up to u_k and stack them as the columns of a matrix of dimension n by k. Then you take the rows of that matrix as the embeddings—or, in the modern word, the representations—of the examples: every example x_i is now represented by a vector v_i of dimension k, namely the i-th row.
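Here is a minimal NumPy sketch of that recipe—build the RBF-kernel graph, take the top k eigenvectors of the normalized adjacency matrix, use the rows as representations, and run k-means on them (the k-means step is the one described next). The function name and details are my own for illustration; the original papers differ in details such as whether the rows are normalized to unit length.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    """X: (n, d) array of raw data points; k: number of clusters."""
    # RBF (Gaussian) kernel graph: G_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    G = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(G, 0.0)                       # no self-loops
    d = G.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_bar = D_inv_sqrt @ G @ D_inv_sqrt            # normalized adjacency matrix
    eigvals, eigvecs = np.linalg.eigh(A_bar)       # ascending eigenvalues
    V = eigvecs[:, -k:]                            # top-k eigenvectors as columns, shape (n, k)
    # each row of V is the k-dimensional representation of one example
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(V)
    return labels, V
```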
And k corresponds to how many eigenvectors you take. So you've got these low-dimensional representations v_1 up to v_n. Then, in the original paper—Andrew's paper—you run another clustering step, k-means (I assume you've probably heard of k-means), on the representations v_1 through v_n to cluster them again. That's the so-called spectral clustering algorithm. Later, around 2013–2014, there were a few papers analyzing this and showing that you can actually get reasonable representations and clusters with this approach. Any questions?

So what's the issue with this? The issue is that the graph G may not be very meaningful. In high dimensions, all the data points—all the training data points, to be precise—are very far away from each other, and the Euclidean distance between them becomes pretty much meaningless. In particular, the Euclidean distance between a cat and a dog versus the Euclidean distance between a dog and another dog—you probably wouldn't see much difference, because two random dogs can still have a very large Euclidean distance. That's sometimes the problem with this approach: the graph itself is not meaningful, and if the graph isn't very useful, then even finding its sparsest cut isn't that useful to you. That's why the theory and analysis for the spectral clustering algorithm don't really deliver that much: they never consider how the graph was generated. The theory says that if you're given a good graph, you can find its sparsest cut with this approach, but it says nothing about how the graph is generated.

For the last 15 minutes, I'm going to briefly discuss one recent piece of work from my group, where we try to reuse this classic idea but in a different way. This is the paper by HaoChen et al. What we do is consider an infinite graph G = (V, w), where V is the vertex set, w gives the weights on the edges, and we take V to be all possible inputs—all possible data points. So this graph is defined on the population: the space of all possible, let's say, images, with each image corresponding to a vertex. Before, the graph had size little n—it was an n by n matrix. Now the graph is much bigger: its size is the cardinality of the set of all possible data points, which could be infinite, or at least exponential—the number of possible images could be exponentially large. So, to start with, we have an exponential-size graph. On this graph, you define the weight w(x, x') between two vertices, and let's say we choose it to be large only when x and x' are close—close in L2 distance. So I'm still using L2 distance.
I'm not specifying exactly what the definition is here, because doing it properly takes more than the ten minutes I have left. But roughly, you can think of it as almost the same as the previous definition of the graph, restricted to pairs x and x' that are very close. That's the point: before, you had to choose the sigma very carefully, because all the points are very far away from each other. Now I'm saying I don't care about points that are far apart—I only care about pairs of points that are very close to each other. So if you have two random dogs, you say they are not connected; but if you have one dog and a small perturbation of that same dog, those two are connected. The graph then becomes more meaningful, because you only connect very nearby cats, dogs, or images.

So the pro is that the graph is more meaningful. The cons are that it's infinite-dimensional—or at least exponential-dimensional—and that you don't actually have this graph, because you don't know all the possible data points; you only have some sampled data points. Another con is that even the eigenvector itself is infinite-dimensional, because the dimension of the eigenvector is the same as the number of vertices in the graph.

Here is how we fix these cons: we use neural-network ideas. (Actually, the real research went in the reverse direction—we were trying to explain contrastive learning—but in this context, you can think of it as using a parameterized neural network to deal with these cons.) Suppose you have an eigenvector u. It's a very high-dimensional vector: its entries u_x are indexed by all the possible data points x in the space capital X, so it lives in something like R to the capital N, or R to the infinity, depending on how many vertices there are. You don't even have space to store it—not even a single such vector. So instead, you represent the entry u_x by a neural network applied to the raw data point x: u_x = f_theta(x), where f_theta is a parameterized model. If you do this, then you can describe the eigenvector by theta alone—you don't have to specify all capital N numbers, only the parameters theta. Of course, for this to make sense you need f_theta to be powerful enough to express such eigenvectors; a priori that isn't obvious, so you have to make some assumption that neural networks can represent these kinds of eigenvectors. But under that assumption, you can at least represent the eigenvectors by theta. And now the question changes to: you want to find theta such that this very high-dimensional vector, (f_theta(x)) over all x, is an eigenvector of the graph G.
So at least now you're trying to find a low-dimensional parameter theta, not a high-dimensional vector. And it turns out that if you do this, there's an algorithm you can use to achieve it. How do I find the eigenvectors of the Laplacian (or of A bar) this way? Suppose for a moment that I have access to the whole graph—which I don't, but suppose I do. Then what I can do is minimize, over matrices F, the squared Frobenius norm of (A bar minus F F transpose). First of all, I claim that the minimizer of this gives the top eigenvectors of A bar. This is something I probably won't have time to explain in full, but it's a standard fact about low-rank approximation: if you want to fit a low-rank matrix F F transpose to the matrix A bar, the best fit uses the top k eigenvectors of A bar—you can invoke a theorem (essentially the Eckart–Young low-rank approximation theorem) to show this. So the minimizer F will be, up to scaling, the top eigenvectors.

Then, if you use this objective, you can replace the capital F—which is non-parametric, a very big matrix—by its parameterized version: write F as the matrix whose rows are f_theta(x_1) transpose up to f_theta(x_N) transpose. So every row is now the network applied to the raw data point. Writing the Frobenius norm out entrywise, the objective is the sum over i, j of (A bar_ij minus (F F transpose)_ij) squared, and the ij-th entry of F F transpose is the inner product of the i-th row and the j-th row, which is f_theta(x_i) transpose f_theta(x_j). So the objective becomes the sum over i, j of (A bar_ij minus f_theta(x_i) transpose f_theta(x_j)) squared, and instead of minimizing over F you are now minimizing over theta. I don't have time to go through all the details, but this is now an objective function that you can optimize. Of course, the remaining problem is that you still have this big sum over the whole population; you replace it by the empirical version—you estimate it using sampled examples. And it turns out that when you simplify this formula, you get something very similar to the contrastive learning algorithms used in practice. That part I don't really have time to show, so I'll refer you to the paper. I think I should probably stop here—are there any questions first? I know this part is a little vague; feel free to ask anything.
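To make the empirical version concrete, here is a minimal PyTorch sketch of the kind of objective this leads to, in the simplified two-term form described next: a pull-together term on "positive" pairs (pairs connected in the population graph, e.g. two very nearby points or two augmentations of the same example) and a push-apart term on random pairs. The exact constants and the network producing the embeddings are placeholders based on my reading of the paper, not its verbatim implementation.

```python
import torch

def spectral_contrastive_loss(z1, z2):
    """z1, z2: (batch, k) embeddings f_theta(x) for two connected ('positive')
    views per example, i.e. pairs of vertices joined in the population graph."""
    # attracting term: reward large inner products between connected pairs
    pos = -2.0 * (z1 * z2).sum(dim=1).mean()
    # contrasting term: penalize large inner products between random pairs
    # (details such as whether to exclude the diagonal pairs vary)
    logits = z1 @ z2.T                  # (batch, batch) all-pairs inner products
    neg = (logits ** 2).mean()
    return pos + neg
```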
Someone asked whether there's a contrastive learning paper they could look at, off the top of my head. The paper to look at is probably ours—although I should say the loss is not exactly the contrastive learning loss used in practice; we call ours the spectral contrastive loss. Basically, once you have all of the above set up, the rest is fairly mechanical: you can simplify the objective a little, and you end up with one term—roughly minus f_theta(x_i) transpose f_theta(x_j) on connected pairs—which tries to pull two positive examples closer to each other, and another term that contrasts them, pushing random pairs apart. Anyway, I'll just refer you to the paper; I think the title is something like "Provable Self-Supervised Learning via Spectral Contrastive Loss"—you can find it by searching for that.

Another question: earlier, you mentioned that you can take the eigenvectors, line them up as columns, and then say that the first row of that matrix corresponds to the first data point. Why is that a good representation—how come if the first and second rows are similar, the first two data points should be similar? I think I see the question, and I understand why there's confusion, because I skipped over how you deal with k clusters. But you can see it if you take a little leap of faith from k clusters down to two clusters. Go back to the second eigenvector, with coordinates beta_1 up to beta_n, and recall that we take a threshold and separate the two groups with it. With two clusters and a single eigenvector, beta_i is exactly the representation of the i-th vertex—beta_1 is the first row, beta_2 is the second row, and so on. Why is beta_i better than the raw data? Because with a threshold on beta_i, you get the groups right. In some sense, the ideal situation is the stochastic block model case: there, the second eigenvector takes essentially one value on one community and another value on the other. You can probably agree those numbers are better representations than the original data, because they map all the vertices in the same group to the same value. You've lost all the other information—the representation tells you exactly the group membership and nothing else—but the group membership is the only thing you care about, so that's why these numbers are better representations. Someone then asked: is this similar to low-rank matrix approximation—approximating with a low-rank matrix gives a better representation because you've kept only the most important parts? Exactly.
But that's only the case here, where we said there are two clusters. Right—the follow-up question was: we care about the 2-cluster representation, but we may also care about, say, a 3-cluster representation and how close things are under that, so by taking multiple eigenvectors we get a bigger picture than a single clustering. Exactly. If you have more eigenvectors, you can get the 3-cluster information, or even more. And some of this information can be recombined to get even richer information—eventually you'll probably use this representation with a linear head fit on top of it, so if you have two types of information in your representation, the linear head can combine them. So yes: more eigenvectors means richer information from the graph. Essentially, it's a kind of compression—you distill the information in the graph down to a smaller amount of information—and the question we're trying to answer is what information the eigenvectors keep. It's not surprising that the eigenvectors keep only partial information about the graph; the question is what they keep. And the intuition is that the smallest eigenvectors of the Laplacian keep the clustering structure of the graph, but not other things.

OK, great. I think this will be the end of the quarter. I hope you liked the course. We covered quite a few topics—actually, this quarter I think we covered the most compared to all the previous quarters, partly because we had ten more minutes in every lecture, and also because we had two more lectures, since there were fewer holidays this quarter. I hope you liked it. Thanks so much for attending.
AI_LLM_Stanford_CS229
A_Hackers_Guide_to_Language_Models.txt
Hi, I'm Jeremy Howard from fast.ai, and this is a hacker's guide to language models. By "a hacker's guide," I mean a code-first approach to understanding how to use language models in practice. Before we get started, we should probably talk about what a language model is. This will make more sense if you know the basics of deep learning; if you don't, I think you'll still get plenty out of it, but if you have a chance I'd recommend checking out course.fast.ai, which is a free course—working through at least the first five lessons would get you to a point where you understand the fundamentals of deep learning, which will make this lesson make even more sense. Maybe I shouldn't call it a tutorial; it's more of a quick run-through of the basic ideas of language models and how to use them, both open-source ones and OpenAI-based ones, and it's all going to be based on code as much as possible.

So let's start by talking about what a language model is. As you might have heard before, a language model is something that knows how to predict the next word of a sentence, or how to fill in the missing words of a sentence. We can look at an example of one: OpenAI has a language model, text-davinci-003, and we can play with it by passing in some words and asking it to predict what the next words might be. So if we pass in "When I arrived back at the panda breeding facility after the extraordinary rain of live frogs, I couldn't believe what I saw"—I just came up with that yesterday, wondering what might happen next; it's kind of fun for creative brainstorming. There's a nice site called nat.dev that lets us play with a variety of language models. Here I've selected text-davinci-003, I hit submit, and it starts printing stuff out: the pandas were happily playing and eating the frogs that had fallen from the sky, an amazing sight to see these animals taking advantage of such a unique opportunity, and quick measures were taken to ensure the safety of the pandas and the frogs. So there you go—that's what happened after the extraordinary rain of live frogs at the panda breeding facility.

You'll see here that I've enabled "show probabilities," which is a feature in nat.dev. Let's take a look: it's pretty likely the next word here is going to be "the," and since we're talking about a panda breeding facility, "pandas were." What were they doing? They could have been doing something happily, or the pandas were having, the pandas were out, the pandas were playing—it picked the most likely, which it thought was about 20% likely: "happily." And what were they happily doing? Could have been playing, hopping, eating, and so forth. Then "eating the frogs that," and then "had" almost certainly. So what it's doing at each point is predicting the probability of a variety of possible next words, and depending on how you set it up, it will either pick the most likely one every time, or you can muck around with things like top-p values and temperature to change what comes up. Each time, then, it will give us a different result—and this is kind of fun: frogs perched on the heads of some of the pandas, it was an amazing sight, etc. OK, so that's what a language model does.
Now, you might notice here it hasn't predicted "pandas"—it predicted "pand" and then, separately, "as"; and it's not "unharmed" in one go either, it's split into pieces. So it's not always predicting whole words. Specifically, what it's doing is predicting tokens. Tokens are either whole words, or sub-word units—pieces of a word—or they can even be punctuation or numbers. Let's have a look at how that works. Tokenization is what turns a string into tokens, and we can use the same tokenizer that GPT uses via the tiktoken library—specifically asking for the tokenizer that the text-davinci-003 model uses. For example, earlier the model talked about the frogs splashing, so I encoded "They are splashing," and the result is a bunch of numbers. Those numbers are basically just lookups into a vocabulary that OpenAI, in this case, created—if you train your own models, your code will create one automatically. If I then decode those numbers, I get the pieces "They," " are," " spl," "ashing"—put them all together and you get "They are splashing." So you can see that the space at the start of a word is encoded as part of the token.
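Here is roughly what that tiktoken round trip looks like in code. The exact token ids and splits shown in the comments are illustrative—they depend on the encoding—so treat them as an assumption rather than guaranteed output.

```python
import tiktoken

# the same tokenizer used by text-davinci-003
enc = tiktoken.encoding_for_model("text-davinci-003")

tokens = enc.encode("They are splashing")
print(tokens)                             # a list of integer ids into the vocabulary
print(enc.decode(tokens))                 # -> "They are splashing"
print([enc.decode([t]) for t in tokens])  # individual pieces, e.g. ['They', ' are', ' spl', 'ashing']
```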
Now, these language models are quite neat in that they can work at all, but they're not, of themselves, really designed to do anything—let me explain. The basic idea of what ChatGPT, GPT-4, Bard, etc. are doing comes from a paper describing an algorithm I created back in 2017 called ULMFiT; Sebastian Ruder and I wrote the paper describing the approach in early 2018, and it laid out the basic three-step recipe that everybody is now using. Step one is language model training—in the paper we actually called it pre-training. Language model pre-training is the thing that predicts the next word of a sentence. In the original ULMFiT work, I trained the language model on Wikipedia. That meant taking a neural network—which, if you don't know what that is, is just a mathematical function that's extremely flexible and has lots and lots of parameters; initially it can't do anything, but using stochastic gradient descent (SGD) you can teach it to do almost anything if you give it examples—and giving it lots of examples of sentences from Wikipedia. For example, from the Wikipedia article for The Birds: "The Birds is a 1963 American natural horror-thriller film produced and directed by Alfred..."—and then it stops, and the model has to guess the next word. If it guesses "Hitchcock," it's rewarded, and if it guesses something else, it's penalized; effectively it's trying to maximize those rewards, to find a set of weights for the function that makes it more likely to predict "Hitchcock." Later in the same article, the plot summary reads "...had previously dated Mitch but ended it due to Mitch's cold, overbearing mother Lydia, who dislikes any woman in Mitch's...". Now, filling this in actually requires being pretty thoughtful, because there are a bunch of things that could logically go there—a woman could be in Mitch's closet, or in Mitch's house—but you can probably guess that in the Wikipedia article describing the plot of The Birds, it's actually "any woman in Mitch's life."

To do a good job of solving this problem—guessing the next word of sentences as well as possible—the neural network has to learn a lot of stuff about the world. It's going to learn that there are things called objects, that there's a thing called time, that objects interact with each other over time, that there are things called movies, that movies have directors, that there are people, that people have names, and so forth—and that one particular movie director is Alfred Hitchcock, and that he directed horror films, and so on. It's going to have to learn an extraordinary amount if it's going to do a really good job of predicting the next word of sentences. Now, these are deep neural networks—this is deep learning—and when I created this, the network had something like a hundred million parameters; nowadays they have billions. They have the ability to create a rich hierarchy of abstractions and representations that they can build on, and that's really the key idea behind neural networks and language models: if a model is going to do a good job of predicting the next word of any sentence in any situation, it has to know an awful lot about the world—how to solve math questions, how to figure out the next move in a chess game, how to recognize poetry, and so on. Nobody said it will do a good job of that—it's a lot of work to create and train a model that is good at it—but if you can create one that is, it's going to have a lot of capabilities internally that it must be drawing on. The key idea here, for me, is that this is a form of compression, and this idea of a relationship between compression and intelligence goes back many decades: if you can guess what words are coming next, you're effectively compressing all that information down into a neural network.

Now, I said this is not useful of itself—so why do we do it? Because we want to pull out those capabilities, and the way we pull them out is with two more steps. The second step is language model fine-tuning. In fine-tuning we're no longer just feeding it all of Wikipedia (nowadays, in fact, a large chunk of the internet is fed into pre-training); we feed it a set of documents a lot closer to the final task we want the model to do—but it's still the same basic idea, still predicting the next word of a sentence. After that we do a final classifier fine-tuning, which targets the kind of end task we're actually trying to get it to do. Nowadays, quite specific approaches are taken for these two steps. For step two—step B, the language model fine-tuning—people now do a particular kind called instruction tuning: the idea is that the task we most often want is to solve problems and answer questions.
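Mechanically, the training signal in both the pre-training and the fine-tuning steps—reward the model when it guesses the observed next token—is just a classification loss over the vocabulary at each position. Here is a minimal toy sketch with a placeholder model (an assumption-laden illustration, not GPT's actual training code):

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, tokens):
    """tokens: (batch, seq_len) integer token ids.
    Assumes model(inputs) returns logits of shape (batch, seq_len - 1, vocab_size)."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict token t+1 from tokens up to t
    logits = model(inputs)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```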
So in the instruction-tuning phase we use datasets like OpenOrca, a great dataset created by a fantastic open-source group, which is built on top of something called the FLAN collection. There are all kinds of different questions in there—about four gigabytes of questions and context—and each one generally has a question, an instruction, or a request, and then a response. Here are some example instructions (I think these are from the FLAN dataset): "Does the sentence 'In the Iron Age' answer the question 'The period of time from 1200 to 1000 BCE is known as what?' Choices: 1) yes, 2) no"—and the language model is meant to write 1 or 2 as appropriate. Or, I think for a music video: "Who is the girl in 'More Than You Know'?"—and it has to write the correct name of the model or dancer or whoever from that video. So it's still doing language modeling—fine-tuning and pre-training are kind of the same thing—but this is more targeted: not just filling in the missing parts of any document from the internet, but filling in the words necessary to answer questions and do useful things. That's instruction tuning.

Then step three, the classifier fine-tuning, nowadays generally uses approaches such as reinforcement learning from human feedback (RLHF) and others, which basically give humans—or sometimes more capable models—multiple answers to a question. For example, here's one from an RLHF paper (I can't remember which one): "List five ideas for how to regain enthusiasm for my career." The model spits out two possible answers, or a less-good model and a better model each give one, and then a human or a better model picks which is best, and that's used for the final fine-tuning stage.

All of that is to say: although you can download pure language models from the internet, they're generally not that useful on their own until you've fine-tuned them. You don't necessarily need step C nowadays—people are discovering that maybe step B alone might be enough; it's still a bit controversial. So when we talk about a language model, we could be talking about something that's just been pre-trained, something that's been fine-tuned, or something that's gone through RLHF; all of those are generally described as language models nowadays.

My view is that if you're going to be good at language modeling in any way, you need to start by being a really effective user of language models, and to be a really effective user, you've got to use the best one there is. Currently—this is September 2023—the best one is by far GPT-4. That might change in the not-too-distant future, but right now GPT-4 is the strong, strong recommendation. You can use GPT-4 by paying OpenAI 20 bucks a month, and then you can use it a whole lot—I find it very hard to run out of credits. Now, what can GPT-4 do? It's interesting and instructive, in my opinion, to start with the very common views you see on the internet, or even in academia, about what it can't do. For example, there was a paper you might have seen called "GPT-4 Can't Reason," which describes an empirical analysis of 25 diverse reasoning problems, found that GPT-4 was not able to solve them, and concluded that it is utterly incapable of reasoning.
Now, I always find you've got to be a bit careful about reading stuff like this, because I just took the first three problems I came across in that paper and gave them to GPT-4. (By the way, something very useful in GPT-4 is that you can click the share button and get a link to the conversation, which is really handy.) Here's an example from the paper of something GPT-4 supposedly can't do: "Mabel's heart rate at 9am was 75 beats per minute. Her blood pressure at 7pm was 120/80. She died at 11pm. Was she alive at noon?" Of course, as humans we know she obviously must have been, and GPT-4 says: hmm, this appears to be a riddle, not a real inquiry into medical conditions; here's a summary of the information; and yes, it sounds like Mabel was alive at noon. Correct. The second one I tried from the paper that said GPT-4 can't do it—it turned out GPT-4 can do it. Same with the third. I mention this to say that GPT-4 is probably a lot better than you would expect if you've read all this stuff on the internet about the dumb things it does. Almost every time I see a claim that GPT-4 can't do something, I check it, and it turns out it can.

This one was from just last week: "Sally (a girl) has three brothers. Each brother has two sisters. How many sisters does Sally have?" Have a think about it. GPT-4 says: OK, Sally counts as one sister for each of her brothers; if each brother has two sisters, that means there's another sister in the picture apart from Sally; so Sally has one sister. Correct. And then this one, from three or four days ago—it's a common view that language models can't track things like this: "I'm in my house. On top of my chair in the living room is a coffee cup. Inside the coffee cup is a thimble. Inside the thimble is a diamond. I move the chair to the bedroom. I put the coffee cup on the bed. I turn the cup upside down, then I turn it right-side up. I place the coffee cup on the counter in the kitchen. Where's my diamond?" GPT-4 says: OK, you turned the cup upside down, so the diamond probably fell out; therefore the diamond is in the bedroom, where it fell. Correct.

So why is it that people are claiming GPT-4 can't do these things? I think the reason is that, on the whole, they're not aware of how GPT-4 was trained. GPT-4 was not trained at any point to give correct answers. It was trained, initially, to give the most likely next words, and there's an awful lot of stuff on the internet where the documents are not describing things that are true—there's fiction, there are jokes, there are just people saying dumb stuff. So the first stage does not necessarily give you correct answers. The second stage, the instruction tuning, is also trying to give correct answers, but part of the problem is that in the stage where you start asking people which answer they like better, people tended to prefer more confident answers, and they often were not trained well enough to recognize wrong ones. So there are lots of reasons that the SGD weight updates from this process, for something like GPT-4, don't particularly or don't entirely reward correct answers. But you can help it want to give you correct answers.
If you think about the LM pre-training, what are the kinds of things in a document that would suggest "oh, this is going to be high-quality information"? You can actually prime GPT-4 to give you high-quality information by using custom instructions—basically text that is prepended to all of your queries. So you say things like "you're brilliant at reasoning," to prime it to give good answers. Then, to work against the fact that the RLHF raters preferred confidence, you just tell it: tell me if there might not be a correct answer. Also, remember how the text is generated: it literally generates the next word, puts the whole lot back into the model, generates the next next word, puts that back in, and so forth. That means the more words it generates, the more computation it can do—so I literally tell it that: first spend a few sentences explaining background context, etc. This custom instruction allows it to solve more challenging problems, and you can see the difference. For example, if I ask "how do I get a count of rows grouped by value in pandas?", it first gives me a whole lot of information, which is really it thinking, so I just skip over it, and then it gives me the answer. In my custom instructions I also say: if the request begins with "VV", make the answer as concise as possible—it kind of goes into brief mode. Here's the same pandas question with VV at the start, and it just spits out the answer; it's a really simple question, so it didn't need time to think. So hopefully that gives you a sense of how to get language models to give good answers: you have to help them, and if it's not working, it might be user error, basically.

Having said that, there's plenty of stuff that language models like GPT-4 can't do. One thing to think carefully about is: does it know anything about itself? Can you ask it what its context length is, how it was trained, what transformer architecture it's based on? At which of those training stages would it have had the opportunity to learn any of those things? Obviously not at the pre-training stage—nothing existed on the internet during GPT-4's training saying how GPT-4 was trained—and probably ditto for the instruction tuning and the RLHF. So in general you can't ask a language model about itself. But again, because of the RLHF, it wants to make you happy by giving opinionated answers, so it will just spit out the most likely-sounding thing with great confidence. This is really just a general form of hallucination: the language model wants to complete the sentence, and it wants to do it in an opinionated way that's likely to make people happy. It also doesn't know anything about URLs—it really hasn't seen many at all; I think most if not all were stripped out—so if you ask it anything about what's at a given web page, it'll generally just make it up. And GPT-4 doesn't know anything after September 2021, because the information it was pre-trained on was from that time period and before; that's called the knowledge cutoff.

So here are some things it can't do. Steve Newman sent me this good example of something it can't do.
Here is the logic puzzle: I need to carry a cabbage, a goat, and a wolf across a river. I can only carry one item at a time. I can't leave the goat with the cabbage; I can't leave the cabbage with the wolf. How do I get everything across to the other side? Now, the problem is that this looks a lot like something called the classic river-crossing puzzle—so classic, in fact, that it has a whole Wikipedia page about it. In the classic puzzle, the wolf would eat the goat and the goat would eat the cabbage. In Steve's version, he changed it: the goat would eat the cabbage and the wolf would eat the cabbage, but the wolf won't eat the goat. So what happens? Very interestingly, GPT-4 here is entirely overwhelmed by its language model training. It's seen this puzzle so many times that it "knows" what word comes next, so it says: oh yeah, I take the goat across the river and leave it on the other side, leaving the wolf with the cabbage. But we were just told you can't leave the wolf with the cabbage—so it gets it wrong.

Now, the thing is, you can encourage GPT-4—or any of these language models—to try again. During the instruction tuning and RLHF they're actually fine-tuned with multi-stage conversations, so you can give it a multi-stage conversation: repeat back to me the constraints I listed; what happened after step one; is a constraint violated? Oh yeah, I made a mistake. OK, my new attempt: instead of taking the goat across the river and leaving it on the other side, I'll take the goat across the river and leave it on the other side. It's done the same thing. Oh yeah, I did do the same thing; OK, I'll take the wolf across—well, now the goat's with the cabbage, which still doesn't work. Oh yeah, that didn't work out; sorry about that; instead of taking the goat across, I'll take the goat across. What's going on here? This is terrible. Well, one of the problems is that not only is this particular goat puzzle extremely common on the internet—so it's very confident it knows what the next word is—but also, on the internet, when you see something stupid on a web page, it's really likely to be followed by more stuff that's stupid. Once GPT-4 starts being wrong, it tends to be more and more wrong; it's very hard to turn it around and make it right. So what you generally want to do when it makes a mistake is not to say "here's more information to help you fix it," but instead to go back—there's an edit button on these chats—and change the original prompt, so that this time it doesn't get confused in the first place. In this case, fixing Steve's example actually took quite a lot of effort, but I think I managed to get it to work eventually. I said something like: sometimes people read things too quickly, don't notice details, apply some familiar pattern, and get the wrong answer—you do the same thing, by the way—so I'm going to trick you; before you're about to get tricked, make sure you don't get tricked. With that, plus my custom instructions, it takes time discussing the puzzle, and this time it gets it correct: it takes the cabbage across first. So it took a lot of effort to get to a point where it could solve this, because when it's been primed to answer a certain way again and again and again, it's very hard for it not to do that.

OK. Now, something else super helpful that you can use is what they call Advanced Data Analysis, where you can ask it to basically write code for you.
We're going to look at how to implement something like this from scratch ourselves quite soon, but first let's learn how to use it. I was trying to build something that splits a document on third-level markdown headings—that's three hashes at the start of a line—and I was doing it on the whole of Wikipedia, so using regular expressions was really slow. So I said, I want to speed this up, and it said, OK, here's some code—which is great, because then I can say, OK, test it and include edge cases. So it puts in the code, creates extra test cases, runs them, and says, yep, it's working. It's not—I noticed it was actually removing the carriage return at the end of each line. So I said, fix that and update your tests. It changed the test cases, ran them—oh, they're not working—so it tries to fix the issue, and you can see it's quite clever in how it looks at the failing results, but it's one attempt after another after another, until eventually I gave up waiting. It's funny—each time it debates with itself, "OK, this time I've got to handle it properly"—and I gave up at the point where it said "one more attempt." So it didn't solve it. Interestingly enough, there are limits to the amount of logic it can do—this was a really simple thing I asked it to do for me—so hopefully you can see that you can't expect GPT-4 with Code Interpreter (or Advanced Data Analysis, as it's now called) to mean you don't have to write code anymore. It's not a substitute for having programmers.

But it can often do a lot, as I'll show you in a moment. For example, OCR—this is something I thought was really cool. With Advanced Data Analysis you can upload an image. I wanted to grab some text out of an image: somebody had posted a screenshot of their screen saying "this language model can't do such-and-such," and I wanted to try it as well. Rather than retyping it, I just uploaded the screenshot and said, can you extract the text from this image? It said, oh yes, I can do that, I can use OCR—and it literally wrote an OCR script, and there it was, just a few seconds later. The difference here is that it didn't really require it to think of much logic; it could just use a very, very familiar pattern that it has seen many times. This is generally where I find language models excel: where they don't have to think too far outside the box. They're great on creativity tasks, but for reasoning and logic tasks that are outside the box, I find them not great. They are, however, great at writing code for a whole wide variety of different libraries and languages.

Having said that, by the way, Google also has a language model called Bard. It's way less good than GPT-4 most of the time, but there is a nice thing: you can literally paste an image straight into the prompt. I just typed "OCR this," and it didn't even have to go through a code interpreter or anything—it just said, oh sure, I've done it, and there's the result of the OCR. It even commented on the text, which I thought was cute, and even more interestingly, it figured out where the OCR'd text came from and gave me a link to it, which I thought was pretty cool.
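For reference, the kind of OCR script Advanced Data Analysis writes for a request like this is typically only a few lines. Here's a sketch using pytesseract—my choice of library for illustration; the tool may well have used something different, and the filename is hypothetical:

```python
from PIL import Image
import pytesseract

# extract the text from an uploaded screenshot (hypothetical filename)
image = Image.open("screenshot.png")
text = pytesseract.image_to_string(image)
print(text)
```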
Okay, so that OCR example is one where it does well. Let me show you another one that I found really helpful for preparing this talk. I wanted to show you how much it costs to use the OpenAI API, but when I went to the OpenAI web page the pricing information was all over the place, in separate tables, a bit of a mess, and I wanted one table with all of the information combined, like this. Here's how I did it. I went to the OpenAI pricing page, hit Cmd-A to select all, and then said in ChatGPT: "Create a table with the pricing information rows. No summarization, no information not in this page. Every row should appear as a separate row in your output", and hit paste. That paste was not very helpful input, because it included the nav bar, lots of extra information at the bottom, the footer and so on, but it's really good at this stuff and it did it first time. There was the markdown table, so I copied and pasted it into Jupyter, and now you can see at a glance the cost of GPT-4, GPT-3.5 and so on. What I really wanted, though, was a picture, so I just said "chart the input rows from this table", pasted the table back in, and it did. That's pretty amazing.

So let's talk about pricing. So far we've used ChatGPT, which costs 20 bucks a month with no per-token cost, but if you want to use the API from Python or wherever, you pay per token, which is approximately per word; it's about one and a third tokens per word on average. Unfortunately the chart didn't include the headers, GPT-4 and GPT-3.5: these first two rows are GPT-4 and these two are GPT-3.5. You can see GPT-3.5 is way, way cheaper, $0.03 versus $0.0015 per thousand tokens, so it's so cheap you can really play around with it and not worry, and I want to give you a sense of what that looks like.

Why would you use the OpenAI API rather than ChatGPT? Because you can do it programmatically: you can analyze datasets, you can do repetitive stuff. It's kind of a different way of programming, things you can do by describing them. Let's look at the simplest example of what that looks like. If you pip install openai, you can import ChatCompletion and say ChatCompletion.create using gpt-3.5-turbo, and pass in a system message, which is basically the same as custom instructions: "You are an Aussie LLM that uses Aussie slang and analogies wherever possible." You can see I'm passing in an array of messages: first the system message, then the user message, which is "What is money?". GPT-3.5 returns a big nested dictionary, and the message content is: "Money is like the oil that keeps the machinery of our economy running smoothly... just like a koala loves its eucalyptus leaves, we humans can't survive without this stuff." So there's the Aussie LLM's view of what money is.

The models I pretty much always use are GPT-4 and GPT-3.5. GPT-4 is just so much better at anything remotely challenging, but it's obviously much more expensive, so a rule of thumb: maybe try 3.5-turbo first and see how it goes. If you're happy with the results, great; if not, pay up for the more expensive one.
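Here is a minimal sketch of that first API call, assuming the pre-1.0 openai Python package that the talk uses; the exact wording of the system prompt is paraphrased, so treat the strings as illustrative:

```python
import openai  # pip install openai; set OPENAI_API_KEY in your environment

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message plays the same role as ChatGPT's custom instructions.
        {"role": "system",
         "content": "You are an Aussie LLM that uses Aussie slang and analogies wherever possible."},
        {"role": "user", "content": "What is money?"},
    ],
)

# The reply is buried inside a nested structure.
print(response["choices"][0]["message"]["content"])
print(response["usage"])  # token counts, useful for estimating cost
```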
I created a little function called response that prints out that nested structure. The other thing to point out is that the result also has a usage field containing how many tokens the call used, about 150 tokens here. At $0.002 per thousand tokens, 150 tokens means we just paid $0.0003 to get that done, so the cost is insignificant. If we were using GPT-4 it would be $0.03 per thousand, so about half a cent. Unless you're doing many thousands of GPT-4 calls you're not even going to get up into the dollars, and GPT-3.5 is cheaper still. But keep an eye on it: OpenAI has a usage page where you can track your spend.

Now, this is really important to understand: what happens when we have a follow-up in the same conversation? Earlier we asked what "GOAT" means. For example, Michael Jordan is often referred to as the GOAT for his exceptional skills and accomplishments, and Elvis and The Beatles are referred to as GOATs for their profound influence and achievements. So I could say, "What profound influence and achievements are you referring to?", and it answers, "I meant that Elvis Presley and The Beatles did all these things." How does that follow-up work? What happens is that the entire conversation is passed back, and we can do that ourselves. Here is the same system prompt, here is the same question, and then the answer comes back with role "assistant". Now I'm going to do something pretty cheeky: I'm going to pretend it didn't say money is like oil; I'm going to say it actually said money is like kangaroos, and see what it does. You can literally invent a conversation in which the language model said something different, because this is how a multi-stage conversation actually works: there's no state, nothing is stored on the server, you pass back the entire conversation again and tell it what it told you. So I tell it that it told me money is like kangaroos, and then as the user I ask, "Oh really? In what way?" This is kind of cool, because you can see how it convinces you of something I just invented: "Let me break it down for you: just like kangaroos hop around and carry their joeys in their pouch, money is a means of carrying value around." So there you go, make your own analogy.

I'll create a little function that puts these things together, the system message if there is one and the user message, and returns the completion. Now we can ask it "What's the meaning of life?", passing in the Aussie system prompt: "The meaning of life is like trying to catch a wave on a sunny day at Bondi Beach."
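To make the statelessness point concrete, here is a minimal sketch of passing the whole history back, including a fabricated assistant turn. Again this assumes the pre-1.0 openai package, and the prompts are paraphrased:

```python
import openai

aussie = "You are an Aussie LLM that uses Aussie slang and analogies wherever possible."

messages = [
    {"role": "system", "content": aussie},
    {"role": "user", "content": "What is money?"},
    # Nothing is stored server-side, so we can claim the model said whatever we like.
    {"role": "assistant", "content": "Money is like kangaroos, mate."},
    {"role": "user", "content": "Oh really? In what way?"},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])  # it will happily justify the kangaroo analogy
```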
So what do you need to be aware of? As I said, keep an eye on your usage if you're calling the API hundreds or thousands of times in a loop, so you don't spend too much money. Also, if you call it too fast, particularly in the first day or two after you've opened an account, you're likely to hit the API rate limits, and the limits are initially pretty low, as you can see: three requests per minute for free users and for paid users in their first 48 hours. After that it starts going up, and you can always ask for more. I mention this because you're going to want a function that keeps an eye on it. What I did was go to Bing, which has a somewhat crappy version of GPT-4 nowadays but can still do basic stuff for free, and said, "Please show me Python code to call the OpenAI API and handle rate limits." It wrote code with a try block that checks for rate-limit errors, grabs the retry-after value, sleeps for that long, and calls itself. So now we can use that to ask, for example, "What's the world's funniest joke?", and there we go, the world's funniest joke. That's the basic stuff you need to get started using the OpenAI LLMs, and I'd definitely suggest spending plenty of time with it so you feel like you're really an expert LLM user.

What else can we do? Well, let's create our own code interpreter that runs inside Jupyter. To do this we're going to take advantage of a really nifty thing called function calling, which is provided by the OpenAI API. When we call our ask_gpt function, this little one here, we left room to pass in keyword arguments that get passed along to ChatCompletion.create, and one of those keyword arguments is functions. What on earth is that? functions tells OpenAI about tools you have, about functions you have. For example, I created a really simple function called sums: it adds two things, in fact it adds two ints, and I'm going to pass that function to ChatCompletion.create. Now, you can't pass a Python function directly; you have to pass what's called the JSON schema for the function. So I created this nifty little function, which you're welcome to borrow, that uses Pydantic and Python's inspect module to take a Python function and return its schema automatically, and that schema is what actually gets passed to OpenAI. It's going to know there's a function called sums, what it does, what parameters it takes, what the defaults are, and what's required.

When I first heard about this I found it a bit mind-bending, because it's so different from how we normally program computers: the key thing for programming the computer here is the docstring. That's what GPT-4 will look at to decide what the function does, so it's critical that it describes exactly what the function does. Then I ask, "What is 6 plus 3?", and I gave it lots of prompting to make sure it actually used the tool, because obviously it knows how to add without calling sums. It will only use your functions if it feels it needs to, which is a weird concept; "feels" is not a great word, but you kind of have to anthropomorphize these things a little, because they don't behave like normal computer programs. So if I ask GPT "what is 6 plus 3" and tell it there's a function called sums, it does not return the number 9. Instead it returns something saying, please call this function and pass it these arguments, and if I print it out, there are the arguments. So I created a little function called call_func that goes into the result from OpenAI, grabs the function call, checks that the name is something it's allowed to call, looks it up in the global symbol table, and calls it with those parameters. If I now call the function we got back, we finally get 9.
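Here is a minimal sketch of that function-calling round trip, with the schema written by hand rather than generated from Pydantic and inspect as described above, and with the dispatch inlined instead of wrapped in a helper:

```python
import json
import openai

def sums(a: int, b: int) -> int:
    "Adds a + b."   # the docstring is what the model reads to decide when to use the tool
    return a + b

sums_schema = {
    "name": "sums",
    "description": "Adds a + b.",
    "parameters": {
        "type": "object",
        "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
        "required": ["a", "b"],
    },
}

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Use the sums tool to compute 6 plus 3."}],
    functions=[sums_schema],
)

msg = response["choices"][0]["message"]
if msg.get("function_call"):                       # the model asks us to run the tool
    assert msg["function_call"]["name"] == "sums"  # only run functions we allow
    args = json.loads(msg["function_call"]["arguments"])
    print(sums(**args))                            # -> 9
```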
That's a very simple example, and it's not doing anything that useful, but what we can do now is create a much more powerful function called python, which executes code using Python and returns the result. Of course, I didn't want my computer to run arbitrary Python code that GPT-4 told it to without checking, so I got it to confirm first: are you sure you want to do this? Now I can say: ask GPT "what is 12 factorial", with a system prompt saying you can use Python for any required computations, and here's a function you have available, the python function. If I call this, it passes back a completion object saying, okay, I want you to call python, passing in this argument, and when I do, it's going to run "import math, result = ..." and return the result. Do I want to do that? Yes I do, and there it is. There's one more step we can optionally do: we've got the answer we wanted, but often we want the answer in more of a chat format, and the way to do that is to again pass back everything we've sent so far, but instead of adding an assistant-role response we provide a function-role response and simply put in the result we got back from the function. If we do that, we get the prose response: "12 factorial is equal to 479,001,600." With functions like python you can still ask about non-Python things, and it just ignores the function if it doesn't need it. So you can have a whole bunch of functions available that you've built to do whatever you need, for things the language model isn't familiar with, and it will still solve whatever it can on its own and use your tools, your functions, where it has to. So we have built our own code interpreter from scratch, which I think is pretty amazing.

That's some of what you can do with OpenAI. What about stuff you can do on your own computer? To use a language model on your own computer you're going to need a GPU, so the first thing to think about is whether it even makes sense to do stuff on your own computer. What are the benefits? There are no open-source models that are as good as GPT-4 yet, and I have to say OpenAI's pricing is really pretty good, so it's not immediately obvious that you want to go in-house. But there are lots of reasons you might, and we'll look at some examples today. One is that you want to ask questions about your proprietary documents, or about information after the September 2021 knowledge cutoff. Another is that you want to create your own model that's particularly good at solving the kinds of problems you need to solve, using fine-tuning. These are all things where you absolutely can get better-than-GPT-4 performance, at work or at home, without too much money or trouble.

You don't necessarily have to buy a GPU. On Kaggle they'll give you a notebook with two quite old GPUs attached and very little RAM, but it's something. Or you can use Colab, where you can get much better GPUs than Kaggle has and more RAM, particularly if you pay the monthly subscription fee. Those are some free or low-cost options. You can also, of course, go to one of the many GPU server providers; they change all the time as to which are good. RunPod is one example, and as you can see, if you want the biggest and best machine you're talking $34 an hour, so it gets pretty expensive, but you can certainly get things a lot cheaper, 80 cents an hour.
Lambda Labs is often pretty good too. It's really hard at the moment to actually find providers that have GPUs available: they have lots listed, but often none or very few are actually free. There's also something pretty interesting called Vast.ai, which basically lets you use other people's computers when they're not using them. As you can see, they tend to be much cheaper than other folks and tend to have better availability as well, but of course for sensitive stuff you don't want to be running on some rando's computer. So those are a few options for renting.

If you can, I think it's worth buying something, and the one to buy at the moment is a used RTX 3090. You can generally get them from eBay for around 700 bucks. A 4090 isn't really better for language models, even though it's a newer GPU, because language models are all about memory speed, how quickly you can get stuff in and out of memory, rather than how fast the processor is, and that hasn't improved a whole lot, so it's not worth the two thousand bucks. The other thing, as well as memory speed, is memory size: 24 GB doesn't quite cut it for a lot of things, so you'd probably want two of these GPUs, which puts you at fifteen hundred dollars or so. Or you can get a 48 GB GPU, the A6000, but that's going to cost you more like five grand, so two 3090s are a better deal, and the A6000 isn't going to be faster either. Or, funnily enough, you could just get a Mac with a lot of RAM, particularly an M2 Ultra: Macs, particularly the M2 Ultra, have pretty fast memory. It's still going to be way slower than an Nvidia card, but you can get something like 192 GB of memory, so it's not a terrible option, particularly if you're not training models and just want to run existing trained ones. That said, most people who do this stuff seriously, almost everybody, has Nvidia cards.

What we're going to use is a library called Transformers from Hugging Face, and the reason is that people upload lots of pre-trained and fine-tuned models to the Hugging Face Hub. In fact there's even a leaderboard where you can see which models are supposedly best. This is a really fraught area. At the moment this one is meant to be the best model, it has the highest average score, and maybe it is good; I haven't used this particular model, or maybe it's not, I honestly have no idea, because these metrics are not particularly well aligned with real-life usage, for all kinds of reasons, and sometimes you also get something called leakage, where some of the benchmark questions leak into the training sets. So you can get a rule of thumb from here about what to use, but you should always try things. You can also see that these ones are all 70B; that tells you how big the model is, 70 billion parameters. Generally speaking, for the kinds of GPUs we're talking about, you'll want no bigger than 13B, and quite often 7B, so let's see if we can find a 13B model, for example. So you can find models to try out from things like this leaderboard.
There's also a really great leaderboard called FastEval, which I like a lot because it focuses on more sophisticated evaluation methods such as chain-of-thought evaluation, so I trust these a bit more. It includes GSM8K, a difficult math benchmark, Big-Bench Hard, and so forth. So Stable Beluga 2, WizardMath 13B, Dolphin Llama 13B, et cetera: these would all be good options.

So you need to pick a model, and at the moment nearly all the good models are based on Meta's Llama 2. What does "based on" mean? Well, take this model here, Llama 2 7B. It's a Llama model, that's just the name Meta gave it, this is their version two of Llama, and this is the 7-billion-parameter size, the smallest one they make. Specifically, these weights have been prepared for Hugging Face, so you can load the model with the Hugging Face Transformers library. This model has only got as far as the language-model pre-training: it's had none of the instruction tuning and none of the RLHF, so we'd need to fine-tune it to get it to do much that's useful. We can just say: automatically create the appropriate model for causal language modeling ("causal LM" basically refers to that ULMFiT stage-one process, or stage two, in fact), and we've got the pre-trained model from the name meta-llama/Llama-2-7b.

Generally speaking we use 16-bit floating-point numbers nowadays, but if you think about it, 16 bits is two bytes, so 7B parameters times two is 14 gigabytes just to load the weights, so you need a decent GPU to do that. Perhaps surprisingly, you can just cast it to 8-bit and it still works pretty well, thanks to quantization. Let's try that. Remember, this is just a language model: it only completes sentences; we can't ask it a question and expect a great answer. So let's give it the start of a sentence: "Jeremy Howard is a". We need the right tokenizer, and this will automatically create the right kind of tokenizer for this model. We grab the tokens as PyTorch tensors, here they are, and just to confirm, if we decode them back again we get the original text plus a special token that marks the start of a document. Now we can call generate. Generate works autoregressively: it calls the model again and again, passing its previous output back as the next input, and I'm going to do that 15 times. You can write this loop yourself; it isn't doing anything fancy, and in fact I'd recommend writing it yourself to make sure you understand how it all works. We have to put those tokens on the GPU, and at the end I recommend putting the result back onto the CPU. Here are the output tokens, not very interesting, so we decode them with the tokenizer, and the first 15 generated tokens are: "Jeremy Howard is a 28 year old Australian AI researcher and entrepreneur." Well, 28 years old is not exactly correct, but we'll call it close enough. I like it; thank you very much, Llama 2 7B.

So we've got a language model completing sentences. It took about 1.3 seconds, which is a bit slower than it could be because we used 8-bit. If we use 16-bit, there's a special format called bfloat16, a really great 16-bit floating-point format usable on any reasonably recent Nvidia GPU; it takes twice as much RAM, as we discussed, but look at the time: it comes down to 390 milliseconds.
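Here is a minimal sketch of that sequence with the Hugging Face Transformers API: loading the model and tokenizer, tokenizing the sentence start, and a short generate call. The model name and the bfloat16 and device choices follow the description above, but this is an illustration rather than the talk's exact notebook code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"   # gated on the Hub; requires accepting Meta's license
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map=0
)

toks = tokenizer("Jeremy Howard is a", return_tensors="pt").to(0)
out = model.generate(**toks, max_new_tokens=15)   # autoregressive loop: 15 extra tokens
print(tokenizer.batch_decode(out.cpu())[0])
```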
Now, there's a better option still: a different kind of quantization called GPTQ, where a model is carefully optimized to work with 4-bit, 8-bit, or other lower-precision data. A particular person known as TheBloke is fantastic at taking popular models, running that optimization process, and uploading the results back to Hugging Face, so we can use his GPTQ version. Internally I'm not sure exactly how many bits this particular one uses, probably four, but it's much more optimized, and look at this: 270 milliseconds. It's actually faster than 16-bit, even though internally it casts each layer up to 16-bit to do the computation, because there's a lot less memory moving around. To confirm, we can now go up to 13B easily, and the 13B GPTQ version is still faster than the 7B was, so this is a really helpful tip.

Let's put all those pieces together, the tokenizer, the generate call, and the batch decode, into a function we'll call gen, for generate, and use the 13B GPTQ model: "Jeremy Howard is a", out to 50 tokens, so fast: "16-year veteran of Silicon Valley, co-founder of Kaggle, a marketplace for predictive modelling... kaggle.com has become the data science competition..." I don't know what it was going to say next, but it's on the right track. I was actually there for 10 years, not 16, but that's all right.

This is looking good, but a lot of the time we'll want to ask questions or give instructions. Stability AI has this nice series called Stable Beluga, including a small 7B one and bigger ones, and these are all based on Llama 2 but have been instruction-tuned; they might even have been RLHF'd, I can't remember now. So we can create a Stable Beluga model. And now something really important that everybody keeps forgetting: during the instruction-tuning process, the instructions aren't passed in raw like this. They're always in a particular prompt format, and that format, believe it or not, changes quite a bit from fine-tune to fine-tune, so you have to go to the model's web page and scroll down to find out what the prompt format is. I generally just copy it and paste it into Python, which I did here, and created a function called make_prompt that uses exactly the format it says to use. Now if I want to ask "Who is Jeremy Howard?", I can call gen again, that function I created up here, with the correct prompt built from that question, and it returns: there's the prefix, the system instruction, my question, and then the assistant says, "Jeremy Howard is an Australian entrepreneur, computer scientist, and co-founder of the machine learning and deep learning company fast.ai." This one is actually all correct, so it's getting better by using an actual instruction-tuned model.
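Here is a sketch of the gen and make_prompt helpers just described, reusing the model and tokenizer objects from the earlier snippet (after loading an instruction-tuned checkpoint such as Stable Beluga the same way). The template below follows the general shape of the Stable Beluga prompt format, but the authoritative version is whatever the model card specifies, so check that rather than trusting this:

```python
def make_prompt(user, system="You are a helpful assistant that answers accurately and concisely."):
    # Instruction-tuned models expect the exact template they were fine-tuned with.
    return f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"

def gen(prompt, max_new_tokens=128):
    toks = tokenizer(prompt, return_tensors="pt").to(0)
    out = model.generate(**toks, max_new_tokens=max_new_tokens)
    return tokenizer.batch_decode(out.cpu())[0]

print(gen(make_prompt("Who is Jeremy Howard?")))
```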
We can then start to scale up. We looked briefly at the OpenOrca dataset earlier: Llama 2 has been fine-tuned on OpenOrca and then also fine-tuned on another really great dataset called Platypus, and the whole thing together is Open Orca Platypus. This is the bigger 13B model, and GPTQ means it's quantized. It has a different prompt format, so again we scroll down to see what the prompt format is, there it is, and create a function called make_open_orca_prompt with that format. Now we can ask "Who is Jeremy Howard?" again, and now I've become British, which is kind of true, I was born in England but moved to Australia; a "professional poker player", definitely not that; co-founder of several companies including fast.ai and also Kaggle, which was acquired by Google, was it 2017? Probably something around there. So you can see our own models are giving us some pretty good information.

How do we make it even better? Because it's still hallucinating. Llama 2, I think, has been trained with more up-to-date information than GPT-4, it doesn't have the September 2021 cutoff, but it still has a knowledge cutoff, and we'd like to use the most up-to-date information; we want to use the right information to answer these questions as well as possible. To do this we can use something called retrieval-augmented generation. What happens in retrieval-augmented generation is that when we get a question, like "Who is Jeremy Howard?", we first search for documents that may help us answer it, and obviously we'd expect Wikipedia to be useful here. Then we tell the language model what we found and have it answer the question with that context.

Let me show you. We grab a Wikipedia Python package, scrape Wikipedia for the Jeremy Howard page, and here's the start of that page: it has 613 words. Generally speaking, these open-source models have a context length of about two thousand or four thousand tokens, where the context length is how many tokens the model can handle, so that's fine, it'll handle this page. Then we ask the question, but before it we say "Answer the question with the help of the context we're going to provide", then "Context:" followed by the whole web page. Suddenly our prompt is a lot bigger: it contains the entire Wikipedia page followed by the question. And now it says "Jeremy Howard is an Australian data scientist, entrepreneur, and educator known for his work in deep learning, co-founder of fast.ai, teaches courses, develops software, and conducts research..." It's basically perfect. It's done a really good job: if somebody asked me for a 100-word bio, that would probably be better than I would have written myself. And you'll notice that even though I asked for 300 tokens, it actually sent back the end-of-stream token, so it knows to stop at that point.
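A minimal sketch of that retrieval-augmented prompt, using the wikipedia package from PyPI and the make_prompt and gen helpers from the earlier sketches; the page title and the instruction wording are illustrative rather than taken from the talk:

```python
import wikipedia  # pip install wikipedia

page = wikipedia.page("Jeremy Howard (entrepreneur)").content

question = "Who is Jeremy Howard?"
rag_prompt = make_prompt(
    "Answer the question with the help of the provided context.\n\n"
    f"## Context\n\n{page}\n\n## Question\n\n{question}"
)
print(gen(rag_prompt, max_new_tokens=300))
```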
That's all very well, but how did we know to pass in the Jeremy Howard Wikipedia page? The way we know which page to pass in is that we can use another model to tell us which web page or document is most useful for answering a question, and to do that we can use a sentence transformer, a special kind of model specifically designed to take a document and turn it into a bunch of activations, where two documents that are similar will have similar activations. Let me show you what I mean. I'm going to grab just the first paragraph of my Wikipedia page and the first paragraph of Tony Blair's Wikipedia page; we're pretty different people, and this is just a really simple, small example. I call the model's encode method on my first paragraph, on Tony Blair's first paragraph, and on the question, "Who is Jeremy Howard?", and it passes back a 384-dimensional embedding vector for each of the three. Now I can calculate the similarity between the question and the Jeremy Howard page, and between the question and the Tony Blair page, and as you can see it's higher for me. That tells you that if you're trying to figure out which document will help you answer this question, you're better off using the Jeremy Howard Wikipedia page than the Tony Blair one.

If you had a few hundred candidate documents you were thinking of giving the model as context, you could literally pass them all through encode, one at a time, and see which is closest. When you've got thousands or millions of documents, you use something called a vector database, where as a one-off you go through and encode all of your documents.
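Here is a minimal sketch of that document-scoring step with the sentence-transformers library. The model name below is a common small embedding model that produces 384-dimensional vectors, which matches the description above, but it's an assumption rather than the exact model used in the talk, and the paragraph strings are stand-ins for the real Wikipedia text:

```python
from sentence_transformers import SentenceTransformer, util

emb_model = SentenceTransformer("all-MiniLM-L6-v2")   # small model with 384-d embeddings

jh_para = "Jeremy Howard is an Australian data scientist and entrepreneur..."   # first paragraph of his page
tb_para = "Sir Anthony Charles Lynton Blair is a British former politician..."  # first paragraph of Blair's page
question = "Who is Jeremy Howard?"

q_emb, jh_emb, tb_emb = emb_model.encode([question, jh_para, tb_para])

print(util.cos_sim(q_emb, jh_emb))  # higher: use this document as context
print(util.cos_sim(q_emb, tb_emb))  # lower
```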
In fact there are lots of pre-built systems for this. Here's an example of one called H2O GPT, just something I've got running on my computer, an open-source thing written in Python sitting here on port 7860, so I've gone to localhost:7860. What I did was click upload and upload a bunch of papers. For example, we can look at the ULMFiT paper that Sebastian Ruder and I did, and you can see it's taken the PDF and turned it into a slightly crappy text format and created an embedding for each section. So I can ask it "What is ULMFiT?", hit enter, and you can see it's now saying "based on the information provided in the context...", so it's showing us it was given some context. Which context did it get? Here are the things it found, so these are like citations: "...performance by leveraging the knowledge and adapting it to the specific task at hand." Okay, what techniques, be more specific, does ULMFiT use? Let's see how it goes. There we go, the three steps: pre-train, fine-tune the language model, fine-tune the classifier. So it's not bad. It's not amazing; the context in this particular case is pretty small, and in particular, if you think about how that embedding search worked, you can't really use normal follow-ups. For example, it said "fine-tuning a classifier", so I could ask "What classifier is used?" The problem is that no context is sent to the embedding model for that follow-up, so it has no idea I'm talking about ULMFiT, and generally speaking it will do a terrible job. You can see it says a RoBERTa model is used, which it isn't, and if I look at the sources, they're no longer referring to Howard and Ruder at all. Anyway, you can see the basic idea. This is called retrieval-augmented generation, RAG. It's a nifty approach, but you have to do it with some care. There are lots of these private GPT things out there; the H2O GPT web page actually does a fantastic job of listing and comparing them, so if you want to run a private GPT there's no shortage of options, and you can have your retrieval-augmented generation. I've only tried this one, H2O GPT. I don't love it; it's all right.

Finally, I want to talk about what's perhaps the most interesting option we have, which is to do our own fine-tuning. Fine-tuning is cool because rather than just retrieving documents that might have useful context, we can actually change the model's behaviour based on the documents we have available, and I'm going to show you a really interesting example. We're going to fine-tune using a text-to-SQL dataset: it has examples consisting of a schema for a table in a database, a question, and, as the answer, the correct SQL to answer that question given that schema. I was hoping we could use this to create what could be a handy tool for business users, where they type an English question and the SQL is generated for them automatically. I don't know if it would actually work in practice or not; this is just a fun idea I thought we'd try out, and I know there are lots of startups out there trying to do this more seriously, but it's quite cool because I got it working today in just a couple of hours.

What we do is use the Hugging Face datasets library. Just as the Hugging Face Hub has lots of models stored on it, Hugging Face Datasets has lots of datasets stored on it, so instead of Transformers, which is what we use to grab models, we use datasets, pass in the name of the person and the name of their repo, and it grabs the dataset. We can take a look at it: it has a training set with features, and here's an example from the training set, which looks a lot like what we've just seen.

Now we want to fine-tune a model. We could do that in a notebook from scratch; it takes maybe a hundred or so lines of code, not too much, but given the time constraints, and also because I thought why not, let's use something that's ready to go. There's something called Axolotl, which is quite nice in my opinion, another very nice open-source piece of software. Again you can just pip install it, and it has things like GPTQ and 16-bit and so forth ready to go. It comes with a whole bunch of example configs for things it already knows how to do, including Llama 2 examples, so I copied the Llama 2 example and created a SQL example: I basically just told it the path to the dataset I want and the dataset type, and left pretty much everything else the same. Then I ran the command from its README, accelerate launch axolotl, passing in my YAML, and that took about an hour on my GPU. At the end of the hour it had created a qlora-out directory. The Q stands for quantize, because I was creating a smaller, quantized model, and LoRA I'm not going to talk about today, but it's a very cool technique that also makes models smaller and lets you use bigger models on smaller GPUs for training.
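A sketch of the data-loading step and the training command just described. The dataset repo name is a placeholder for whichever text-to-SQL dataset you pick on the Hub, the field names are illustrative, and the Axolotl invocation follows the general pattern of its README at the time rather than a guaranteed-stable CLI:

```python
from datasets import load_dataset

# Hypothetical repo name: substitute the actual user/repo of the text-to-SQL dataset you choose.
ds = load_dataset("someuser/text-to-sql")
print(ds["train"][0])   # e.g. {'context': 'CREATE TABLE ...', 'question': '...', 'answer': 'SELECT ...'}

# Fine-tuning itself is driven by a YAML config copied from one of Axolotl's Llama 2 examples,
# with the dataset path and type changed, then launched from the shell, roughly:
#   accelerate launch -m axolotl.cli.train my_sql_config.yml
```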
So I trained it, and then I thought, okay, let's create our own test example. We have this context and this question, "get the count of competition hosts by theme", and I'm not going to pass it an answer, so I'll just ignore that field. Again I found out what prompt format they were using and created a SQL prompt function, so here's what I've got: "Use the following contextual information to answer the question. Context: CREATE TABLE ... Question: get the count of competition hosts by theme." Then I tokenized that, called generate, and the answer was: SELECT COUNT(hosts), theme FROM farm_competition GROUP BY theme. That is correct. I think that's pretty remarkable: it took me about an hour to figure out how to do it and an hour to actually do the training, and at the end of that we've got something that converts prose into SQL based on a schema, which I think is a really exciting idea.

The only other thing I want to briefly mention is doing stuff on Macs. If you've got a Mac there are a couple of really good options: MLC and llama.cpp. MLC in particular I think is underappreciated; it's a really nice project where you can run language models on literally iPhones, Android, web browsers, everything. It's really cool. I'm now on my Mac here, and I've got a tiny little Python program called chat: it imports the chat module, imports a quantized 7B model, and asks the question "What is the meaning of life?" So let's try it: python chat.py. I only installed this earlier today, and I haven't done much on Macs before, but I was pretty impressed to see it doing a good job here: "The meaning of life is complex and philosophical... some people might find meaning in their relationships with others, their impact on the world..." and so on, and it's doing 9.6 tokens per second. So there you go, that's running a model on a Mac.

Another option you've probably heard about is llama.cpp. It runs on lots of different things as well, including Macs and also on CUDA. It uses a different format called GGUF, and you can use it from Python: even though it's a C++ thing, it has a Python wrapper. You can download a GGUF file from Hugging Face; there are lots of different ones, all documented, and you can pick how big a file you want. Then you just say Llama, model_path equals that GGUF file, it spits out lots and lots of gunk, and then, if I've called that llm, I can say: llm, "Name the planets of the solar system", 32 tokens. And there we are: "1. Pluto (no longer considered a planet), 2. Mercury, 3. Venus, 4. Earth, 5. Mars, 6..." and we ran out of tokens. So again, just to show you, there are all these different options. I would say that if you've got an Nvidia graphics card and you're a reasonably capable Python programmer, you'd probably want to use PyTorch and the Hugging Face ecosystem, but these things might change over time, and certainly a lot of stuff is coming into llama.cpp pretty quickly.
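A minimal sketch of that llama.cpp route via the llama-cpp-python wrapper; the GGUF file name is a placeholder for whichever quantized model file you download from the Hub:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical local file: download any GGUF model from Hugging Face and point at it here.
llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf")  # prints a lot of loading info

out = llm("Name the planets of the solar system: ", max_tokens=32)
print(out["choices"][0]["text"])
```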
It's developing very fast. As you can see, there's a lot you can do right now with language models, particularly if you're pretty comfortable as a Python programmer, and I think it's a really exciting time to get involved. In some ways it's a frustrating time to get involved, because it's very early, a lot of stuff has weird little edge cases, and things can be tricky to install. There are a lot of great Discord channels, though. fast.ai has our own Discord, so feel free to Google for "fast.ai Discord" and drop in; we've got a channel called generative, and you're welcome to ask questions or tell us about what you're finding. It's definitely something where you want to be getting help from other people on this journey, because it is very early days and people are still figuring things out as we go. But I think it's an exciting time to be doing this stuff, I'm really enjoying it, and I hope this has given some of you a useful starting point on your own journey. I hope you found this useful. Thanks for listening. Bye.
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_Percy_Liang_Guest_Lecture_I_2022_I_Lecture_17.txt
OK, hi, everyone. It's my pleasure to introduce Percy, who's giving a guest lecture. For those of you who are in the course, just a reminder that the poster session is on Wednesday next week, and your final project is due the following Monday, two weeks from two days ago. But I'm really excited to introduce Percy. Percy is an associate professor here at Stanford. He's done some really cool work in machine learning and natural language processing, both on the theoretical side and the empirical side, and most recently he started the Stanford Center for Research on Foundation Models, which has been a really cool effort and is pretty related to some of the topics in this course as well. So yeah, looking forward to this talk.

Great. All right. Thanks, Chelsea, for the introduction, and thanks, everyone, for coming. I think you've seen in-context learning a little bit already, I saw it in one of the slides, so hopefully we'll do a bit more of a deep dive here. The story starts in 2020. A bunch of people at OpenAI decided to gather a ton of text data from the internet and construct a big transformer, and asked the model to do one simple thing: predict the next word, over and over again, over every single token in this dataset. They ran it on, I think, something like 10,000 GPUs for four months. All of you have probably seen or played with GPT-3 by now, but the result I want to focus on is this ability to do in-context learning. What is in-context learning? Just as a review, it's the idea that you can prompt a language model with a string which is the concatenation of a bunch of examples, something that looks like this, together with a new test example, and ask the model to produce the answer. If you think about it, this is sort of crazy. Why would you expect that to work? The language model is only supposed to generate language; it's not really supposed to solve any tasks, let alone do some kind of meta-learning. You might think, well, this works because there are lots of examples that look like this on the internet, so it probably just copied something. So we tried really hard to see if we could break that explanation. Here's a task: we prompt it with input, here's a date, and output, and we came up with a reformatting that was definitely not on the internet as of 2021, and it could reformat dates in this way as well. So for the nonbelievers who think, OK, GPT-3 was trained on the internet, it's just memorizing: well, this is a clear demonstration that it's not just memorizing; it's actually learning some abstraction that helps it solve these tasks.

And the scale matters here. This is from the original GPT-3 paper: if you were playing around with small, 1-billion-parameter models, nothing was really working at all, and it's only when you get up to 175 billion that you get in-context learning. This is simply mind-blowing, and ever since, I've been obsessed with the problem of figuring out why this works, because this is not the way machine learning is supposed to work. So hopefully we can try to understand a little of this in this talk. And why does in-context learning matter? It's not just a random curiosity; there are two reasons, scientific and practical. On the scientific front, this is an example of an emergent phenomenon.
GPT-3 was not built to do in-context learning: the developers did not say, we want in-context learning, so we're going to train it this way. It just emerged, somehow, from the data. Second, there's a conventional wisdom in machine learning that you train and then you test, and if your training distribution is like your test distribution then you win; otherwise all bets are off. This is about as far from that setting as you can get: the training distribution is next-word prediction, and the test is this wide range of downstream tasks, some of which have never been seen at training time, and yet something still works. The interesting thing about emergence is that here we have in-context learning, which will occupy the whole of this talk, but what else is there? People have looked at other emergent behavior, like chain of thought and so on, so there's a vast set of capabilities that we're barely scratching the surface of.

Then there's the practical side, which is that in-context learning really presents a paradigm shift in the way we build ML or AI systems. Now you can prototype new tasks in an afternoon rather than setting up some elaborate data collection process, which changes the way you even approach things. And I think it's important to realize that in the real world, things don't come pre-packaged as a dataset you can just download from Hugging Face and run. If you're actually trying to solve a real problem, there's usually a vague idea of what you want to do and maybe some messy data, and this kind of fast prototyping with language models lets you get a lot farther than if your first step was to collect and label a bunch of data.

Diving a little into the details, there's a contrast here between two types of learning. The first is what I'll call standard learning, which is gradient-based; everyone's familiar with that, you take gradients. In in-context learning, the key operation is not gradient descent but conditioning. The language model is a distribution over sequences of tokens, so what you're really doing is conditioning on a sequence of tokens and then asking the model to predict the next thing. We'll come back to the issue of conditioning in the second part of the talk.

Since this is a meta-learning class, I thought I'd try to clarify the relationship between meta-learning and in-context learning, which often gets blurred. There's a form of meta-learning, which you've talked about, called black-box meta-learning. I think of meta-learning as referring to the framework: you have training data consisting of a collection of tasks, and you want some training procedure so that the model, in this case a black-box model, can learn new tasks. So meta-learning is about the connection between training time and inference time. In-context learning really refers to an ability that a model has, independent of where it came from: we can talk about the in-context learning ability of a transformer whether it was trained on supervised examples or with a language modeling objective. Feel free to interrupt if you have any questions. OK, so in order to understand in-context learning, I want to break things down into two pieces.
One is the question of how a fixed model, never mind where it came from, can even perform in-context learning. It's just a giant transformer: it gets fed a sequence of examples and has to form some association between the x's and the y's in order to predict. How is that possible? The second question is how you get one of these models from training, say, on next-word prediction. I'm not going to answer these two questions fully, but we're going to try to make some progress.

How can we make progress? There are many things going on with GPT-3, so let me try to break it down. There's the question of data: GPT-3 was trained on a large web crawl. What is necessary there? Can we demonstrate in-context learning with synthetic data? There's the question of model architecture: we've seen in-context learning work for transformers; what about RNNs or mixtures of experts? And there's the training objective: does it have to be autoregressive, or could it be a contrastive or masked language modeling objective? Then there's the question of how you study each of these components. You can study things theoretically, where you develop a toy model and prove analytically why in-context learning works. You can run synthetic experiments, where you develop a simple setting and run things so you get clean conclusions. And there are real-world experiments, where you run things on real language. There are trade-offs: ultimately we want results in the real world, but that's really messy and very expensive, and remember, in-context learning only shows up at scale, so you can't hope to do that many real-world experiments. What we're going to focus on is the bolded pieces here: synthetic experiments and a little bit of theory, to understand in-context learning with respect to both the architecture and the data. We're going to talk about two works: the first tries to get at the architecture question, and the second at the data question.

The first paper is with Shivam, Dimitris, and Greg Valiant, appearing at NeurIPS. In-context learning can solve all these different tasks, but there's a nagging question that I alluded to earlier: are these models actually doing any learning at all, or is it just pattern matching? You should be really suspicious of these language models. So we want to formalize the problem a bit, because what even is the definition of in-context learning? Here is a definition that captures at least one aspect of it: in-context learning of a function class. This harks back to what people do in statistical learning theory, where learning means you define a function class, you create examples from a function in that class, and you see whether your learning algorithm can figure out which function it is. We're going to play the same game here. For example, take linear functions, the set of all linear functions in, say, 20 dimensions. You sample a function, then you sample random inputs, d-dimensional vectors, and you say: here's x1 and the function applied to x1, here's x2 and the function applied to x2, and these are the inputs to the model.
And I want the model to output the function value of the last input. The model architecture we're going to look at for this first part of the talk is a transformer. It's worth noting that x here is real-valued, so this is not a language transformer: x is basically fed in directly in place of the word-embedding layer, and instead of outputting a softmax over tokens, the transformer has a linear layer attached directly and is asked to do regression with the squared loss. So this is a slight deviation from a pure language model, which is why we're talking about transformers rather than language models; there's no language or text here at all.

So what are we going to do? We build this transformer by sampling functions over and over again, and for each function we sample data for that function and ask the transformer to reconstruct the labels. This is done from scratch, just to make it very clear what's going on, rather than pre-training and fine-tuning. So what can this trained transformer do? Let's start with linear functions in 20 dimensions. This plot shows, as you increase the number of in-context examples, the error on a fresh draw, and here we're comparing against least squares. This is what you would expect: there's no noise in the problem, the dimensionality is 20, which means that once you get 20 points you basically know the function, and least squares is optimal, you can't do better than that. And the transformer is able to match the performance of least squares. As a check, we also tried some naive things like averaging and nearest neighbors, and they just don't work. So the transformer seems to be mimicking the behavior of the optimal algorithm here, least squares.

This was pretty cool, but you might still be suspicious: maybe it just saw enough examples and memorized all the possible linear functions. If you do the math, there are a lot of linear functions; even counting only functions that are epsilon apart, the number is exponential in the dimension, and exponential in 20 is pretty big, so it has definitely not seen all the linear functions. But you should still be suspicious, so let's probe it a little bit. Yes, question? When you say it's doing least squares, are you measuring whether it's computing the pseudo-inverse or something like that? So the question is, how do you know it's doing least squares. We're only looking at the prediction error, so it's almost certainly not literally implementing the least squares algorithm, and I'll show you that in a moment, but at least on this distribution it behaves as if you had run least squares. Yeah, question back there. Does the order of the input examples matter in this case; if you re-ordered x1, x2, x3, would you get the same results? So the question is, does the order of the examples matter. In this case it doesn't really matter, because the x's are drawn IID, so there's no information in the ordering.
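To make the setup and the least-squares comparison concrete, here is a minimal sketch of how one batch of synthetic in-context examples could be generated, together with the least-squares baseline the transformer is compared against. The dimensions and prompt layout follow the description above rather than the paper's exact code:

```python
import torch

d, n_points = 20, 40                        # dimension and number of in-context examples per prompt

w = torch.randn(d)                           # sample a linear function f(x) = w . x
xs = torch.randn(n_points, d)                # sample inputs x_1, ..., x_n from N(0, I)
ys = xs @ w                                  # their (noiseless) labels

# The prompt interleaves (x_1, y_1, ..., x_{k-1}, y_{k-1}, x_k); the model must predict y_k.
# Baseline: least-squares fit on the first k-1 pairs, evaluated on the k-th point.
k = 25
w_hat = torch.linalg.lstsq(xs[: k - 1], ys[: k - 1, None]).solution  # shape (d, 1)
pred = xs[k] @ w_hat
print((pred - ys[k]).abs())                  # ~0 once k-1 >= d, since there is no noise
```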
OK, so let's probe it. The real check of whether this model can do in-context learning is to give it inputs it hasn't seen at training time, and not just hasn't seen, but drawn from a different distribution. The distributions here are a little complicated, so let me talk through them. There's the distribution over x's at test time, and when I say test and train I really mean meta-test and meta-train; when I say query, that's what you might call the test point. So there's a distribution over the x's, a distribution over the y's given by the function, and then the query point, and there are versions of these distributions both for meta-test and for meta-train, so there are really six different distributions we can vary.

Here's one starting point. Remember, at meta-training time we draw examples from a standard Gaussian. At test time, the x's that serve as in-context examples come from a different orthant, or quadrant, than the query. Let's see what happens: it degrades a little, but it basically matches the behavior of least squares; maybe the transition isn't as sharp, it needs a few more examples to figure out what's going on, but not too bad. What happens if, whereas at training time we had identity covariance, at test time all the x's and the query are drawn from a distribution with a skewed covariance? Here you definitely get some degradation: the error doesn't hit zero as it should. Least squares would hit zero, because there's no meta-training in least squares; it's a fixed algorithm. But it's not bad. So it's not exactly least squares, but it's approximately least squares.

Here's something really interesting: what happens if you add label noise at inference time? In training there's no noise; we use the same transformer, and at test time we add some label noise. Here least squares actually blows up, and this is a phenomenon called double descent, which has been pretty well studied in learning theory. The transformer, interestingly, has a similar spike. I don't know if this is good or bad, but at least qualitatively it has some similarities to least squares, as measured by this reaction to noise, even though it's not exactly least squares. So the conclusion there is that, yes, it sort of works like least squares.

What about going beyond linear function classes? Let's look at sparse linear functions, where the weight vector is zero in a lot of places, except for maybe three entries. Here the optimal thing is not least squares, which would take 20 examples to figure out the solution; it's the lasso algorithm, which does L1 regularization. And we show that the transformer actually learns to behave like the lasso. This is pretty cool, because now the transformer is learning something non-trivial: the lasso is not a trivial algorithm, and exploiting sparsity is not trivial. Note that we did have to train the transformer to do this; it's not magic, otherwise it wouldn't know about sparsity at all. What about 2-layer ReLU networks?
What about 2-layer ReLU networks? So here, the baseline that we looked at is gradient descent, and it basically matches gradient descent, so that's nice. Yeah. What if you took the original transformer that you trained on non-sparse things and applied it here, would it match least squares? So the question is, if I took the original transformer and applied it to the sparse problem. That should match the least squares behavior, because a sparse linear function is just a special case of a linear function, and we already know that the previous transformer acted like least squares on a fairly wide range of distributions. Yeah. So I have a question about the label noise from earlier, what kind of noise model was it? Was it always-- Yeah, so the label noise is just adding Gaussian noise. How well would you expect it to work if you gave it a complete outlier for just one of the y's? So the question is, how well would it work if you gave it one y which is way out there? That would probably break least squares, unless you regularize. So in least squares there's no regularization, it's not ridge regression, so it's going to just be thrown off by that label. And the transformer, we didn't do that exact experiment, but I imagine that it would also be distracted. Yeah, Chelsea. Given that least squares is the optimal solution when you don't have noise, do you think that these sorts of findings are surprising? I guess, yeah, do you think it's surprising, or did these results kind of differ from what you were expecting? So the question is, basically, is this result surprising? It was not obvious to me that this transformer would have this type of behavior. And in fact, I'm not going to talk about this, but we also ran experiments with LSTMs. And LSTMs look like transformers in-distribution, but in this case, LSTMs did not have the double descent. So it's pretty non-obvious, I think; there's some dependence on the architecture. Different architectures have different inductive biases, which lead to different in-context algorithms being learned. Is that just because the transformer is fitting the data better? Is it a better universal function approximator, basically? So the question is, is that because the transformer is just-- Fitting the training data better? Fitting the training data better. I don't think so. I think it is inductive bias, because we did, I mean, there's still more work to be done, but we tried to make the LSTM large, so it wasn't a capacity issue. It could be a training issue, LSTMs are hard to train, so not quite sure. Yeah. What's the intuition for why double descent happens? The intuition behind why double descent happens. This is maybe a longer question, so maybe we should take that offline, but one quick answer is that it happens in the over-parameterized regime. So statistically, what you would expect is, OK, you fit, and then you start overfitting when there's noise. If there's no noise, then there's no overfitting, because you just nail it. And what people have observed is that with over-parameterization, when the number of dimensions is larger than the number of examples, the error improves again, but I'm happy to chat more later. Let me go on since we have a lot to cover, but good questions. OK, so finally, we look at decision trees. So for decision trees, we looked at the greedy algorithm, XGBoost, which is the state-of-the-art decision tree learning algorithm. And here, the transformer actually outperforms it, at least on this sort of synthetic data distribution.
So this is not claiming that you should ditch XGBoost and use this transformer, but this was sort of curious, I think, that the transformer is able to learn some sort of algorithm. At least on this distribution, it's outperforming sort of hand-coded algorithms, so to speak. Model size clearly matters, but what's interesting is that model size is especially important when you look at kind of robustness and extrapolation out of distribution. For standard, it seems, like, OK, there's a steady improvement as you increase the model size, but it sort of really matters if you're extrapolating. OK. So let me summarize. So one conceptual, important thing to take away is that we're defining in-context learning of a function class. So this sort of is maybe an important concept, not to think about in-context learning is like some fuzzy thing, but we're talking about rigorously in-context learning of a function class, and this is a property of a model. But in order to sort of prove the existence of these models, we can train Transformers to do in-context learning on these linear functions. We saw we could do sparse linear functions, neural networks decision trees. We also evaluate the robustness of distributed prompts, which I think is really crucial if you want to understand. Because many of the differences between model size and LSTMs as Transformers, you don't really see unless you go out of distribution, because that's where the inductive biases really kick in. And I think it's sort of interesting to think about what algorithms these transformers are representing. There's still a lot of open questions here. This model, when you condition on these in-context examples, is a function. It's not a linear function, certainly, but it is a function that's local, I think, within a ball, behaves linearly, and it'll be interesting to understand what function that actually is. I alluded to RNNs and LSTMs, how much of this is specific to transformers, as opposed to other architectures. How can we look underneath the hood to see what the transformers are actually doing mechanistically? There's this follow-up paper, which is really interesting where they actually are able to construct a Transformer by setting its weights somehow and show that that can do linear regression. They also do some more probing experiments to look inside the Transformer, whereas we are only looking at behaviors. One question is, can we get algorithmic insights? So this is something I'm excited about because this will maybe teach us something about algorithm design. And the idea that the decision trees case the Transformer is actually able to do so much better suggests maybe there are other sorts of algorithms or principles here that we can pull out. And finally, this is all on synthetic tasks. It'd be great to tie this back into real tasks with knowledge. And the exclusion of knowledge here is deliberate here because we wanted to understand in-context learning, really, the learning part, and you can think about this as a pure learning. All we know is it's a learning function, there's no knowledge. It's just figure out what the linear function is. This is just learning. A lot of in-context examples that you see in literature bring in prior knowledge about translation. There's no way, obviously, you can learn how to translate sentences from five examples. It has to be knowledge, and how does that work in conjunction with this learning ability? That's, I think, a really interesting question. Question back there? Yeah, just a quick one here. 
It seems like there is some threshold rate based on model size, the number of parameters, at which point in-context learning happens. In this work, did you do an analysis on that? For instance, if you reduced the number of parameters you had on your model, would it not learn to do this lasso-type regression? Is there a way to estimate how many parameters you need for your model, given some level of task difficulty? Yeah, that's a great question. So the question is, basically, how big of a model do you need for certain types of behaviors? We have this experiment, which shows that, I mean, size definitely does matter. I think it will be really interesting to do a more careful scaling laws type of analysis, where you train a sequence of models from small to large and maybe you increase the depth or the number of tension heads and look at these dimensions of scaling and then track the different types of behaviors like, did it learn the lasso? Did it match? Does it have double descent, and so on. Yeah, that would be interesting follow-up work. Yeah. Would there be a special case of [INAUDIBLE] networks when you have [INAUDIBLE]-- I'm sorry-- [INAUDIBLE] fully commit to graph. So I'm wondering, does this show an [INAUDIBLE] have seen studies that relate in-context learning from transformers to graph neural networks? So the question is, transformers can be seen as a special case of graph neural networks. Are there any works that explore in-context learning with graph neural networks? I mean, there's an independently, I think, interest in thinking about how you do in-context learning with graph-structured data, which is, I think, interesting to explore. I think here, I guess there's the graph, I guess that sense what you're trying to get out of the graph neural network. Here, you have a sequence of x-y pairs, and there is, in some sense, a symmetry. There's sort of all k squared different kind of connections that make sense. So I don't know if there would be a natural graph structure. I guess that the asymmetry between the x and the y tokens maybe matter. So you could try to sparsify the graph or play with attention heads somehow. Maybe a broader question is the Transformer, we have positional embeddings here, which seems rather unnatural from the point of view of it's not order invariant. And someone asks about the dependence on ordering of examples. Maybe you can build that invariance directly into the model. Any other questions? Yeah. Can you do the experiments without -- it still works without that, right? So the question is, do we do any experiments without the position or [INAUDIBLE]?? It wouldn't work because then you don't know which y goes with which x because if it's just a bag of x-y pairs, you don't-- Oh, your x and y's separated? Yeah, they're separate tokens. Okay, so like you would [INAUDIBLE]?? Yeah. OK. So to the outline. So we talked about what Transformers can learn in context, looking at, I think about it, really, as an architectural question. Can the Transformer do certain things and doing a bunch of synthetic experiments on well-defined function classes to explore the limits of Transformers. Now, let's try to understand the role of data. Although the way we'll get at the data maybe is not where you think it would be. So this is an ICLR paper with Michael Xie, Aditi Ranganathan, and Tengyu Ma. And remember, there's two types of standard learning, you think about gradients. And in-context learning is this weird thing where you condition to do learning. 
But maybe it's not, actually, that weird if you think about it through a Bayesian lens. And Bayesian inference might be a different paradigm from, I guess, the norm in ML these days, so let's walk through what Bayesian inference is. So imagine you have a latent random variable theta, which corresponds to a task. You can think about it as a function, a linear function if you want, which is unknown. That's why it's not shaded. And you think about building a generative model over, well, "generative model" is an overloaded term these days, so let's say a probabilistic framework for thinking about how theta is related to the observed variables. So here's a simple example. Let's suppose that we have x1 and y1. So suppose you can generate x1, or you can just condition on it, it doesn't matter. But the point is that y is generated given x and theta. And independently, for each of the k examples, we have y given x and theta. And then you have a query point, which is just another IID example, and so you ask: what is the probability of y query given x query and theta? And now, of course, you don't know what theta is. So being a good Bayesian, you would just try to marginalize it out, and that's what this equation does. So this is a very classic kind of Bayesian analysis, where you have this posterior distribution, where you condition on everything that you observe, which is this. And then you look at the posterior distribution over theta, so you try to guess what theta is. And then you basically weight your prediction. So this is the prediction if you had theta, how would I predict y query given x query, and you're basically averaging over possible values of theta, where the averaging is given by the posterior. So this object, in Bayesian analysis, is called the posterior predictive distribution. It doesn't have a theta in it because it's been marginalized out, so it just looks like x1 y1, all the way to xk yk, x query, and a probability distribution over y query. And this is exactly the same form that we've been playing around with when we talk about doing in-context learning. OK. So through this lens, what we can think these Transformers are doing is that they're trying to fit this posterior predictive distribution directly. OK? But there could be this underlying structure here that is latent. But the Transformer doesn't care. It's just going to fit this distribution, and maybe it has some implicit notion of theta, maybe it doesn't. Who knows. So what will be useful to think about is this posterior predictive distribution, and we're going to start abstracting away from the architecture, and we're just going to talk about the distribution. OK. So remember our questions. How can a fixed model, a Transformer, perform in-context learning? And the first part of the talk showed some empirical evidence that the Transformer can do in-context learning in a fairly wide range of non-trivial settings, if it is shown examples of the task, essentially. Now, there is some extrapolation when things are out of domain, but largely, you're showing the Transformer here's what linear regression looks like. And through this lens, well, this graphical model is an accurate depiction of the linear regression setup, for appropriate choices of distributions. And so you can think about it as, what we're doing is, we're just fitting this distribution.
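Written out, the posterior predictive distribution described above, conditioning on the in-context examples and marginalizing out theta, is:

\[
p(y_{\mathrm{query}} \mid x_1, y_1, \ldots, x_k, y_k, x_{\mathrm{query}})
= \int p(y_{\mathrm{query}} \mid x_{\mathrm{query}}, \theta)\,
       p(\theta \mid x_1, y_1, \ldots, x_k, y_k)\, d\theta .
\]

The first factor is the prediction you would make if you knew theta; the second is the posterior over theta given the in-context examples. On this reading, a model trained on such prompts is being asked to fit the left-hand side directly.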
And if you believe that a transformer is some universal function approximator, if you give it enough data, it should just work. I think, still, it's non-trivial that it can do it in a reasonable amount of time. Because I mean, you can invoke universal function approximator to-- this could be a pretty complex function, so that the transformer can learn it and you can actually run SGD to do it. It's not obvious. That's what the first part of the talk showed that it sort of works. But now, we're going to move on to the second question is, how does this model arise from training? And the key thing is that remember in GPT-3, it's just trained on next word prediction. So it's not explicitly training for these tasks, which is a whole point of emergent behavior. And this is really, I think, the harder question or conceptually harder question to ask. So the main challenge here is this distribution shift. We're training on basically, internet crawl, and we're prompting it with examples that don't show up at training or even out of distribution. So the pre-training distribution is not the same as the prompting distribution and can be, actually, pretty wildly different. So in what settings can this actually work? So we're going to try to make some progress by defining a simple model, where the pre-training distribution and the prompting distribution differ in a well-controlled way, and then we're going to see if we can make some progress. So we're going to consider pre-training distribution as a mixture of HMMs, and the idea is that you have this concept, theta, that encodes, for example, the topic, like, oh, this is a Wikipedia biography, let's say. And then given that, which encodes let's say, the transitions of HMM, then we're going to generate text from that HMM. OK? So to generate a document, you first sample transitions from this HMM, and then you're going to sample the hidden states from HMM and then the sample, the emissions, given the hidden states. OK? And then you get your text and you remember, you hide theta because you don't see it, but that's the data-generating process for the pre-training data. So now, let's think about what language modeling would try to do, if it's asked to just predict this text. I would argue that it implicitly has to infer the target concept somehow. I know this is a little bit hand wavy, but bear with me. So if you have this article about Albert Einstein, it kind of needs to figure out, OK, well, this is probably like a Wikipedia biography and invoke-- I'm going to generate things that look like Wikipedia biographies. So the LM will probably try to implicitly infer theta, approximately somehow and then try to sample from the HMM. So of course, the Transformer is not literally doing this, but this is just kind of a cartoon of what it might be thinking. OK, now, what about the prompting distribution? We're going to find the prompting distribution as, we're going to choose a target concept, theta star, so we're going to fix it. And then we're going to generate from the HMM, but we're going to break it up into independent pieces. And here, we can have Albert Einstein was German, delimiter, Gandhi was an Indian, delimiter, and so on, right? So this part is generated from the HMM, and then once you hit a delimiter, you reset and go back to sort of the initial state and you generate from HMM again, and you reset and you generate from HMM again, OK? 
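Here is a small sketch of the two generative processes just described: long single-concept documents for pre-training, and delimiter-separated independent segments from one fixed concept for prompting. The numbers of concepts, hidden states, and vocabulary items are made-up illustrative values, not the settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_concepts, n_hidden, vocab = 5, 10, 50
DELIM = vocab  # reserve one extra token id as the delimiter

def random_stochastic(shape):
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)

# Each latent concept theta is an HMM: a transition matrix plus an emission matrix.
concepts = [
    {"trans": random_stochastic((n_hidden, n_hidden)),
     "emit": random_stochastic((n_hidden, vocab))}
    for _ in range(n_concepts)
]

def sample_hmm(theta, length):
    """Sample tokens from one HMM: walk the hidden chain, emit at each step."""
    tokens, h = [], rng.integers(n_hidden)      # uniform start state, for simplicity
    for _ in range(length):
        tokens.append(int(rng.choice(vocab, p=theta["emit"][h])))
        h = int(rng.choice(n_hidden, p=theta["trans"][h]))
    return tokens

def sample_document(length=512):
    """Pre-training document: pick a concept, then one long continuous run of its HMM."""
    theta = concepts[rng.integers(n_concepts)]
    return sample_hmm(theta, length)

def sample_prompt(theta_star, n_examples=8, example_len=5):
    """Prompt: short *independent* segments from a fixed concept, joined by delimiters."""
    prompt = []
    for _ in range(n_examples):
        prompt += sample_hmm(theta_star, example_len) + [DELIM]
    return prompt

print(sample_document(20))
print(sample_prompt(concepts[0]))
```

The restarts in sample_prompt are exactly the mismatch discussed next: each delimiter forces a transition that is rare under the pre-training distribution.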
So this distribution is different from the training distribution because it has, basically, these restarts as opposed to one long continuous HMM. And this corresponds to the fact that, in general, in documents you have topical coherence; with Albert Einstein, you do have a whole Wikipedia page about Albert Einstein, and then in these in-context examples, you're sort of quickly changing the topic. It's still an HMM, but you're resetting the topic in a sense, OK? So now, the question is, what will the language model do on this prompting distribution? And what you would like, what you would hope, is that the language model could still infer this target concept, and then it can generate from this HMM. And then if it knows theta star, you would be done, because this, "Marie Curie was Polish", is just drawn from the HMM. But the difficulty is the distribution mismatch. You're now going to condition this language model on samples from the prompt distribution. And remember, the prompting distribution is different from the training distribution. So that's the technical hurdle here. Otherwise, it should be clear that it works from standard Bayesian theory. Yeah. In a hidden Markov model, my understanding was that you make observations, and there's like a hidden state or something? So you get a sequence of observations, and there's a hidden sequence of states? But here, it seems like theta is playing the role of the hidden information, or is there also a hidden state that's evolving in the background too [INAUDIBLE]? So the question is, what are the hidden states in the HMM? I haven't shown them here, but you can think about it, let's see, what's the best way to put it: theta is defining the transition probabilities of the HMM. And then the way you sample this text is, you have a hidden state, you transition according to the probabilities specified by theta, and then you emit, given those hidden states. So I haven't shown them because they might not be interpretable. You can think of this as, basically, let's say, 1 through 50 hidden states or something. So the key challenge here is that the prompt distribution is different from the pre-training distribution, and here's a visualization. So in the in-context learning examples, you have transitions according to the HMM, which are in distribution, so that's great. And then you have these low-probability transitions, where you're like, here's a delimiter, and then I'm going to start going from Einstein to Gandhi all of a sudden. So those are low probability under the language model, because you just don't see this kind of text too often at training time. OK. So then what can you do? So here is maybe the most technical slide, and this is still just a sketch. So we proved a result, which makes an assumption. And the assumption, intuitively, is that the signal that you get about theta, meaning the difference between the true concept and any other concept as measured by the observation distribution, has to be larger than the error that you get from these low-probability transitions. So these transitions are the source of distribution shift, and if you can bound the distribution shift in terms of some separation, then we show that in-context learning works, in the sense that as the number of examples k goes to infinity, this language model will ultimately predict the right thing. So it's an asymptotic result, but this is the type of statement.
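As an informal paraphrase of that statement, not the precise theorem, whose conditions take more care to state: if the signal distinguishing the true concept theta-star from every other concept outweighs the error contributed by the low-probability delimiter transitions, then

\[
\arg\max_{y}\; p_{\mathrm{LM}}\!\left(y \mid x_1, y_1, \ldots, x_k, y_k, x_{\mathrm{query}}\right)
\;\to\;
\arg\max_{y}\; p\!\left(y \mid x_{\mathrm{query}}, \theta^{*}\right)
\qquad \text{as } k \to \infty ,
\]

that is, with enough in-context examples the language model's prediction converges to the prediction you would make if you already knew the target concept.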
So notice that if you didn't have this distribution mismatch, this would just follow from standard Bayesian asymptotics, and that would be fine. So the key thing is that now you have this error that you have to account for, and under some conditions, this still works. So maybe some practical takeaways from the theory are, and you don't need the theory to make these practical takeaways, but it's maybe good to understand them in the context of this theory, that making the prompting distribution as close to the training distribution as possible helps. And you see that a large literature on prompting basically tries to finagle things so that that's the case. For example, if you want to know capitals, then you try to say "Berlin is the capital of Germany" and so on, using natural language, so it looks more in distribution; this is a kind of prompting trick. And you generally want to use delimiters, like newlines or pound signs, that don't increase the probability of inferring the wrong concept. Through this latent concept view, every one of these delimiter transitions risks confusing the model about the concept. And if, instead of a newline, you say something like "birth date", that's going to really confuse the model, so don't do that; using something neutral is helpful. So now I'm going to switch into some empirical studies. The first is that we built a dataset, a generative in-context learning dataset, and the goal is to have something small, so we can run experiments and study things without waiting weeks to train large models. And so there's a pre-training distribution of 1,000 documents. Each document is just one long sequence sampled from an HMM, and it basically looks like this, so it's sort of gibberish. And the prompting distribution is concatenating independent examples. So it's basically the same distribution of gibberish, punctuated by these delimiters. OK? So we train Transformers and LSTMs on the pre-training distribution, which is just these documents, and then condition on this prompting distribution and see if the model can predict the right answer. And as the number of in-context examples increases, we see that the Transformer improves. k here is the length of an example, so here k is 3, for example. And then we see that as the length of an example increases, things get better. This should be natural, because the longer the in-context examples here are, the more in-distribution this prompting distribution looks. And here, LSTMs are actually doing a little bit better than Transformers, which might also be natural, because this data set is basically like an HMM. I mean, it's a mixture of HMMs, so it's not exactly an HMM, but it has a temporal sequence, and maybe that matches the inductive bias of LSTMs better than Transformers, which have to work harder. Effect of model scale. So I think scale is, obviously, a top-of-mind question. Yeah, question? So when k equals 8, it looks like it does almost as well as k equals 10; for the Transformer, do we have an explanation for why that is? So the question is, as k increases, it seems like this is catching up with k equals 10. I think it's just the diminishing gains that you get, because 8 and 10 aren't that different, but going from 3 to 5 is almost like doubling the size. And if k equals 1, then you really don't have much information. [INAUDIBLE] Yeah, there's some noise here, so yeah.
So what happens when you increase the model scale? Now, the common refrain is that when models get bigger, things get better. But one thing that's interesting is that the in-context accuracy will get better as you increase the model size, let's say from 12 to 16, but the validation loss, or the pre-training loss, is the same. So this is sort of interesting, because you're not fitting the data better by using a larger model. We don't really understand why this is the case, but just hypothesizing: maybe there's an inductive bias for in-context learning that improves with model size. Not really sure. Then we tried to see whether this data set captures a lot of the phenomena that you see in the literature. So there's this peculiar thing in the GPT-3 paper that 0-shot is sometimes better than 1-shot for some data sets. And you see the same thing happen in our synthetic dataset as well, which is sort of interesting. There's a paper that did a really interesting experiment. So this is for the skeptics of in-context learning, like, OK, are you actually doing in-context learning or not. Yeah, question? Is the reason why 0-shot learning is better than 1-shot learning that the model [INAUDIBLE] answer of the first initial example? For some of the multiple choice, I can see that maybe the answer is B, the [INAUDIBLE] is over that answer, or if it's a [INAUDIBLE]. Yeah, so the question is, is 0-shot better than 1-shot simply because the model is copying the last answer. And that's generally, I think, what happens, because if you have only one example, then the model can't really tell the difference between the constant function, where I always output that answer, versus actually doing some in-context learning. Yeah. So this is an interesting experiment. So they took the sentiment data set, and what they did was just randomize the labels. So I'm just going to randomly replace positive, neutral, and negative with some other labels. And then you see if in-context learning still works. So if you were doing normal learning, this would be crazy. You would just completely get destroyed, because there's no signal. But what they found, across a bunch of different models and tasks and data sets, is that the amount of drop that you get from gold labels versus random labels is actually not that much. And sometimes, I don't know why this increases, but GPT-3 dips very little compared to not having demonstrations at all. So this is sort of interesting. So clearly, in-context learning is sort of different from normal supervised learning. One way you can kind of see this through the Bayesian lens is that the in-context inputs help us nail down what the target concept is, despite the noisy labels. So you're still conditioning on x, and that's giving you valid information about theta; the only problem is that these y's are now just junk, but this is where it's a little bit speculative. Maybe because they're noise, there's no information in them; certainly, if the y's were simply missing, you'd still be able to infer the task from the x's and then predict. And if they're random, maybe they're sort of marginalized out. I mean, not mathematically, but that's the intuition: you're conditioning on the x's, and the y's are just random, so they're kind of ignored. So this is maybe one example where the Bayesian inference perspective gives you a little bit of insight into this empirical phenomenon.
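Here is a rough sketch of how that gold-versus-random-label comparison can be set up. The prompt format, label set, and the lm_predict callable are all hypothetical stand-ins; this is not the original paper's code or any particular model's API.

```python
import random

LABELS = ["positive", "neutral", "negative"]

def build_prompt(demonstrations, query_text, randomize_labels=False, rng=random.Random(0)):
    """Format (text, label) demonstrations plus a query; optionally replace labels at random."""
    lines = []
    for text, gold_label in demonstrations:
        label = rng.choice(LABELS) if randomize_labels else gold_label
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query_text}\nSentiment:")
    return "\n\n".join(lines)

def accuracy(lm_predict, demonstrations, test_set, randomize_labels):
    """lm_predict is a hypothetical callable: prompt string -> predicted label string."""
    correct = 0
    for query_text, gold in test_set:
        prompt = build_prompt(demonstrations, query_text, randomize_labels)
        correct += int(lm_predict(prompt).strip().lower() == gold)
    return correct / len(test_set)

# The experiment is then just two numbers:
#   accuracy(lm_predict, demos, test_set, randomize_labels=False)   # gold labels
#   accuracy(lm_predict, demos, test_set, randomize_labels=True)    # randomized labels
# and the finding described above is that the gap between them is surprisingly small.
```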
Here's another example, which we don't explain, and it's very similar. So here, suppose the in-context examples are such that the task is to predict whether something is a sport, an animal, or a plant or vegetable, and the labels have been shuffled. So they're not random, but they're deterministically shuffled. And in this case, GPT-3 is actually able to respect that shuffling. So this is something that's not captured by our framework, because this is more of an abstract reasoning capability that GPT-3 has, where it's able to associate, basically, do variable binding. Basically, "sport" is just a variable that has a particular meaning in this context, and I'm going to use it consistently to mean that thing, which is really cool if you think about it, despite the strong prior that sport means sport. Now, in this context, it means, I guess, vegetable. So, mysteries. OK. Let me try to summarize this section. So I want to argue that Bayesian inference is a useful way to think about in-context learning, because really, Bayesian inference is all about conditioning, and in in-context learning, all you're doing is conditioning on your in-context examples. And in Bayesian inference, the key object is the posterior predictive distribution, which is exactly the thing that you're trying to approximate when you're training a model to do in-context learning. The main challenge is to analyze the case where the pre-training and the prompting distribution are just different. We showed some mild theoretical progress: if you bound the error from the low-probability transitions, then you can prove something reasonable. One important note is that all of these results here, in the second part of the talk, are independent of the architecture. And I think this is interesting, because now understanding in-context learning, through this lens, is all about understanding the differences between the pre-training distribution and the prompting distribution, which you have analytically. So there's no sample complexity. It's just these two distributions, and what happens if you condition on a draw from an out-of-distribution prompting distribution and ask the model to predict. And this might be useful for understanding the role of data, because if you think about the role of data, you want to solve these tasks and you have the web corpora, and maybe there's a way to use this framework to understand data distributions and their relationships. And we also have this small synthetic data set, which is based on a mixture of HMMs, which hopefully can allow you to run experiments really quickly and assess and answer some questions. So to wrap it up, we looked at two different projects. One is to understand whether Transformers can do in-context learning of a function class, and the second is thinking about in-context learning as Bayesian inference. So, final slide here. In-context learning is, I think, one of these great mysteries that we have in modern AI. And it's sort of becoming a foundation for many AI applications. So people are using these models to build new applications, and increasingly, applications that didn't exist before, because you can kind of spin them up so quickly. And I think understanding is certainly lacking, and I think it's key both to making scientific progress and also to engineering better systems. Because these in-context learning systems are not reliable, right? We don't understand how they work, and sometimes they work, sometimes they don't. And this talk takes a particular view that synthetic setups can help us more rigorously explore.
So I think it's also valuable to run real experiments on real data, and we're doing a bunch of that, but there's only so much handle you can get on what's going on. And I think we've gotten a few insights into the role of model architectures and the role of data solutions by focusing on this much more controlled setting. And now, the big open question is, what of this can you link up with the real-world settings, which bring in knowledge and prior knowledge? And there's clearly a bunch of things, phenomena that are not captured here. And there's more beyond in-context learning. So there's other emerging phenomenon such as chain of thought, ability to do arithmetic, a lot of other things that are hidden inside these large language models, which are waiting to be kind of discovered. So it's really kind of exciting because it's doing scientific discovery, rather than purely engineering. And maybe one final thought is that we're scrapping the idea of a task, which has been so central to machine learning. Because what these language models allow you to do is, not just define tasks on the fly, but sort of fluidly go between tasks. When you have instructions, which we haven't really analyzed or talked about, paired with maybe a few examples, that feels like that's maybe one task. But then the idea that it's just a language model, it doesn't have a notion of a task specifically, which means that maybe having a task-based framing could be too limited to understand kind of truly what's happening in the language model. So I'll end there and happy to take questions. Yeah. Do you think it's possible to use knowledge distillation to take some of the learnings from [INAUDIBLE] learning and fine-tune [INAUDIBLE] and more into the fine-tuning regime? So the question is, can you use knowledge distillation to move things more into the fine-tuning regime? Well, one answer is that I think there's a great deal of interest in taking language models and doing additional fine tuning to improve their in-context learning behaviors. So this is typically what happens when people do what is called instruction tuning, where you have things that look more like tasks, and this definitely helps. And it helps quite a bit more than just scaling up. Of course, you scale up, and you do this. It's the best of both worlds. I think it takes a little bit away the magic, in my opinion. And for practical purposes, if you just want a good model, you can absolutely do this, because it's going to give you a smaller model that's more performant. But I think from a scientific perspective, the reason I'm so excited about GPT-3 is that, initially, you didn't have to do this, which means that if you didn't try to do something, but you incidentally, did well on it, that means it probably can do a lot of other things beyond our imagination. So this is a principle of generalization and machine learning that, well, if you don't look at it-- I mean, of course, you can fit the test set and then do really well and test that. But if you don't do that and you happen to do well on the test set, then you know that you can probably do well on new examples. And this is taking it to the meta level. If you didn't tune on any, I guess, in meta learning, you could, classically, you can say, OK, well, I'm not going to train on a task, and now, I happen to do well on some tasks, I'll probably do well on other tasks. But this is going on another level, which is like, OK, there's no notion of task. 
And you see some of the things that GPT-3 can do, like write a poem in the style of Shakespeare about in-context learning or something or do derive explanations. And so these are things that explaining something isn't a task, at least in a traditional sense. It's sort of a capability that's coupled with other things, and these things actually do compose in a way. So I think getting out of the x to y mind set could help you, maybe unravel some of these other deeper structures. Of course, this talk is completely about x-y pairs, just to be clear, but I think there's much more to do beyond this. Yeah. I had a question about the example you had with the label reshuffling. You were saying how are you relabeled or shuffled up the labels for the [INAUDIBLE] and the force and things like that. I was able to latch onto that really well. But how [INAUDIBLE] as the example here. Yeah. So it was pretty impressive that it kind of ignored as priors to involve the most recent problem, but then I imagine that if there's any label noise, that could be used if you have a cucumber and vegetable and then beef to [INAUDIBLE]. And the test, would it guess 50/50, or would it commit one? Yeah, so the question is, what happens if you have noise in your labels. I don't know. What would happen [INAUDIBLE]? I mean, what's the optimal thing to do? It's probably to balance the two somehow. Like if you have just a little bit of noise, then maybe you would just ignore it. The random label result shows that it's robust to noise and labels. And so maybe if there's a deterministic structure, where it's not random noise, but if you're flipping, then it latches onto that structure. But if there's no patterns there, then just ignore it. That would be what I would hope that happens, and I wouldn't be surprised if it happens, but no, I think you would need to run a more careful experiment to know for sure. OK. What about complex learning with visual concepts? Because a lot of the talk was about language and a lot of metallurgy traditionally has been on vision. So I'm just curious about the vision, because it's also like different modalities, those in-context learning happened in different modalities, or is there something special about hoping some symbols and stuff? I don't know. I'm just curious. Yeah, that's a really good question. So the question is, what about vision, and does in-context learning happen in vision? So that area is much less developed than in language because we don't have a public GPT-3 that everyone can play with. Flamingo from DeepMind, I guess, is a big model that can do some in-context learning for visual inputs. But it's no has access to it publicly. I do think that something about language definitely makes in-context learning more natural and easier in the sense that you think about languages inputs, typically like here is a movie, we classify it. But language is sort of also operating at a mental level. That's a power of language is you can describe tasks. And the internet probably has things that do look like tasks. I mean, it has to, otherwise, it's only limits to magic, I guess. So there's probably a lot of structured things about language that make in-context learning work. Now, in vision, there's no, I don't think, fundamental reason why you might not-- I guess the thing is what are you doing in vision? If you're doing classification, you're fundamentally going from a continuous visual space to some sort of label space, which is language, in a sense. 
Whereas, in language inputs, you're already in the language space, and that fluidity might help. But I don't know if there's any fundamental reason you can't do in-context learning in vision or what it would look like. I mean, I guess you could certainly train models to do in-context learning vision. That I have a very high faith in the Transformer to learn complicated things. Whether it emerges naturally from crawling the web with a bunch of images, I don't know. Really depends on your data distribution. Chelsea. You mentioned that in-context learning has a lot of desirable properties over fine tuning, but it's also a lot less reliable. Do you think that there are ways to increase that reliability, or is that just something more fundamental as a result of being self-supervised? And are there ways to make it more reliable without getting rid of the nice parts about it that's not surprising? The question is, can you make in-context learning more reliable? So one answer is that if you do instruction tuning, you'll definitely make it more reliable, and this is what everyone who is actually trying to make these work in practice do. But you still might be concerned because, ultimately, you're hoping that this transformer does something for you. And that part, I think, at least these toy experiments on linear regression shows that it's under these pretty clean conditions, it can learn fairly complicated functions. I think that now, if you have 100,000 examples, you probably don't want to just prompt a language model with those examples. But then, yeah, if you have a lot of examples, which you'll probably be fine tuning in or doing some sort of retrieval on those anyway. So I think I guess maybe what I'll say is that future learning shouldn't be reliable in the sense that it's massively under specified. So I mean, you shouldn't expect to do that well unless you have strong priors, even if you're fine tuning, I would say, and in-context learning is sort of in that regime, And empirically, I guess, especially if you're doing instruct tuning, it's not worse. And if you're going into a lot more examples, then you're in a different regime, and you should be doing gradient updates. I've lost track. Here, let's sweep the room. I'll go with you and then sweep this way. Yeah. I was just trying to grasp with the game of having the random label is. Because earlier you talked about how-- do you think it might be coming from the language that they used to describe the label, and then the model was trying to figure out what is of the passed by language that showed up in the label? Because otherwise, I'm just trying to figure out why they have such a big gain in just randomized labels. Yeah, so the question is, why is the random labels kind of working at all? I mean, by looking at the inputs, you can kind of guess what the task should be, I guess is the answer. If you look at these sentences it's like, what might someone want to do with these example inputs and well, maybe, classify sentiment. And I mean, it's not obvious. It's completely obvious, but there has to be the thing that you know what it's doing. Or if positive, negative, and neutral to go like vegetable, fruit, and something [INAUDIBLE].. To finalize the words, I think it should work because-- yeah, I think it will. Well, actually, so you have to be a little bit careful because if you put a fruit, vegetable, then the model will probably want to generate fruit and vegetable, right? And then you're left wondering, OK, what does these labels mean? 
So yeah. So in the functional space example, we actually had a fair idea that we need 20 samples to actually learn the [INAUDIBLE] algorithm. So in general, how do we think it will accommodate a mode of examples that you need in your prompt or to figure out what the task is? I mean, how does that make the scale of the transformer? How many example, should we data prompt to actually figure out the task, and does the scale matter in this? Yeah, so the question is, here, you have 20 examples, and we know that you need 20 examples to learn this function. What about in general? Certainly, for real tasks, I think this is not really a-- I mean, it's a hard and ill-defined question because how many examples do you need to know for translation. I think it depends on the strength of your task prior. One view is that, OK, GPT-3 already knows how to do tasks. All you're doing in in-context learning is prompting, like if you're asking when was someone born, right? 0-shot, you don't know how you want to format the date, but if you see a few examples, you know what the date format should be, and that's all that's happening. So in that view, the number of examples is basically the number of examples to figure out the format of the task, but not this through semantics. And I think that's not an inaccurate view of what in-context learning does, because there's so few examples. I mean, some of these tasks are like crazy, and you can't really figure it out unless you-- so you can think about the examples are there, partly to help you figure out what the task is. And for that, inputs alone probably do-- and outputs, I guess, in the task description, I'll contribute. And then the kind of the real value of the example is to really figure out what the format is. And so maybe you could try to formulate a framework around thinking about it in terms of what is a space of task and how many do you need to nail down, which is maybe very little, because there's just a lot of common things that people talk about. And once you see in input, you kind of know. Like if I see text, if I see code, it's probably going to be a code question. If I see like numbers, it's probably going to be a math question. That gives me a lot of information about the task, and then you need to figure out the format. So yeah, hopefully, that helps. We're over time, so we get to maybe take one more in there. You're the boss. One more. I was going to return earlier to whether or not the conference learning will work for other modalities. I wanted your view on what counts as cheating, so if I took Clip and I transformed the image into a caption using where I'm using representations this way. And if you want to go back to your example, where you were doing these analogies between words, like the sports-vegetables thing, does that still count as in-context learning now, for a different modality? And I'm wondering where do you think compositionality plays a role in defining what in-context learning is. So the question is, if you took Clip and you turned all the images into text and then you prompted GPT-3, would that be considered cheating, to declare that you have an in-context learning system for vision? Yeah, like you can solve a new label [INAUDIBLE].. It's just that you're going to cheat the process of learning the representation. I mean, I think it's a very practical thing to do, and I think there's a lot of work showing that you should be leveraging these building blocks as components. 
I don't think you need to train the mega multi-modal that does everything. I think we have language models, and people have gotten a lot of mileage by prompting composition or chaining things. Socratic models from Google has this thing where the models talk to each other. And so I think from a systems perspective, yeah, that's great. I think it's certainly not answering the question of the emergent behavior. Because it's not introducing anything new. There's nothing new to say about emergent behavior, if you chain things together. Whereas compared to this hypothetical experiment where you train on web pages with the text and the images and now, can you do in-context learning with text and images in maybe both directions, generating text and generating images, that's a much more scientifically interesting question. And if you did that, you would, again, probably, it would lead you to some new emergent behaviors that you wouldn't get if you just had GPT-3 and Clip. Well, all right, let's thank Percy, you guys. All right. Thanks, everyone.
AI_LLM_Stanford_CS229
Large_Language_Models_from_scratch.txt
Hello everyone. In this video you'll learn all about large language models. Let's start with that autocomplete feature on your mobile phone. Did you ever wonder how it works? The suggested word here is "the"; it's the most used word in the English language. Let's type "y" next. There are a number of words that start with "ty"; if you took the shortest, you get t-y-e. This graph plots the frequency of this word over time. It was pretty popular in 1806, but today you'd have to read 20 million words to find your first occurrence of "tye". We can look up these frequencies for every word starting with "ty", and if we sort by word frequency, we have a clear winner. The same approach works for phrases and sentences. You see it every time you start typing into your internet search engine. The search engine scores each query by calculating its frequency, in other words, how many other people have used that query. So is that all there is to language modeling? Indeed, the goal is to assign a probability to every sentence, and frequencies are one way to assign probabilities. The problem is that this frequency approach doesn't allow you to score new sentences. This is a perfectly valid sentence, but one that may not have appeared previously. You may ask, how many new sentences are there really? Given the zillions of internet posts each day, won't we exhaust all possible combinations of words soon? Let's do a back-of-the-envelope calculation. There are well over a hundred thousand words in the English language, and a typical sentence has more than 10 words. That means 10 to the 50th combinations. This is a ridiculously big number; the vast majority of sentences will never be seen by any human. So to really model language we need to do more than just count sentences that already exist. We need to somehow model things like grammar and style. Here's an example from a Nobel Prize-winning poet. Let's try to build a language model that can write like Bob Dylan. We'll start by thinking of this text as a time series, where each word depends on the previous one. Notice that the word "was" appears three times; let's merge all three instances into one. This turns our series into a graph. The other word that repeats is "if"; let's merge these two as well. If we add probabilities to these edges, it becomes a language model, and you can use it to generate text in the style of Bob Dylan. If you start it early and traverse the right sequence of branches, you'll get the original verses. Now let's generate a brand new phrase that wasn't in the song. We'll start at "the" and follow these branches. Here's another example. Hey, these are actually pretty good. They are brand new phrases that sound like Bob Dylan, but other paths give bizarre results, and most just produce nonsense. How can we make this better? For starters, we can use a lot more text to build our model. Here's what you get if you build the model using the whole song. Hmm, these are still pretty weird. The real problem here is that our model is too simplistic. It assumes that each word depends only on the previous word. We can write this as a conditional probability: it's the probability of the current word, x of n, conditioned on the previous word, x of n minus one. You can do a little better if you look at relationships of three words instead of just two. Let's build a table of all consecutive triples. These are called trigrams, and you can use these trigrams to define next-word probabilities based on the two previous words. The text generation results are slightly better, but still not great.
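As a small illustration of the bigram idea just described, count consecutive word pairs, normalize the counts into next-word probabilities, and sample to generate, here is a sketch in Python. The corpus string is a placeholder rather than the actual lyrics; the trigram version would simply condition on the previous two words instead of one.

```python
import random
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count consecutive word pairs and normalize them into next-word probabilities."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return {prev: {w: c / sum(nxt.values()) for w, c in nxt.items()}
            for prev, nxt in counts.items()}

def generate(model, start, length=10, rng=random.Random(0)):
    """Walk the graph: from the current word, sample the next word by its probability."""
    word, out = start, [start]
    for _ in range(length):
        if word not in model:
            break
        nxt = model[word]
        word = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        out.append(word)
    return " ".join(out)

# Placeholder text standing in for the song lyrics used in the video.
corpus = "the road was long and the night was cold and the song was sad"
model = build_bigram_model(corpus)
print(generate(model, "the"))
```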
The problem is that these words can have pretty long-range dependencies. For example, the word "red" relates to "hair", which is three words back, but it also rhymes with "bed", which is 13 words back. If you ignored those rhymes, the song wouldn't work at all. So we need to model functions like this, or even longer ones, to accurately represent language, and these functions are exceedingly complex. There's no way we can model this exactly, but we can try to approximate it. There are a number of ways to approximate a function. You may have heard of Fourier series, which approximates a function using sines and cosines, or Taylor series, which is a sum of polynomials. These are both universal approximators: they can approximate almost any function. Another example of a universal approximator is neural networks. An advantage of neural networks is that you don't need to know anything about the function you're trying to approximate, just input and output pairs. Let's try an example. This function is kind of interesting; let's approximate it with a neural network. We'll use this one. It has five nodes and eight weights. Let's choose an x position on the graph and send that through the network. The first layer duplicates x and multiplies each copy with a different weight. Then each weighted copy is passed through an activation function, an s-shaped curve called a sigmoid in this case, multiplied by a new weight, and then added up. The result is our approximation of f of x, which we'll call y, and we can plot it to see how far it is from the function we're trying to fit. This is our error, and you can see it's pretty big. That's because I generated the weights randomly. So this is just one data point. We can send many x values to our network to generate this red curve. Let's define an error function that adds up all of these differences between the red and blue curve values. We'll use these errors to update the weights. This is called training the network, and if we repeat these update steps thousands of times, we get a pretty good fit. We can think of our error function as a landscape; our goal is to find the lowest point, the basin. That seems easy, right? But what if I didn't show you the function? It's a lot harder now, huh. Okay, I'll give you a few points and also their negative gradients, which point downhill. Now you can just follow the gradients to roll down the hill. This is called gradient descent, and it's how neural networks are optimized. So to train our network we need the gradient of the error function. It's a vector with eight partial derivatives, one for each weight. This turns out to be pretty straightforward to compute for neural networks, and you can do it in a single backward pass through the network. This process of calculating partial derivatives in a network is called backpropagation, and it's the workhorse of neural networks. All right, now, you just told me that neural networks are universal approximators, that you can fit any function. So how about this function, which models language? Well, there's some network that can do it, but it needs enough capacity. As an analogy, suppose we tried fitting our blue curve with this network, which only has four weights. It can't fit that second bump, because there's not enough capacity in the network. And the design decisions matter. Like, what if I used a different activation function? ReLUs are pretty popular, but they give piecewise-linear reconstructions, so you need more of them to fit a curvy function like this. So how do we go about designing a neural network to model language? Stay tuned for part two, where we create an amazing neural network that can generate poetry, translate languages,
and even write computer code.
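The tiny network and training loop described above can be sketched in a few lines of numpy: four sigmoid hidden units feeding one linear output (five nodes, eight weights, no biases), trained by plain gradient descent with the gradients worked out by hand. The target curve, learning rate, and step count are illustrative choices, not the ones from the video, and with so few weights the fit is only as good as the target's shape allows.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # An illustrative target curve (not the one in the video).
    return 1.5 * np.tanh(2 * x)

xs = np.linspace(-2, 2, 200)
ys = f(xs)

# 5 nodes, 8 weights: 4 hidden sigmoid units (w) feeding one linear output unit (v).
w = rng.normal(size=4)
v = rng.normal(size=4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for step in range(20000):
    h = sigmoid(np.outer(xs, w))      # hidden activations, shape (200, 4)
    pred = h @ v                      # network output for every x
    err = pred - ys                   # per-point error (red curve minus blue curve)

    # Backpropagation by hand: the eight partial derivatives of the mean squared error.
    grad_v = h.T @ err / len(xs)
    grad_w = ((err[:, None] * v * h * (1 - h)) * xs[:, None]).mean(axis=0)

    v -= lr * grad_v                  # gradient descent step
    w -= lr * grad_w

final_pred = sigmoid(np.outer(xs, w)) @ v
print("mean squared error after training:", float(np.mean((final_pred - ys) ** 2)))
```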
AI_LLM_Stanford_CS229
Leonard_Susskind_ER_EPR_or_Whats_Behind_the_Horizons_of_Black_Holes_1_of_2.txt
[Music] Stanford University Let's uh begin I'm going to begin with my interests and this is very rapidly going to get into what I'm interested in and what I've been working on but let's uh let's talk in a little bit of generalities first the subject is quantum mechanics and the subject is gravity and the real relationship between them off hand it sounds like maybe there isn't much relationship between them gravity is about the biggest things in the world the biggest and heaviest things in the world oh I see a whole bunch of people coming in all right I'll wait a second uh I don't have my glasses on but I think I recognize it is that BK hi BK he's one of our one of my most important members of sitp I would say also very young um right off hand you might think gravity doesn't have much to do with quantum mechanics quantum mechanics is about the very small and the very light that which is so delicate that when you touch it it does something uh it jumps around and does all sorts of things because it's so small and so light and gravity is about exactly the opposite things which are so big and so heavy that in a sense they're the biggest and heaviest things that physics is about and so what do they have to do with each other not much you might think but they come together in one special place black holes black holes are both highly quantum mechanical objects and of course highly gravitating objects and that's the portal into the connection between quantum mechanics and gravity now the essence of gravity is geometry geometry of space and time the essence of quantum mechanics well various people would say there are different Essences of quantum mechanics but I think we're sort of converging over the years that the essence of quantum mechanics is entanglement the strange difference between classical physics and quantum physics is largely encoded in the phenomena of entanglement and another concept also that we will talk about which is called computational complexity these are the things I'm interested in and what is happening is we're beginning to see the ideas of quantum mechanics be encoded in geometry in surprising ways that we never expected the deepest principles of quantum mechanics the strangeness of quantum mechanics the weirdness of quantum mechanics apparently what's happening is it's all being encoded and understood as aspects of geometry particularly when the Quant when the objects that we're studying Quantum mechanically happen to be black holes so that's what I'm going to tell you about largely what and as I said there are lots of other people who have different thoughts than I do they're all wrong but that's uh oh they are but uh okay so I'm going to begin with a concept that's called ER now who can fill in the rest there's at least three people in the audience that I know can fill in the rest but I'll do it myself equals epr ER and epr stand for names of people and on the left hand side as you might expect an e what does e stand for E stands for Einstein who else could it stand for Einstein on the right hand side e stands for Einstein on the leftand side R stands for Rosen you probably never heard of Rosen he was a minor physicist of uh who collaborated with Einstein sort of one of his assistance so the RS on both sides are the same and the p in the middle is Podolski who was also a a good physicist but a relatively minor one the ER on the leftand side stands for a paper that was written by Einstein and Rosen in 1935 very much toward the end of Einstein's productive career in fact a 
In fact, a lot of people up until recently would have said it had exceeded his time as an influential physicist. EPR stands for a fundamentally quantum mechanical idea: entanglement. The left-hand side, geometry, the geometry of black holes in particular; the right-hand side, a fundamental quantum concept, entanglement. And again it stands for a paper, a paper that was written by Einstein, Podolsky and Rosen in the same year, 1935. The two papers apparently had nothing to do with each other. This one was usually credited as the paper which recognized the importance of entanglement, or recognized the strangeness of entanglement, and the left-hand side had to do with black holes. It has turned out that they are so deeply related that one can almost put an equal sign between them, that they are so deeply connected, joined at the hip: the idea of Einstein-Rosen bridges and Einstein-Podolsky-Rosen entanglement. So I'm going to tell you about that, and then I'm going to tell you, not tonight I don't think, about a concept that is developing, or at least in my imagination it's developing, called computational complexity. The relationship, strangely, is a relationship between concepts from computer science and concepts from quantum mechanics and gravity. So that's what this course is about. Okay, let's start with entanglement. Now, as I said, this course is for people who have the basics, who have learned, let me call it, the theoretical minimum, and I am not going to spend time explaining things which I assume an incoming graduate student, who has had a good education in quantum mechanics and a good education in general relativity, would be likely to know. That's the starting point. All right, so entanglement. Entanglement is a thing that can happen between systems, between pairs of systems. The simplest systems are qubits. A qubit is like a spin: it can either be up or down. Or like a coin: it can be heads or tails. But it's a quantum mechanical version of it, which can be in superpositions of states. And if you have two qubits, the states are labeled either up or down, and you can have an entangled superposition of them. This is an entangled superposition: one of them is up, the other one is down, minus down-up. This is called a Bell pair, and it stands for the state of a two-qubit system. It says, very simply, that the two-qubit system is such that if the first qubit is up, the second one is definitely down, and if the first one is down, the second one is up. There's a correlation between them. If you had these two entangled qubits (incidentally, they could be far apart; you could make the qubit pair and then take them apart; qubit stands for quantum bit, the bit of information as inside a computer), if you made the qubit pair, you could take them far apart from each other, and then, because of the structure of this state, you can say there's a correlation: if Bob over here measures his qubit and it's up, he knows instantly that Alice's is down. This is Alice, Alice's qubit. If Bob's is down, then he knows that Alice's is up. Now, that in itself is not so weird. The way I usually like to describe this is, I give one person a coin which is a penny, the other person a coin which is a dime. They don't know which is which, and I tell them to go out of the room. We never actually carry this out; the reason we never carry it out is because I never have a coin with me. Not having a coin, I feel perfectly free instead to do it with a $10,000 bill and a $1 bill, but this is all air money, I mean. So here we are: here's a $10,000 bill. Art, you get one
of them, and Sanjay gets another one. You don't have to stand up, I'll just pretend. Okay, so I mix them up (they're different, real sugar and chemical sugar) behind my back, and I give one to Art and one to Sanjay, and I tell them to go out of the room. And then the amazing fact is, the second that Art opens up his package and discovers what it is, he knows instantly what Sanjay's is. How can that be? Well, it doesn't seem very amazing, does it? It's just obvious. That's called correlation. Entanglement is a form of correlation, but in a moment I will tell you what the difference is. The state of being correlated, for two coins or two bills, really has something to do with the fact that you didn't really know everything about the system that you could have known, that in principle could be known. In particular, somebody could have done a very, very delicate experiment without anybody even recognizing it: for example, by sending into the room a very low energy radio wave, a radio wave with maybe just a couple of quanta in it, and those quanta might bounce off the objects and record which one was which, without making any difference to the experiment. Classical mechanics is like that: you can measure things, you can look at them without changing them. You have a pair of coins; you can look at the coin without causing it to flip. Quantum mechanics is different: in quantum mechanics, when you look at something, you cause it to change. The peculiar thing about entanglement, the very, very strange thing about entanglement, is that knowing the quantum state in this form is everything that can possibly be known about those two qubits. There is no more to be known. A quantum state like this is the fullest possible description of the two qubits, and yet it says nothing whatever about what either one of them is doing. It is equally likely that the first qubit is up as that it is down. So it is a state of being which is absolutely the maximum you can know about the system, and yet you know nothing about the individual constituents. That sounds crazy from a classical viewpoint: from a classical viewpoint, if you know everything that there is to know about something, then you know everything there is to know about the parts of the system. An entangled state like this is a state in which you know everything about the system that can be known, and yet, in this case, you know nothing about either of the constituents. That's the notion of entanglement. Now, incidentally, for those who know about spin: you could measure the spin along other axes, not just up or down but in and out, or left and right, and it has the same property, that if Alice comes along and, instead of the upness or downness of this qubit, she measures the inness or outness of it, she will immediately know what Bob's qubit is doing with respect to the same thing. She'll know that if she makes a measurement and this qubit is out, then Bob's is in; if this qubit is in, then Bob's is out. So it's something which is correlation, but it's a much more fundamental concept than just the statistical fact that you may not have known which one of these I gave to Sanjay and which one I gave to Art, while somebody could have known. Not with entanglement. So entangled states are very special, and they really do characterize, in many ways, what is quantum about quantum mechanics. Almost everything that's strange about quantum mechanics, really strange about it, the things which are really bizarre about its logic, traces back to entanglement.
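As a small aside for these written-up notes (not something done in the lecture), here is a minimal numerical sketch of exactly this point: the Bell pair is a complete description of the two qubits, yet each qubit on its own is maximally uncertain, and the up/down outcomes are perfectly anti-correlated. The variable names are mine; any standard Python with numpy should reproduce it.

import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# The Bell (singlet) pair described above: (|up,down> - |down,up>) / sqrt(2)
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Density matrix of the full two-qubit system, then trace out Bob's qubit.
rho = np.outer(psi, psi.conj())
rho_alice = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_alice)  # 0.5 * identity: Alice's qubit alone is maximally uncertain

# Sample a few joint measurements in the up/down basis:
# outcomes are random individually but perfectly anti-correlated.
probs = np.abs(psi) ** 2          # probabilities for |uu>, |ud>, |du>, |dd>
for outcome in np.random.choice(4, size=5, p=probs):
    alice, bob = divmod(outcome, 2)   # 0 means up, 1 means down
    print("Alice:", "up" if alice == 0 else "down",
          " Bob:", "up" if bob == 0 else "down")

The reduced density matrix coming out as one half times the identity is the precise sense in which Alice's qubit, taken by itself, carries no information at all.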
Now, can you make macroscopic objects which... oh, there's another thing; let me just say what it is. It has the property that you can find out any piece of information about what Alice is holding by making a measurement on what Bob is holding. This is the character of entanglement: anything you want to know (not everything, but anything you want to know) about what Bob is holding, Alice can find out by measuring her own system, by doing an experiment on her own system. All right, are there macroscopic versions of this? Can you take large scale objects and entangle them, or can you create large scale objects which are entangled in this way? Remember, they have the property that although you know everything about the combined system, you know nothing about its parts, or you cannot know anything about its parts, and you can find out anything about one of them by doing a measurement on the other one. Yes, you can. Here's how you can make a macroscopic system. The first question is, how do you make entangled pairs like this? Well, there's an easy way: you just create particle pairs. You can create particle pairs, for example electron-positron pairs, in the laboratory by colliding photons together; all sorts of ways of doing it. When the electrons and the positrons come out, they are entangled; in fact, their spins are entangled in exactly this fashion. So you can easily create pairs of particles that have this funny relationship between them. So you create a pair (the V here stands for the creation of the pair): one particle goes off one way, the other particle goes off the other way. Well, what do you have? You have two entangled particles, separated from each other, and you put one in a box over here, and you put the other in a box over here. And then you do it again: you create another entangled pair, and now this box has two particles, this box has two particles, and this one is entangled with that one. You keep doing it, over and over and over again, until you fill these boxes with large numbers of particles. There may be enough of them to create an interesting macroscopic object in each of these boxes, and those objects are fully entangled: they have this property that you can find out anything you want about one of them by doing a measurement on the other. So entanglement is not restricted to small objects; large scale objects can be entangled. Now, of course, where we're going is we're going to be talking about black holes. Any questions about entanglement? I would refer you to the lecture notes that I gave on entanglement. There we are.
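To make the two-box construction just described a bit more explicit in these notes (my own schematic; N is simply a label I am introducing for the number of pairs created), the joint state of the left box L and the right box R is a product of Bell pairs:

\[
|\Psi\rangle_{LR} \;=\; \bigotimes_{i=1}^{N}\; \frac{|{\uparrow}\rangle_{L,i}\,|{\downarrow}\rangle_{R,i} \;-\; |{\downarrow}\rangle_{L,i}\,|{\uparrow}\rangle_{R,i}}{\sqrt{2}} \, .
\]

Each factor contributes one shared bit of entanglement, so the two boxes share N bits in total, a number that will matter later when the amount of entanglement gets tied to a geometric area.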
Now, you might think that in order to have entanglement you have to have some particles, you have to have something to be entangled. In fact, even just empty space is entangled. To understand how empty space is entangled, the first thing you have to understand, of course, is that in quantum mechanics empty space has properties; it's not really so empty. It can have virtual particles in it; it has fluctuations of the field. In classical physics the vacuum, the state with nothing in it, the state with no photons or no radiation in it, the electromagnetic field for example, is just plain zero. There is no electromagnetic field. But in quantum mechanics, in empty space, the field fluctuates: sometimes you look at it, it could be positive; sometimes you look at it, it could be negative. Translated into photons: some of the time, in a little box someplace, you might find a photon if you look for it, even though we're talking about empty space, and sometimes you would not find a photon. So if you take empty space and you make measurements on it, you get non-trivial results, not just zero: sometimes you find the particle, sometimes you don't find the particle. And here's what entanglement has to say about empty space. Let's divide empty space into two halves (I'm using the blackboard to represent empty space, the left side and the right side), and let's divide the space into little cells, and in particular I'm interested in cells that are near the boundary. I'm not really doing anything; I'm doing a mathematical, imaginary operation, both in dividing the space down the middle and in making believe that it's made up of little cells. But now we can do a measurement of the field, or we could look for a virtual particle, one of these vacuum fluctuating particles, in here, and we could look in here. What we find is that if there is no particle on the right side, there will be no particle on the left side, and if there is a particle on the right side, there will be a particle on the left side. So we could write the quantum state of these two here by saying: if one of them is empty, call it zero, then the other one is empty; if we find a particle in it, then we will find another particle in the other cell. That looks very much, where did I erase it, that looks very much like an entangled state. Here you can find out, by looking in the left little cell, whether there's a particle here, just by the correlation between them. This is a form of entanglement, and it's a form of entanglement that's there in empty space, between parts of space. Are we assuming any scale between these two cells, are they very close? Yeah, they begin very close, but we can also look at larger ones a little bit further away, and we find that they're entangled with each other, this with this, this with this, this with this, in pretty much the same pattern. And in fact there's a general pattern; it's called scale invariance, or conformal invariance. We find that regions are entangled with each other if they're near each other. This one over here is not entangled with this: to make a measurement of a particle in here, you'll find out nothing about this; it's just too far away. But you will find out that if there's a particle here, there's probably one here. Same thing in this bigger box here: if there's a particle in here, there's probably one in here, and so forth. Okay, so there's a pattern to this entanglement, and it's actually thought (this is a quantum property, but it really is beginning to be thought; I don't know how to describe this except to say we think it's so fundamental) that it, in a sense, holds space together. We know that if we were to destroy this entanglement... entanglement can be destroyed; it can be destroyed by making measurements; there are all sorts of ways of destroying entanglement between systems, simply by looking at the system. Once Art looks at his sugar packet and knows which one it is, he knows what he has, and he now knows what Sanjay has; there's no more entanglement between them, they know what they have. So by looking at the system, Art has changed it into an unentangled state. The same thing can be done in space: you can make an observation of all of these cells here and determine whether there is a particle in them or not, and that destroys the entanglement.
It also does something else: it creates a huge amount of energy in the region where you made the experiment. Remember, in quantum mechanics, when you look at something you change it, you affect it, and in this particular case, if you look at what's in here, you affect the system and you create a huge energy in here. Now, huge energies are sources of gravitational effects. They change the geometry; they change what space actually looks like. And in fact they disconnect the space; they sort of unzip it. Space is zipped together by entanglement. This is one of the things we've learned by thinking about gravity and quantum mechanics at the same time: that entanglement is important, in a sense, for holding the parts of space together. Without it, space would not have its nice coherent structure, its contiguous structure of parts being adjacent to other parts. So entanglement is not only a feature of electrons; it's a feature of everything, including empty space. Now, entanglement has another property, one which is very important. It's called monogamy, the same kind of monogamy that I hope you people all enjoy, that I have enjoyed. What does monogamy of entanglement mean? It's easy to explain. Imagine you have three qubits. We can call them Alice's qubit, Bob's qubit, and Charlie's qubit; Alice, Bob and Charlie are the quantum physicist's way of saying A, B and C. So each one has a qubit. Actually, we could call A Art, couldn't we? Right, but then we'd have to do S for Sanjay. All right: Alice, Bob and Charlie. Now, it could be that Alice's qubit and Bob's qubit are entangled, and that means that Alice can look at her qubit and find out information about Bob's; they're correlated. In fact, anything about Bob's qubit, Alice can find out by looking at the appropriate thing for her qubit. But they will find out nothing about Charlie's qubit if Alice and Bob are entangled. On the other hand, it is possible for Alice and Charlie to be entangled, in which case Bob is the third wheel of the... what do you call it when three people... yeah, menage a trois, or whatever; a love triangle, right, a love triangle. So you can only be... technically I'm talking about something called maximal entanglement, but let's just call it entanglement: when two things are entangled, they are entangled with each other and they cannot be entangled with a third. That's the monogamy of entanglement, and it's a deep and basic principle of quantum mechanics that entanglement is monogamous. Okay, now I've actually taught you enough to explain a paradox. The paradox is a very deep paradox; among physicists it's sometimes called the AMPS paradox. AMPS does not stand for a unit of electrical current; it stands for four physicists: Almheiri, who was a graduate student and who's now here as a postdoc; Marolf, who's a famous general relativity expert; Polchinski, who's just a famous physicist; and Sully, who was also a graduate student who contributed to the study. They discovered what looked like a very fundamental paradox about black holes and entanglement. So I'll tell you what that paradox was. You start with a black hole. A black hole has a horizon; the black line represents the horizon of the black hole. Inside is the interior of the black hole, outside is the exterior: interior is inside, exterior is outside. And according to Einstein (when I say Einstein, I mean according to the general theory of relativity), the horizon is just a point of no return.
Things can fall through the horizon, and when they pass through the horizon they just find empty space: empty space, vacuum. The horizon, or the vicinity of the horizon, on both sides just looks like a continuous space in which both sides are just the opposite sides of a vacuum, of empty space. So this means somebody falling into the black hole will simply experience nothing special. Now, what we've already found is that even if things are like empty space, nevertheless there is this entanglement. Now, these things are not easy to measure; you don't go and measure a vacuum fluctuation with a... I don't know, what do you use to measure things? A ruler, or a... a multimeter? A multimeter, right. You've got to be delicate; it's hard to do, but you can do it. So who do we send into the black hole? This gets to be a question now: do we send Bob into the black hole, or Alice? I usually send Bob into the black hole. I used to send Alice in, and then I got a lot of complaints about it, so we're going to send Bob into the black hole. Now, Bob is going to do experiments as he falls through. Maybe he's even going to make an experiment like one of these experiments here, and he ought to find that the properties of space are very, very similar to what they would be in the empty vacuum, the empty vacuum not really being so empty. So if that's true, then as he falls through, he might make a measurement of what's going on over here (he might look, for example, for a virtual particle), and then he might also make an experiment over here, and these two experiments, or these two degrees of freedom, should be entangled. Why? Because empty space is entangled in this way. So let's label the things that Bob could measure: let's call this one B and this one A. These do not stand for Bob and Alice; they're just in front of the horizon and behind the horizon. I use B and A because we've been using B and A for these for a long time. They could stand for Bob and Alice, but we already agreed that Bob was the one who went into the black hole, so it's screwed-up notation. That's not Bob, that's just a little vacuum fluctuation over here, and the other one is a little vacuum fluctuation over here, and they ought to be entangled. But let's suppose that, for one reason or another (and I'll give you a reason in a moment), this black hole happens to be entangled, highly entangled, with some other system over here. How could that be? How could it have gotten entangled? Well, let me give you an example. The example that AMPS used was that this black hole might have evaporated for a period, and if you collect together all the evaporation products, they will be entangled with the original black hole. But we don't need to do that; we could do it more simply. We could imagine, since we're in the business of gedanken experiments, thought experiments, and the purpose of thought experiments is to analyze the consistency of principles, that we can build these black holes to be entangled in the first place. I showed you a little while ago how you can create two macroscopic objects which are entangled: you simply create them out of material which was entangled to begin with, particles which individually (this one with this one, this one with this one) were entangled. So you do the same thing: you collect these entangled particles in buckets, each particle on the left entangled with something on the right, and then you create enough of them
that gravity pulls them together and makes a black hole out of each side. That's something we think should be, in principle, possible. If we do that, what we wind up with is two entangled black holes. We have two entangled black holes, which means everything about this black hole, whatever it is, anything you might want to know about this black hole, you can measure by making a measurement in this black hole. These two could be arbitrarily far apart. These entangled pairs could have been separated from here to the moon, or (that's not very far) from here to Alpha Centauri, or from here to 10 billion light years away. They're still entangled, and you can find out anything you want about one of them by looking at the other, carefully enough. Okay, so let's do that. Let's assume that somehow we've made these two entangled black holes. Now, this is Alice's, this is what Alice has control over, and Bob is over here someplace. Now we have a problem. We have a problem because I told you in the beginning that this black hole, everything about it, every little bit of it, is entangled with something in here. But I also told you that, for the horizon to make sense and to look like empty space, the things just on this side of the horizon have to be entangled with the things on the other side. And now we have a monogamy problem. The monogamy problem is that B is a bigamist, and bigamy is forbidden not only by law but by physical law. So something's wrong. One possible answer is that if a black hole is created entangled with another object (it doesn't have to be another black hole; it could be anything), then something happens to the interior of this black hole and wipes it out, creates nothingness in here. By nothingness I don't mean empty space; I mean nothing. And that means that if somebody fell into it, they couldn't fall in, because there's nothing there. That's called a firewall: a firewall of the kind that creates an absolute barrier, so that nothing could go from outside the black hole to inside. No inside, because if there was an inside, and in particular if it was possible to pass smoothly through here as if the horizon was empty space, this would have to be entangled with this, but by assumption it's entangled with that. So we have this problem. And the conclusion that AMPS (Almheiri, Marolf, Polchinski and Sully) drew was that what this means is that if a black hole does, for some reason, become entangled with another system like this, it just doesn't have an inside. There is no inside. On the other hand, everything we knew about general relativity, everything that Einstein taught us, said no: the interiors of black holes exist, they make just as much sense as regions of space as the outside, and somebody falling should fall in smoothly. So this is a paradox, and it's a real paradox, a genuine one. I don't think too many people, including the inventors of the paradox, are at all convinced that the solution, the firewall solution, is right. But it's a real paradox, and it's a thing that has driven me for the last two years. This is a conflict of principle, a conflict of principle between Einstein's general relativity on the one hand and quantum mechanics on the other hand. Conflicts of principle like this, when they're unraveled and when one understands what's going on, often lead to the biggest advances in our understanding of physics. So this can't stay that way.
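To compress the clash just described into symbols (my own shorthand, not notation from the blackboard): write b for a mode just outside the horizon of Bob's black hole, a for its partner mode just inside, and R for the distant system, here Alice's black hole, with which Bob's black hole was built to be entangled. Then

\[
\text{smooth horizon:}\;\; b \ \text{maximally entangled with}\ a,
\qquad
\text{by construction:}\;\; b \ \text{maximally entangled with part of}\ R .
\]

Monogamy says a single mode cannot do both, which is exactly the bigamy being complained about. Giving up the first relation is the firewall option; the alternative explored below is that a and the relevant part of R are not actually independent systems.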
You can't just leave it and say, oh well, okay, it seems like a paradox. It's really at the core of things, at the core of the relation between gravity and quantum mechanics. Okay, so let's see what the possible solutions are. Can you say again, quickly, how B became entangled? Well, we're postulating somehow that B... remember, when a thing is highly entangled like this, and this was created in this very entangled state, you can find out anything about what's going on in this black hole by looking at what's going on in this one. By construction: we constructed it that way. So we constructed the black hole in such a way that B is entangled with something in Alice's system here. That means there's something or other that Alice can measure in her system that would tell her about B. On the other hand, if the horizon really is smooth, if the horizon really is like empty space, then the properties of quantum field theory, the properties of just the quantum mechanics of empty space, say that B has to be entangled with A. So B is one of the black holes? No, B is not a black hole. B is just one of these little cells, just one of these little cells that somebody could make a measurement on, just a little piece of empty space. A little piece of empty space: as Bob falls through the horizon (here's the horizon, right over here), he makes a measurement of whether there's a particle here, and then whether there's a particle here, a vacuum particle, a vacuum fluctuation, and those things, in order for the vacuum not to be messed up badly, would have to be correlated and entangled. That's what we learned by understanding quantum field theory. Okay, so if the horizon is what we always thought a horizon was, just a point of no return, but not in itself some sort of terrible environment, then this little region of space over here has to be entangled with this little region of space over here. On the other hand, by construction, everything about this black hole was entangled with something in Alice's system over here, which could be another black hole; let's take it to be another black hole. B can't be entangled with both, because of the monogamy property. A is behind the horizon, and in our current understanding, our thinking about black holes, the basic quantum state is composed out of things outside the horizon, and the interior is built in some other way. But you might just say that everything about this black hole, including B, has to be entangled with something here, and therefore how can it be entangled with A? Now, there's one way. These are bits of information; they're logical bits, if you like, logical possibilities. You can say, does A have a particle in it or doesn't A have a particle in it; you can just say it's like an abstract bit of information. Could it possibly be... what's that? Which one has to be A, the left-side one? The left-side one. If this weren't a black hole, we wouldn't be talking about its interior, we wouldn't be talking about its horizon. What makes the black hole different from... sorry, what makes the black hole different than if I consider two points near the normal gravitating Earth? Those two points should be correlated, and Earth is created out of some... So you say... oh, I see what you're saying. You're saying, for example, supposing this was just empty space, and this has to be correlated with that; is that what you're asking? But supposing we knew that this empty space was also entangled with
something over there, the answer indeed would be: if we knew that this was entangled with something over here, then we would know that it wasn't entangled with this, and we would expect a mess in here, a terrible mess in here, which would not look like empty space. It's very, very difficult to unentangle these two things here. But it's a puzzle for black holes, because black holes have horizons, and horizons are supposed to be smooth; from all we ever learned about black holes, they're supposed to have these smooth horizons. If we took just a chunk of matter and we did this, we'd probably make a big mess out of it over here, which is exactly what we don't want to do for the black hole. So the question is, is one possible answer what AMPS said: that a black hole which is entangled with another thing develops properties which are very different than what Einstein said black holes are like? Now remember, one way of entangling the black hole with something else is just to let it evaporate for a while. Black holes evaporate: Hawking radiation. As they evaporate, incidentally, this is something that any system will do; it will eventually evaporate. It could be a puddle of water; a puddle of H2O will also evaporate. Let's suppose it's completely isolated, out in outer space, a little puddle of water, and the little droplets will go off. The little droplets will be entangled with the puddle; as the puddle shrinks, it will become entangled with the outgoing products of evaporation. Eventually, when there's enough evaporation product out there, the puddle here will be highly entangled with what's out here. Right, so the way that AMPS imagined it, they imagined this object over here was really just the Hawking radiation that went out. Now, you could take that Hawking radiation and collapse it into a black hole; it would still be entangled. Would it work with the H2O example? I mean, you're saying it's entangled with all the evaporation? Yeah, it would just actually be entangled with the outgoing radiation, and different parts in here would not be entangled the way you might have expected, and it would just be true; but there's no such thing as a horizon there, and there's no reason to think that there ought to be a smooth horizon. So it only comes up as a crisis when you think about black holes. Again, it could simply mean that the black hole doesn't have an interior. Nobody wanted to believe that; I think very few people do believe that the black hole doesn't have an interior, and it certainly violates the principle of equivalence, the principle of equivalence of Einstein. And so the question is, is there no other way out, is there no way that this somehow could be misleading? Well, there's one way out. As I said, these are logical qubits, these logical bits of information, and a logical bit of information over here, which is the thing we called A, is supposed to be entangled with a logical bit of information over here, despite the fact that the logical bit of information over here is entangled with something else; let's give that something else another name, B Prime. B and B Prime are entangled; we built the thing that way. B is supposed to be entangled with A, but then something's wrong. There is one way out, and that's that A and B Prime are the same thing. A and B Prime really are the same thing. Is that possible? Is that completely nuts? AMPS said that's too crazy, they're too far away: this object could be off on Alpha Centauri, this black hole, let's say, on Earth, and
the thing which is right behind the horizon over here is clearly near Earth, it's not near Alpha Centauri, so it doesn't really make sense to say that A is really the same object that's out here. But they also had an experiment that they imagined. Here's the idea: the idea is that a bit of information just behind the horizon here is really the same bit of information as over here. Okay, so when various people said this, and said maybe this is what's going on, the answer of AMPS was very, very clear. They said no, that's complete nonsense, and here's the way to prove that it's nonsense. Supposing A really was the same as B Prime, and B Prime is off on Alpha Centauri. Well, Alice, who is at this end, could make a disturbance of B Prime; she could measure something in B Prime, and measuring things disturbs them. So by Alice manipulating B Prime, which is in this black hole someplace, the thing which is entangled with B, by measuring it, first of all she has found out something about B, but by assumption she has effectively measured something just behind the horizon of the black hole. Well, that's crazy, because by making a measurement of something you disturb it, and the result would be a disturbance just behind the horizon of the black hole, AMPS said, and the disturbance means some energy deposited there, some energy deposited there by the measurement; any time you measure something you tend to change its energy. So by measuring what was out here, you made a disturbance over here. Well, how crazy: one of them is off on Alpha Centauri, the other is over here; that can't be right. Or you can say one of two things: by measuring what's out here, perhaps you did create a particle behind here, but as I said, that's a little bit crazy, it's too far away. So if there was a particle behind there as a consequence of Alice doing something over here, that particle must have been there whether or not Alice made her disturbance; she's just too far away for it to matter. And so the conclusion was that there must be loads of particles behind here, one for each possible measurement that Alice could possibly do on this thing. That's what was called a firewall, and that was the basic paradox, the unhappy paradox, that AMPS concocted. There was one assumption here, and it seemed so obvious that it was just taken as obvious: namely, that there's no causal connection between this black hole, or the environment of this black hole (I draw a little bubble over here to indicate the things connected with this black hole), and the things which are behind the horizon over here. In other words, no causal connection means there's no way of disturbing things over here that will show up over here. And that seemed obvious to everybody, except it seems that it's wrong. It seems that it's wrong, and to understand why it's wrong, we need to think about the other half of the ER equals EPR connection. This was all about entanglement in the context of black holes, and a puzzle about entanglement. The other paper from 1935 had nothing to do with quantum mechanics; it had to do with the geometry of the interiors of black holes. ER invented something called an Einstein-Rosen bridge. I don't think they called it an Einstein-Rosen bridge; it was called an Einstein-Rosen bridge afterwards. And what they did was they looked at the mathematical solution of Einstein's equations, the Kruskal solution. I don't know how they managed to look at the Kruskal solution well before Kruskal was born; I'm not sure, but somehow they did, and they knew enough about general relativity. Einstein was pretty good at general relativity.
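For readers following along, the solution being referred to is the ordinary Schwarzschild geometry; writing it out here is my addition, not something on the board:

\[
ds^2 \;=\; -\left(1-\frac{2GM}{c^2 r}\right)c^2\,dt^2
\;+\; \left(1-\frac{2GM}{c^2 r}\right)^{-1}dr^2
\;+\; r^2\,d\Omega^2 .
\]

What Kruskal's maximal extension of this metric shows is that it contains two asymptotically flat exterior regions glued together behind the horizon, and that two-sided geometry is exactly the Einstein-Rosen bridge about to be drawn as a Penrose diagram.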
For those who know what a Kruskal black hole is, it's basically a black hole, and I'm going to draw the Penrose diagram of it. Now, here we're coming to things that I'm going to assume you know a little bit about, but I'll explain them a little as we go along. That's a spacetime picture: time going up, as usual, space going horizontal. It's a spacetime map of a black hole. Let me explain: on these kinds of diagrams, light moves at 45 degrees, either outward or inward. If you're out here, this is spatial infinity. When I say it's a map of spacetime, it's distorted, basically; it's distorted in the same way that a Mercator projection of the Earth is distorted: it's squeezed around in such a way as to be able to put all of spacetime on the blackboard. So this is very far away over here, off at infinity; this is time infinity, clocks go to infinity up here. Here's the horizon of a black hole, and here's the exterior of the black hole. The only thing you should notice is that if some poor person happens to fall through this line here, they are trapped. This is the singularity of the black hole, and it's a bad place; it's a place where you won't survive. This is the horizon of the black hole: if you fall through there from the exterior region, you're dead. You're doomed, anyway; you may not be dead, but you're doomed, you're going to hit that singularity there. Now, this is the exterior of the black hole, but strangely, the black hole seems to have two exteriors, one on this side and one on this side, and you can fall through from this side or that side. Well, it's really two black holes: it's a black hole on this side and it's a black hole on this side. They're, in a certain sense, infinitely far away from each other; you can't go around from one side to the other, you'll never get there. Here's a picture of what this Kruskal extension is, and in what sense it's two black holes. It really represents two disconnected regions of space, completely disconnected. Here's a region of space, or a space (that stands for space, not spacetime, just space); that's the right side. The left side is another region of space, and connecting them, you can go from one to the other by passing through something that John Wheeler called a wormhole. When you walk from here to here, right through that point, you walk through the horizon of one and you show up just outside the horizon of the other; that's over here, there's the horizon. So falling in, or crossing over, not falling in but crossing over from here to here, is crossing from here to here. Einstein and Rosen drew this picture; somebody called it an Einstein-Rosen bridge, and it stood for this diagram over here. It's two completely disconnected worlds that are only connected by virtue of this wormhole. Okay, it's a real solution of Einstein's equations. It exists; it exists mathematically. It has some properties. First of all, the region over here, as you cross from here to here, is pure empty space; nothing strange going on there. If you were able to pass from here to here, you would find nothing strange there; it's just empty space. That implies that there is entanglement between this side and this side, entanglement across here. Wouldn't that mean exceeding the speed of light? Yeah, if you actually wanted to walk across there, yes, you would; but we're not really walking across there, we're just talking about the properties of space from one side to the next. I don't really imagine walking across there; I
imagine a series of observers over there making measurements and then they could correlate their measurements by um by sending signals into the interior here so they could ask somebody on this side and somebody on this side making measurements whether this side and this side are entangled they're supposed to be entangled if Space is really just flat over here space is just empty over here and that says that in this picture of what's going on at an instant there must be entanglement between one branch and the other branch in other words to put it another way this represents two entangled black holes one over here and one over here now there on two completely disconnected spaces you can't go around the outside forget going through the interior can you ever go from here around to here no because they're just disconnected they're not to they're not a connected space but you can do the same thing in another way or or a very closely related thing imagine a very big space but I'm going to draw it in a funny way here's space okay it's big I'm just going to fold it like this now that's not really done anything to it it hasn't change the geometry the geometric relations it's just a way of exhibiting it if I had space very big and I might want to pull it all into this room you know big sheet of cardboard big huge sheet of cardboard I might want to bend it over like that to get it into the room it doesn't change any of the relationships on the cardboard so we do that we take we take a big region of space and we bend it over let's see if I can do this right you can see the picture I haven't really done anything to it I've just exhibited it in a way that uh that gets it all in a small region here and then we take one of these Einstein Rosen Bridges and put it between here and here these are as far as the external World goes these are two black holes very far from each other you might have to go a zillion light years to go from here to here around the outside but strangely it has the property that you cannot go through the Einstein oh this is one property of Einstein Rosen bridges that is important important you can't actually pass through them you can see that where's our diagram here's our diagram can you get from this side to this side can you get from here to here at all no point on this side over here can send a light signal that will ever get to the side no point can send a signal that will get to the other side but what can happen is they can meet at the center so the meaning of that is although you cannot send a signal through the Einstein Rosen Bridge somebody could jump in from the side and jump in from the side and meet at the center same thing here if somehow this kind of thing could be manufactured then these two places could be as far as the external World goes it could be 10 billion light years or more around there light years and yet if Alice jumps into her black hole and Bob jumps into his black hole they have a chance of meeting at the center that's a very crazy thing but it's a consequence of Einstein's equations what do you need to do to make two black holes that have this kind of connectivity the answer is they've got to be entangled if they're entangled then they will have exactly this kind of connectivity so if you can make black holes which are entangled and you can you can make them by taking entangled collections of particles and collapsing them into black holes they have the astonishing property that in principle no matter how far apart you take them you can do things to them 
and then jump into them and meet at the center, and not in a terribly long time, but almost immediately. This was always considered a kind of science-fictiony thing, but it is an actual property of Einstein's equations together with the ideas of entanglement: if you could make two entangled black holes, they will have this property. Okay, so now we can come back to this crazy entanglement story. Let me draw one more way of thinking about these entangled black holes which are very far from each other. Let's not fold over space in that way; let's just take flat space, here it is, big, and put a black hole over here: here's the inside of the black hole, here's the horizon; here's the inside of the second black hole, there's the horizon. What we're talking about, the geometry with the Einstein-Rosen bridge, is a geometry such that if you fall into here, you come out over here. Let's redraw it. Well, you can see: if you fall into one, you appear in the interior of the other black hole. This point and this point are not really far apart; they're both inside. But Alice and Bob can jump in and discover that. Right, the one thing you can't do is use this to send information from outside this black hole to outside this black hole; that will not work. But you can use it to send information from outside this black hole to the inside of this black hole. The only way Bob, on this side, would ever know about it is if he was willing to jump into the black hole. This kind of connectivity, first of all, is a real property of solutions of Einstein's equations; it appears that it is closely connected with whether the black holes are entangled, and it basically resolves this puzzle. It basically resolves this puzzle: by mucking around with things over here and doing disturbances... in fact, what will Alice do? If she makes a disturbance over here, trying to measure something, odds are she will disturb something and send something into her black hole over here, and if the two black holes are entangled, the disturbance will appear right on the other side here. So is that the solution to the paradox you mentioned? That's the solution; that is, when I say it's the solution to the paradox, I do not mean to say that everybody accepts it. It means that, yeah, the paradox gets resolved, and I'm just trying to... Not if B and B Prime are the same, if they're close and can influence each other easily? No. So, carrying it over to the spacetime picture, does it mean that therefore if you jump in on A, you come out of B? If you jump into A, you will not come out of B. You see, we come back to here: if you jump in from this side, you will not get to the outside of the other one; all you'll do is get into this side, into the black hole on this side. On the other hand, if Alice also jumps in, or Bob, I don't remember which one is Alice and which is Bob, they can meet at the center. Okay, not unless you're willing to jump in, right. Would the main point be that you can never know if this is true, because we're assuming we're outside the black hole by definition, so it's an irrelevant experiment, or a thought experiment, in some ways; we can never know, the information can't be verified? You can never resolve the entanglement, the information can't flow out, unless you're willing to jump in, but then you're inside. Yeah, nobody's going to do this experiment. We're trying to unravel an inconsistency, an inconsistency between two principles,
and what we're finding is there's no inconsistency, but what you have to accept is that there can be this kind of connectivity, spatial connectivity, surprising, unexpected spatial connectivity, as a consequence of quantum entanglement. Is a nonzero amount of spacetime required for entanglement to be present? I think what this is saying is that, in a certain sense, entanglement builds spacetime: any time you have entanglement, a bit of this kind of thing happens and builds a kind of spacetime between the two things. Now, if they're just electrons and they're entangled, nobody's going to jump into an electron; you can't get Alice and Bob to jump into different electrons at different places, and the only content of the entanglement is the usual content of entanglement. What this is saying is that somehow, when entanglement gets big enough between big enough macroscopic objects, when it becomes entanglement between big macroscopic objects and those macroscopic objects are made very, very dense, that entanglement turns into spacetime, turns into new regions of spacetime that were not there if the systems were not entangled. Now, I think this is not too contentious; I think most theoretical physicists who are working on these things more or less believe this. But naively, you're saying essentially that entanglement is more fundamental, so entanglement creates spacetime; but can there exist things in spacetime that are not entangled? There can exist things in spacetime which are not entangled, and there will be no spatial connectivity of that type. You can have two black holes which are not entangled; absolutely, you can have things which are not entangled, and two black holes which are not entangled will not have this peculiar feature of a connectivity between them like this. Bob will jump into this one, Alice will jump into this one, and there is no chance at all that they will meet at the center. Doesn't that diagram assume that the black hole existed from the beginning of time, minus infinity? Good point, good point, yeah. This diagram, yes. But you can imagine that the upper half of this diagram represents the following process. You started over here in a sort of neutral position, neither on the left nor the right, and you started creating entangled pairs, lots of entangled pairs: electrons and positrons if you like, any way of making entangled pairs of particles, and that's easy. And then you take half of each pair and bring them over to here, and half of them over here, and remember that these are entangled with these; let's represent the fact that they're entangled by drawing lines between them. This is simply a notational device to say this particle is entangled with that one, this one is entangled with that one, and so forth and so on; no content to this other than whatever content is supposed to be associated with this entanglement. Then, having done that, we now collapse them into a black hole. We make enough of them so that either their own gravitational field will cause them to gravitate and form a black hole, or we compress them, we put them between the jaws of a vise and we squeeze them down until they form a black hole. And lo and behold, these vertical lines here effectively coalesce into something, when there's enough of them, and they form a new region of space. It's the emergence of a new region of space out of quantum entanglement.
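For completeness in these notes, the standard quantum state people write down for a pair of black holes entangled in exactly this way is the so-called thermofield double; the formula is my addition, quoted from the usual construction rather than from the board:

\[
|\Psi\rangle \;=\; \frac{1}{\sqrt{Z}} \sum_{n} e^{-\beta E_n/2}\, |n\rangle_{L}\,|n\rangle_{R},
\]

where \(|n\rangle_L\) and \(|n\rangle_R\) are matching energy eigenstates of the left and right black holes, \(\beta\) is the inverse temperature, and \(Z\) is the partition function. Tracing out either side leaves an ordinary thermal black hole, which is the sense in which each one, viewed alone, looks fully equilibrated.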
That's what these things are saying: we're learning that enough entanglement, in a small enough volume, will manifest itself as a new region of space, one that now has the possibility not of communicating from the outside to the outside (entanglement never allows that), but of communicating between the outside of one and the inside of the other. So would you have an Einstein-Rosen bridge of about one Planck area? Yes, that's a good question, that's a very good question. You start with two black holes which are unentangled, and you take... entanglement is not a thing which is either yes or no; there's a magnitude of entanglement. The magnitude is called the entanglement entropy, but it's basically a measure of how many entangled pairs are shared between the two sides. Right, if you took one entangled pair and dropped it into here and into here, then yes, you would make a very, very primitive Einstein-Rosen bridge, for which there is no particular meaning other than the fact that they're entangled; but you could draw it by saying there's a very simple, primitive, Planck-scale, tiny microscopic bridge between them. But I assure you nobody is going to jump into there and meet at the center. So you can ask how many you actually have to assemble before you can make something big enough that people could meet at the center, and the answer is this: the number of entangled pairs manifests itself in the area, in the minimal area of the bridge. So if you want to make a thing with an area that's big enough for you to fit into, you have to have started with enough entangled pairs that you'll make a big enough area, something like, let's see, I think 10 to the 76th of them, 10 to the 72 of them, in order for you to fit in through the bridge. So that's right, exactly; it's not easily done. Does every element in the universe have an entangled partner somewhere? Not particularly, but probably in practice yes; it only has an appreciable effect on large scale spacetime if the entanglement is strong enough. Only when you collect enough entanglement like this and collapse it do you make anything resembling ordinary spacetime, but yes, I would say yes, that is right. The idea that time goes to zero at the horizon? That's just a statement that that's a point over there; it does play a role, but the main role that it plays is saying that you can't escape from the inside to the outside, or better yet, you can't pass from the exterior of one to the exterior of the other. All right, so the lesson to learn here is that the connectivity of space can be more interesting, more involved, more topologically interesting, as a consequence of entanglement, and that does resolve the AMPS paradox. Whether AMPS themselves would agree... I know the answer to that: they agree when I'm around. That's all I know for sure. Where is the singularity on the...? Oh, now, this is kind of a slice through the center here. Now you can ask what happens; all right, good, there's an interesting issue here of what happens as time evolves. This is the slice right through the middle here, which you can sort of think of as what happens when you first collapse the particles into the black hole: when you first take the particles, and then you collapse them, and you make this thing that's just straight through here. So let's look at it.
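As an aside, here is a rough version of the counting behind the ten-to-the-seventy-something remark above, using the usual identification of entanglement entropy with the bridge's minimal area in Planck units; the arithmetic is my own back-of-the-envelope, not a figure from the lecture:

\[
S_{\mathrm{ent}} \;\sim\; \frac{A}{4\,\ell_P^{\,2}}, \qquad \ell_P \approx 1.6\times 10^{-35}\ \mathrm{m},
\]

so a bridge with a cross-section of order one square meter, big enough for a person, corresponds to something like \(10^{69}\) to \(10^{70}\) shared entangled pairs, the same general ballpark as the numbers quoted a moment ago.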
Starting here, I'm not literally walking, I'm just moving my finger from one side to the other, and going from here to here. What you're doing is going out from here, through here (that's this point over here), and then going through and coming back out; but you can't do that without exceeding the speed of light. Okay, nevertheless, there's an imaginary excursion walking across there, and that's this. So this picture is what happens at t equals zero, when the horizons are just touching. Okay, what happens later, what happens as time goes on? We're going to talk about this next time in greater detail, but as time goes on, the interesting part of the Einstein-Rosen bridge is the part behind the horizon: once you fall through here, you're stuck in the interior of the black hole, you can't get out, you're going to hit the singularity. So this is the extended horizon over here, this is the extended horizon on the other side, and the interior of the Einstein-Rosen bridge, or the interior of this object, is what's in here. Now notice an interesting property: as time goes forward (let's just draw time going forward like that), the Einstein-Rosen bridge grows. It started with the horizons touching each other; let's draw it this way, right over here the horizons are touching. What happens is you move in, you get to this point over here, that's right over here, and then you go out the other end. Okay, at that point, at time t equals zero, the Einstein-Rosen bridge is as short as it can be. As time goes forward, the horizons start to separate, the two horizons of the two black holes start to separate, and they leave between them a region of space. So what does that look like? It looks like this thing starts to grow, it starts to stretch; it doesn't shrink, it starts to stretch. If you work out Einstein's equations, you will discover that the solution grows and grows and grows, and in fact, in going from up here to across here, it's infinitely far, way up in these corners here. So as time goes forward, this Einstein-Rosen bridge grows, and its growth is one of the reasons that you have such a hard time sending a signal completely through the Einstein-Rosen bridge. You can't, and it's because, as you're trying to send the signal through, it's growing, and it's growing fast; it grows basically with the speed of light. So the Einstein-Rosen bridge grows with time, and it's a very interesting question what properties of the quantum mechanics are encoding this growth of the Einstein-Rosen bridge. This is the problem. All right, the resolution of the original AMPS paradox is essentially that there is a causal route, through the wormhole if you like, a causal route from the edge of this black hole over here to the interior of this black hole; it's sort of into this one and out over here. There's no problem left once you buy into that. Okay, but there's still something interesting to understand: what is it about entanglement, what is it about quantum mechanics, what is it about the nature of these systems, that encodes the fact that the Einstein-Rosen bridge grows with time? It grows, it doesn't shrink. And how long does it grow? Classically, according to classical general relativity, it grows and it grows and it grows forever, getting longer and longer, with its length being proportional to the time. That's a puzzle, that's another puzzle, it's a new puzzle, and the reason it's a new puzzle goes something like this. Black holes are thermal objects; they're objects with a temperature, with an entropy, and they're in thermal equilibrium. Well, they don't start in
thermal equilibrium: when they're created, they're out of thermal equilibrium, but thought of as quantum mechanical systems, they're systems which are thrown together and very quickly come to thermal equilibrium. The time that it would take a solar mass black hole to come to thermal equilibrium is basically about a millisecond. A big solar mass black hole, a couple of kilometers big, when it collapses, is way out of equilibrium. Why? Because the sudden collapse is a sudden process, way out of thermal equilibrium, and then the internal dynamics of whatever it is that's making up the black hole equilibrates, and it takes about a millisecond for a solar mass black hole to come to thermal equilibrium. Very, very fast. And then what happens to it? Let's assume nobody is poking around at it, but you're watching it from the outside, and you're describing it by quantum mechanics, by a combination of quantum mechanics and whatever else describes it. What do you find happens after that millisecond to the black hole? The answer is nothing. Once things come to thermal equilibrium, nothing happens. That's it; that's about all that ever happens. Once you come to thermal equilibrium, the evolution of systems just ceases, unless you disturb them again, unless you do some disturbance to them. It's also possible for accidental fluctuations to take place, a little, around thermal equilibrium. My favorite example is the air in the room. Let's take all the people out of the room, so that we don't disturb the system, and start with some very strange configuration of the air; the usual configuration people like to talk about is all the air in the corner of the room. That's way out of equilibrium; that's like the black hole just being allowed to collapse suddenly. And then what happens? The air eventually fills up the room uniformly, and then nothing happens after that. All sorts of interesting things happen in between, but nothing interesting happens after that; the air just fills the room. So thermal equilibrium is usually thought of as the end of interesting things happening, and it happens fast. In this room it would probably take a couple of days for the air to really equilibrate, but in a black hole, because of the compactness of it and how strongly the different pieces of it interact with each other, it happens very fast. So after a very short amount of time, somebody looking at this black hole will discover that it simply sits there: if it was created by a process like this process of collapse, they'll discover nothing happens afterwards to the black hole, looking at it from the outside. If that's the case, that nothing interesting happens after thermal equilibrium, then how do we account for the fact that the interior of the black hole grows and grows and grows, forever? What's going on that simultaneously allows us to say that, from the outside, the black hole thermally equilibrates and then nothing happens, and yet, in that same quantum mechanical system, something is going on that says the interior geometry is evolving and evolving and evolving, essentially forever, and growing? How do we understand that? That's another paradox, one that I think we'll take up next time, and it's the paradox of complexity, of quantum complexity. Now, as I said, I'm telling you what I think; the purpose of these lectures is for us to lay out what we're thinking about here. What you will find, of course, is that ongoing research, as it
happens generally does not involve consensus between everybody. So this is not a thing which has reached the stage of consensus, but it is what I think. This is going to go into a historical record, a record of the research that's performed at SITP by the various people who do it, and a hundred years from now historians will come and look at it, and what will they say about it? We'll find out. Hopefully this record will be of some interest to people; it'll be public, it'll be available, and I'm hoping it will be interesting.

Anyway, let's see, should we go on a little bit? Yeah, let me tell you a little bit more: how do you make entangled black holes? Well, I told you one way to make entangled black holes: again, take a bunch of entangled particles and then collapse them. But it's by no means obvious that that makes two black holes with a wormhole between them; that's a sort of guess. There is, however, a mathematical or physics-oriented process where, when you actually solve the equations, you do discover that you've made two black holes with a wormhole between them, and this is an old process. By a process I don't mean, could you do this in the laboratory? In principle, yes, this could be done in a laboratory; it would be very hard, would take a long, long time, and would involve very difficult conditions. But I'll tell you what the process is.

You start with a very primitive and simple version of it. It was discovered by Julian Schwinger, one of the great discoverers of quantum electrodynamics, and it's called pair creation in an electric field. Pairs means electron-positron pairs in an electric field. You take an electric field, a big one: capacitor plates, plus plus plus on one side, minus minus minus on the other, and you put enough charge on them to make a very, very big electric field. Now, one of the problems with this is that you couldn't make such a big electric field in a laboratory; it would simply discharge itself. You'd blow up your laboratory if you tried. But in principle you make a very, very big electric field of very sizable value (you can also do it with a magnetic field). And what will happen is the vacuum fluctuations do the work. Remember, there is constantly, in the vacuum, the creation of pairs of particles, in particular electrons and positrons, virtual particles, a minus charge and a plus charge; they form and then they collapse, they form and they collapse, constantly, an ongoing thing. So supposing over here, right in the middle, a minus charge, an electron, and a positron were created. Left without the electric field, they would just recombine, and these processes take place continuously in empty space. But the electric field exerts a force on the plus charge this way and on the minus charge this way, and it simply pulls them apart: out goes a plus charge, out goes a minus charge. Now of course if you really have a capacitor, then the plus charge will get absorbed into the capacitor, but imagine this is so big that these particles will fly a long way before we have to worry about the walls of the system. So we've created a pair of particles, a plus and a minus. In fact, we've created an entangled pair. This pair is entangled in several different ways, but in particular the spins of the particles are entangled. So
we've created a primitive entangled pair. Now, if you really make the electric field big enough, you can do something which can be calculated. It is possible to calculate, from the basic rules of quantum mechanics and field theory, the pair creation of something much more exotic than an electron and a positron. You can literally calculate the process, meaning you can calculate the probability for it and you can calculate how it behaves afterwards, of two black holes: a minus black hole and a plus black hole. These are charged black holes; black holes can be electrically charged. A plus-charged black hole and a minus-charged black hole: this is just an extreme version of the same Schwinger process.

Black holes, incidentally, have many quantum states. They're not like an electron, which has two quantum states, spin up and spin down; they have many, many quantum states, because they have a large entropy. Black holes have entropy, and so black holes have a lot of quantum states, just little rearrangements of them. The black holes come out, and like the electrons, they come out maximally entangled. They come out in such a way that you could learn anything about this black hole. Remember, these black holes are not just objects; they're objects with many, many states, many things going on in them, complicated systems. They come out maximally entangled in the same way as the electrons, such that in principle you can learn about anything in Bob's black hole from Alice's black hole. That's number one: they're entangled. Number two: when you calculate the geometry, the solution of Einstein's equations that corresponds to this, what you find is that the solution has a wormhole between them. Exactly this behavior.

Let's see, we can draw it on one of these folded pictures. Here's the folded picture; on the folded strip there's an electric field, which could be pointing that way, a big electric field. What happens is a pair is created, a pair of black holes, and the black holes are created with an Einstein-Rosen bridge between them. Then one of them is accelerated one way and the other is accelerated the other way (these are opposite directions, incidentally; this one this way and this one this way correspond to opposite accelerations), and this whole thing just slides off and eventually becomes two black holes very far from each other, which are entangled, and the Einstein-Rosen bridge moves with them.

So that is a classic example (it's not classical, it's quantum mechanical, but it's a classic example) of the pair creation of entangled black holes. You can then do this over and over again and collect yourself a large number of such pairs. So after a while you have a whole bunch going this way and a whole bunch going this way. What would it look like? It would look like just a whole bunch of black holes connected by wormholes, connected by Einstein-Rosen bridges. The Einstein-Rosen bridges are hidden from our view because they're behind horizons. But then, if you like, you can take them and collapse them into bigger black holes, and by following the structure of the Einstein equations you discover that the bigger black holes are, number one, entangled, and number two, have Einstein-Rosen bridges between them. So this idea of entanglement being, if not identical to, at least a close relative of Einstein-Rosen bridges appears to have mathematical substance from a number of examples. We know other examples, and
we'll see in a hundred years whether it still stands. As I said, the next thing to try to understand, which in some ways is even stranger, is what property of the quantum mechanical evolution of a pair of black holes corresponds to this growth of the Einstein-Rosen bridge. We will talk about that next time; that's the subject of quantum complexity.

Question: electron pairs that are entangled, just two particles that are entangled, do they have an Einstein-Rosen bridge between them? Yeah, a very primitive one, which in some sense is no bigger than a Planck scale. Is this all related to the idea that a particle is actually a black hole, an extremal black hole? Yeah, but of course an extremal black hole of one bit of entropy, the smallest possible, and it's not really even that; it's lighter than an extremal black hole. But yes, it's related to the idea that there's no sharp separation between particles and black holes; very definitely connected to that. You know, if you look at the spectrum of elementary particles, light ones, heavier ones, heavier ones, eventually you get up to a mass which is large enough for them to have an appreciable gravitational field, and at that point they basically are black holes. So the sharp division into what we call black holes and what we call particles is not really a sharp division. It's a qualitative distinction (as I said, you can't jump into an electron, you can jump into a black hole), but there's a gradation of different-sized black holes, particles being the smallest and big ones being big. So that's a good point: there's no sharp division between particles and black holes, and therefore there should be no sharp division between what we just call entanglement and what we call geometrical connectivity.

Question: when did that concept first arise, that you have a geometrical solution that creates a wormhole, an Einstein-Rosen bridge, and that implies two entangled systems that are far apart? Is that part of this AMPS paradox, or does that come long before AMPS? Well, I think it was really articulated most clearly after the AMPS paradox, but it was based on things that had been understood much earlier. There was a wonderful paper by a relativist by the name of Werner Israel in the early 70s, who pointed out that the quantum state of this Schwarzschild-Kruskal version of a black hole was a highly entangled state. Now, he didn't know what to do with it, he didn't have anything great to do with it, and it was more or less forgotten until Juan Maldacena, who is largely the hero of all of this to my mind, brought it up again in the context of modern string theory and modern ideas, and realized that for these two disconnected black holes on two different sheets, the connectivity between them, the fact that you could send a signal through here even though they were in some sense in completely noninteracting worlds, was a consequence of the entanglement. But in the context that we're talking about here, it dates to much more recent than that; it dates to a paper of Maldacena and myself. There was an evolutionary growth of these ideas; these ideas don't suddenly spontaneously appear from nowhere. The idea that it was a solution to the AMPS paradox, that was a paper by Maldacena and myself about a year, a year and a half ago. But the growth of the idea of the
connection, so let's see. There was a very important observation by a young physicist, not so young anymore (he used to be a postdoc here), Mark Van Raamsdonk, in Canada. I think you could probably say it was his basic idea that entanglement is what holds space together: that when space is held together it's always entangled, and when it's entangled in the right kind of way, it means that things are juxtaposed in a smooth way. So Van Raamsdonk, and this is fairly recent. When was Van Raamsdonk's paper from? Five years? Five years ago. And since then people have been thinking about it. This Einstein-Rosen bridge picture is really a special case of it. It says that there is geometric connectivity if and only if there is entanglement: either you have two disconnected black holes, when they're unentangled, or they're kind of sewn together by the entanglement. That idea was really spelled out, not quite in this context, not exactly in this context, by Mark Van Raamsdonk in a really great paper. There are extremely interesting versions of it now being developed at Stanford, I'm glad to say, by BK and by Lampros here.

So these are evolving ideas. They're all connected, and what they're telling us is that the two subjects of quantum mechanics and gravity are not two subjects; they're joined much, much more thoroughly and much more intricately than anything we imagined. We still haven't gotten to the punchline; we don't know what the punchline to all of this really is. We're seeing fragments of connections, and the connections are such that you get the feeling that someday there is not going to be quantum mechanics and gravity. I don't even think we'll call it quantum gravity; that has two words to it. It'll be one word, "quity," say: some kind of system which just manifests different properties, such that you really can't think of gravity without quantum mechanics and you can't think of quantum mechanics without gravity. We already see this here: you can't think of the spatial connectivity of these two regions without thinking about entanglement, and you can't think about entanglement without thinking about spatial connectivity.

This is exceedingly exciting. We are living in a golden age of both quantum mechanics and gravity. There are things happening that are out of the imagination that anybody had 15 or 20 years ago, when quantum gravity was a complete mystery. It's not so much of a complete mystery anymore. I don't know how to describe how exciting it is; it's incredibly exciting, and it's being pushed by a very interesting collection of concepts from different areas. It's being very, very heavily pushed by what is called quantum information theory. Quantum information theory is basically about how information is stored and manipulated and used in quantum mechanical systems as a resource: how entanglement is used as a resource for communication, among other things. So the quantum information people, the quantum computer people, have developed a large number of concepts that are paying off in a totally different framework, the framework of understanding quantum mechanics and gravity. And in fact, if I'm right about where this is going, in particular this paradox about what it is that accounts for the growth of these things, computer science is coming into
it: concepts from computer science, concepts like complexity. What is complexity? I was in Santa Barbara, and I joked at the beginning of my seminar, paraphrasing Douglas MacArthur: old physicists never die, they just go to Santa Fe and talk about complexity. But what is complexity in computer science? Complexity in computer science is a measure of how difficult it is to solve a problem. Solving a problem means two things: it means setting up a computer program to solve the problem, and running the computer program. So there are two ways that a problem can be very complex. One is if the computer program to solve it has to be very, very long; the size of the shortest computer program that can solve a problem is called algorithmic complexity. But then there's the question: once you've programmed the computer, how long does the computer have to run before it solves the problem? That's called computational complexity. Computational complexity is something that's interesting for classical computers, but it's wildly more interesting for quantum computers, and that is now beginning to enter into the subject of black holes and quantum gravity, these complexity ideas.

So the lesson from all of this is that there is some subject, and we're touching on it, constantly touching on it in deeper and deeper ways, which is the synthesis of the ideas of space-time and gravity on the one hand and quantum information theory, entanglement, complexity, and so forth on the other. And I would guess in five years we will know a lot more. When will we know everything? When will we have a complete theory? You're not telling me; I hope you're going to make the complete theory while you're still at Stanford, right? But it could be a hundred years, it could be fifty years, it could be ten years; who knows, we don't know.

Next time I'll tell you a little bit about quantum complexity and its relationship to the growth of the Einstein-Rosen bridge. As I said, it's the growth which prevents signals from being able to go from one side to the other. And we do have another candidate to give another series of short lectures: Sean Hartnoll, who will also talk about gravity and information, but he will be talking about, I believe, the connections between gravity and condensed matter physics, superconductivity, that sort of thing. Very interesting. So we're going to continue. I'm going to lean on my colleagues until we have everybody telling you what it is that they're interested in, and we'll do it over and over again, every time physics changes a little bit, every time they change their minds and say, "I was wrong, let's do something else." We'll create a history of it. Anyway, I hope you're enjoying it. I am. Thank you.

For more, please visit us at stanford.edu
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_NonParametric_FewShot_Learning_l_2022_I_Lecture_6.txt
So our plan for today is primarily to focus on what I'll refer to as non-parametric few-shot learning methods. This is a pretty cool class of methods that actually seems to work really, really well for few-shot classification problems. And it will also be part of homework 2, in addition to some of the topics that we had covered on Monday. We'll also look at a case study in education, where we actually deployed these systems for a real live application, which is pretty exciting. And then once we talk about this class of methods, we'll also start to wrap up this module on meta learning algorithms by talking about a comparison of the three different classes of approaches that we've talked about, and also give a few different example applications. So really, the goals by the end of the lecture are to get an understanding of what this third class of meta learning methods is, to understand how to implement these non-parametric meta learning methods, and then also to start to understand the trade-offs between different meta learning algorithms and some of the applications of these algorithms. Cool. So to briefly recap, the last two lectures we've talked about black-box meta learning and optimization-based meta learning. In black-box meta learning, we parametrize the learning process with something like a big black-box recurrent neural network, by passing a training data set into that neural network, having it output a set of parameters, having a new example be passed into that, and training this whole system end to end with respect to the ability to generalize to new data points. This is what you've been implementing in homework 1. It's a very expressive approach in that it can represent lots of different learning procedures. But it can also be somewhat challenging to optimize and somewhat sensitive to hyperparameters. Then in the lecture on Monday, we talked about optimization-based meta learning algorithms that embed the structure of gradient descent into the inner-loop learning process. And we saw that these algorithms, because they embed the structure of optimization inside the learning process, give you that nice structure. So at initialization, you're already going to get something that can do at least a little bit of learning from the examples that you give it. But it does require a second-order optimization, and this can be computationally more heavyweight, especially in comparison to some of the approaches that we'll be talking about today. Cool. So really what we'd like to be able to do today is take a learning procedure and embed it inside the inner-loop process of these meta learning algorithms without requiring a second-order optimization. And the way that we're going to do that is, instead of trying to embed gradient descent into these algorithms, we're going to look at algorithms like nearest neighbors. In particular, you might think, OK, well, nearest neighbors is not a very powerful algorithm, so why might we actually do anything like nearest neighbors? But I think nearest neighbors and these non-parametric machine learning methods actually work pretty well if you are in a low-data regime. If you have a small amount of data, then these algorithms are computationally efficient because you don't have that many comparisons to be making. And they're also very simple. And at meta-test time, when we're trying to do something like few-shot learning, we're actually in a low-data regime.
And so things like nearest neighbors may actually make a lot of sense. However, during meta training time, we actually potentially have a large number of tasks and we want to be able to learn good representations from the data that we have. And so we still want to be parametric during meta training time. And so really the idea behind the class of methods that we'll talk about today is trying to use parametric meta learners that produce effective non-parametric learners. So it might be a little bit mysterious what that exactly means. So let's get into it a little bit. So say we want to do the canonical few-shot image classification problem that we've been looking at before. If we want to do something like nearest neighbors with this example, what we would do is we would take our test data point and we would compare it to each of our training examples. And once we figured out which of the training examples it was most similar to, then we can output the label of that training example. So this is pretty simple. We're just going to be comparing our test image with each of the images in the training data set for our given task. Now, the key question is when we make these comparisons, in what space do we compare, or what distance metric are we going to be using to compare these images. Now, one thing you could imagine doing is doing L2 distance in pixel space, so just doing Euclidean distance in the original space of the images. And so if you do something like that, say you were to compare this image on the right with the two images on the left just with this L2 distance. I'm curious what you think would be the closer of the two images. How many people think that the left image would be closer in terms of L2 distance? And how many people think the right image? So maybe people are trying to-- but in general, there wasn't any sort of consensus. But it turns out that this left image is actually, in L2 space what's going to be closer to this image on the right. And at least perceptually, in terms of when we see these images, at least the image on the right is something that we would probably want to be closer in terms of our distance metric. So things like L2 distance are actually really terrible for comparing in the original space of things like images. So we don't want to use L2 distance. Does anyone have ideas for what distance metrics we might consider using instead of L2 distance? Yeah? Hold the first five layers of VGG-19, and you use [INAUDIBLE] for this image. That's very specific. So you could use it as the embeddings from kind of a trained neural network, perhaps the fifth layer of the VGG network. Yeah? Learn the metrics, like the training data. Yeah, so maybe you can learn the metric with the training data. Any other thoughts? Yeah. Extract key points. So you could try to extract some key points in the image and compare those, the key points across the images. Yeah. [INAUDIBLE] so if you know your pocket that you could pull out of the [INAUDIBLE],, and maybe do cosine similarities in this huge feature space. Yeah, so you could do something like cosine similarity in a certain feature space. Yeah? This isn't a suggestion, but I just feel like I'm missing something. If we're talking about choosing a distance metric, isn't that a parameter? Like in what sense is this a non-parametric type of-- Yeah. So the question was, if we're choosing the distance metric, isn't that a parameter of the method, and so in what sense is this a non-parametric method. And so yeah, that's exactly right. 
The choice of distance function is going to be parametric. And in this case, we're actually going to be learning a distance metric, which is what was suggested. That's the parametric part of it. But once you have that distance metric, the rest is non-parametric. So you're going to be comparing in that space. Once you embed into some embedding space, everything after that is non-parametric. And so it is going to be something that's a little bit more hybrid. The meta learning process will be parametric and have some parameters that you're optimizing. And then once you optimize those parameters, at test time there isn't going to be any kind of notion of task parameters. And we'll see that a little bit when we get into some of the math. Cool. So yeah, the key idea behind this class of methods is maybe we can learn how to compare examples, using the meta training data. There are three specific methods that we'll go into, and we'll start with the simplest one, which is referred to as a Siamese network. A Siamese network is a pretty simple neural network architecture where you have two inputs, and you pass those two inputs into the same exact neural network. It's called a Siamese network because these two neural networks have the same exact parameters; the parameters are shared across the two. And what we'll do is pass two images into this Siamese network. And then once we have the output, we'll compare the resulting representations. We'll train the Siamese neural network to output whether or not the two images we're passing in have the same class. And so in this example, these two images have a different class. One is of a lion, one is of cups or bowls, and so the label will be 0, because these are a different class. Likewise, we could pass it these two images, which are both images of bowls, and we would train it to output 1, and so on and so forth. So whenever we give it two images that are different classes, we train it to output 0. Whenever they're both the same class, we train it to output 1. So the training process is a very simple binary classification problem. And you can do this with all of the available meta training data that you have. Cool. And then once you have this Siamese neural network, what you can do is use it as essentially your similarity metric or your distance metric. So you can compare your test image to each of the examples in your training data set, and ask this network which of the images is the closest. And so in particular, if we go back to the example we had before, you would pass in all pairs of (training example, test data point), ask which has the highest probability of being the same class, and then output the corresponding label. Any questions on how that works? Cool, so training will be binary classification, and at test time, you're running these pairwise comparisons. And so if you have n times k examples in your training data set, you're going to be doing n times k forward passes of this neural network, then finding the probability that's the highest, and then outputting the corresponding label. So meta training is binary classification, and then meta test time will actually be a form of N-way classification. Yeah? [INAUDIBLE] on one-shot learning then kind of thing. So you're asking, this can't work for one-shot learning? Why can't it work for one-shot learning?
If you only have one example of each class, then if you compare it to just [INAUDIBLE], the network will just learn to predict identity, kind of like that. So, similar to what we saw in the previous two lectures, you need to have at least one example per class in your training data, in your meta training data set. And so in your meta training data set, you need to make sure that, for example, you sample two images per character, or two images per class. And that ensures that you're actually going to train it to generalize. Versus, for example, if you pass in the same exact image into the network, it would just learn to memorize, and learn how to predict whether or not they're exactly the same image, or whether they're slightly different images. Cool, so now one thing that's not super appealing about this kind of approach is that we actually have this mismatch between what's happening at meta training time and what's happening at meta test time. In particular, if you were actually to write out what happens at meta test time, you're going to get something where basically you're taking this neural network, which we'll call f, or f theta. You're going to be comparing the test example with each of your training examples. So you'll be looking at this for each of the xk in your training data set. And you can essentially view what's happening at meta test time as asking this neural network whether or not the probability that these match is greater than, say, 0.5, and if it is greater than 0.5, outputting the label corresponding to that training example. So one way that you could write what's happening at meta test time would be something like this, where you're summing over all of the examples in your training data set, comparing the test input to each of those training samples, and for the one that has the highest probability, outputting the corresponding label. This isn't exactly correct, because the network might actually output greater than 0.5 for more than one of the examples, but this is approximately what it is doing. And this will correspond to y hat k, your prediction for the test example, so y hat test. So now from this standpoint, if you view this as what's happening at meta test time, what you could imagine doing instead of doing binary classification during training time is formulating an equation like this, and simply back propagating into theta. And if we did something like that, then we could actually match what happens at meta training time and meta test time. Now, we can't do exactly that, because this operator right here, which is basically an indicator variable for whether or not the probability is above 0.5, is going to be hard to differentiate through, because it's a hard operation. And so what you can do instead is something softer, where you simply multiply this probability by yk, and sum over your examples in your training data set for a given task. And if we do something like this, we can actually back propagate through this loss. So this is going to be equal to y hat test. And you can actually optimize the parameters of this network right here with respect to how accurate you are on test examples. In particular, the way this algorithm looks is very similar to the existing meta training algorithms that we have seen before, where first you sample a task. If we were doing three-way classification, this might correspond to, say, three characters.
Second, we sample two images per character, and this basically gives us our training data set and our test set for that task. We plug our training data set into this equation here, along with our test example from the test set, to get an estimate for the label. And then when we actually go to update theta, we're going to be comparing the prediction to the label. I should write this as a cross-entropy loss: something like the sum over classes of y test times log y hat test, with a minus sign in front. And so then you'll be optimizing the parameters of our comparator function with respect to how accurate your predictions are. More specifically, this corresponds to something called matching networks. What we're going to do is encode our training examples into some embedding space. And then for each of those embeddings, we will compute this function, which you can think of as potentially taking the dot product between your embedding for x test and your embedding for xk. And so in this diagram, you can think of each of these black dots as this function right here. And then each of these colored squares corresponds to the label for that training example. We take the dot product between the black dots and the colored squares, and sum over each of these examples to ultimately get the prediction for our test example. Yeah? How well do these methods work on unseen classes at inference time? So the question is how these methods work on unseen classes at inference time. In general, as long as you train it on enough of the kinds of meta-training scenarios that we've had before, where you have enough characters that you're training it on, it will actually generalize well to new image classes at test time. If you only train it on a few classes, then it will struggle to generalize. Yeah? [INAUDIBLE] parametric methods that constrain to classification tasks, only because you cannot learn anything else [INAUDIBLE]. Yeah, so that's a great observation. These methods are restricted to classification tasks. Things like nearest neighbors are specific to classification, and it's not trivial to try to extend these kinds of ideas to regression problems. Yeah? Why is the D test not going through the same embedding space as the training? Yeah, that's a great question. One thing you might notice is that in this particular architecture, they use a different encoder for the test example than for the training examples. That's mostly just a choice of their architecture. You could choose to put it through the same exact embedding space. And actually, in the method that we'll look at next, they do choose to put it in the same exact embedding space. One other note that I'll make about this architecture is that when they embed the training examples, they actually use a bidirectional LSTM. This is a somewhat old paper; if they were designing this more recently, they'd probably use a bidirectional transformer model. And that means that when it actually computes this, it can take into account not just one example, but also implicitly the other examples in your task training set.
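To make this concrete, here is a minimal sketch of the soft nearest-neighbors prediction and its cross-entropy meta-training step in PyTorch-style Python. The names embed_net (any network mapping images to feature vectors) and sample_task (an episode sampler returning support and query sets) are hypothetical placeholders, and the bidirectional LSTM that the original matching networks paper runs over the support set is omitted, so treat this as a simplified illustration rather than the exact method.

```python
import torch
import torch.nn.functional as F

def matching_net_predict(embed_net, x_support, y_support, x_query, n_way):
    """Soft nearest neighbors: attend over support labels using embedding similarity."""
    z_s = embed_net(x_support)                                   # (N*K, D) support embeddings
    z_q = embed_net(x_query)                                     # (Q, D) query embeddings
    sims = F.normalize(z_q, dim=1) @ F.normalize(z_s, dim=1).T   # (Q, N*K) cosine similarities
    attn = F.softmax(sims, dim=1)                                # soft version of "pick the nearest"
    one_hot = F.one_hot(y_support, n_way).float()                # (N*K, n_way) label matrix
    return attn @ one_hot                                        # (Q, n_way) class probabilities

def meta_train_step(embed_net, optimizer, sample_task, n_way):
    """One outer-loop update: sample an episode and backprop the query loss into embed_net."""
    x_s, y_s, x_q, y_q = sample_task()                           # hypothetical episode sampler
    probs = matching_net_predict(embed_net, x_s, y_s, x_q, n_way)
    loss = F.nll_loss(torch.log(probs + 1e-8), y_q)              # cross-entropy on the query labels
    optimizer.zero_grad()
    loss.backward()                                              # gradients flow through the soft comparison
    optimizer.step()
    return loss.item()
```

Note that the only learned object here is embed_net: at test time the same matching_net_predict call is run with a new support set and no further optimization.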
But basically at a high level, you can think of this method as embedding things, doing nearest neighbors in that embedding space, and then back propagating all the way into the parameters of your embedding function. Of course, we need to do that sort of nearest neighbors function in the soft way, so that we could differentiate through it. And that's why we get a function that looks like this, where we are rather than taking kind of this hardmax, we're going to be taking more of something like a softmax. So the whole thing is trained end to end. One thing that's nice about this paper is it actually really emphasizes how you should really try to match what's happening at meta training time and meta test time. And they were able to get substantially better performance than something like Siamese networks. Cool. So if we walk through the algorithm, we also did this somewhat on the board, but more formally it's very similar to the algorithms that we've seen for the black box approach and the optimization based approach. And what's different is just primarily just the third step, although also the fourth step. So in the third step, instead of actually explicitly computing parameters for the task, we're actually going to be skipping that step, and directly making predictions for the test examples. You can view this as kind of integrating out the parameters, the test parameters. And hence, why we're referring to these as non-parametric few-shot learning methods. And then of course, because we're just computing test predictions directly instead of using a loss function in the form of step four, we're going to be updating the meta parameters simply using a loss function that compares the predictions and the labels. How does this generalize in the case where you have more than one data point [INAUDIBLE] of the same [INAUDIBLE]? Yeah, that's a great question. And that was actually the next question on my side. So in particular, the question is, what happens if you have than one example per class. So I guess I was actually thinking about posing this to you guys. So maybe, does anyone want to answer that question? Yeah? You can compare to the average embedding over the different images of the same class. Yeah, so one thing that you can do, is you can compare the average embedding. You're foreshadowing what happens next. Yeah? We could take the score for each class, count how many [INAUDIBLE] have probably greater than 0.5, and the class which has the highest count, that could be the prediction. Yeah, so you could do some sort of voting scheme. And in practice, that's actually what matching networks tends to do. So basically you're going to be-- this equation is still valid for scenarios where you have more than one example per class. And essentially, each of these is going to vote on what the label is. And if you accumulate enough votes, the thing with the highest accumulation of those scores will then win the vote and get to be able to make the prediction for the label. Now, there's a downside to this voting approach, which is that all the votes are cast somewhat independently of one another. And this means that you might have scenarios where the test example is, maybe there is one example with the incorrect label that actually has a very high vote, but everything else has a very low vote for that example. And you might get examples essentially where that overpowers the actual correct label and doesn't give you exactly what you want. 
As one maybe rough example for what this might look like, you could think of this as mimicking something a lot like nearest neighbors. And there's a failure mode for nearest neighbors, where if, for example, you have data that maybe looks something like this, where you have just a two-way classification problem. You're trying to classify between positive and negative examples. If you end up getting an example that's say, right here for example, something like nearest neighbors will give you a negative prediction, even though there's a lot of positives in this general vicinity. And if instead of doing nearest neighbors you aggregated information across the examples into what we'll refer to as a prototype for each class, then you can start to mitigate that issue. So if this is your embedding space, and you average the embedding per class, you'll get kind of one prototype that is maybe somewhere right here for the positive class. And you'll get one prototype that's somewhere maybe around here for the negative class. And then if you compare your example to these prototypes, you'll then see that the distance to the positive prototype is less than the distance to the negative prototype. And then you'll get something that's closer to the right answer. And so that's the idea behind the last approach that we'll talk about, which is that instead of trying to perform these comparisons independently, we'll try to aggregate class information to create a prototypical embedding for each class, and then compare the test example to those prototypical examples. So we'll just do nearest neighbors to the prototypes instead of nearest neighbors to the individual training examples. Cool. So what that looks like is basically exactly what's in this picture. If you were actually to write it out in math, you'll compute your prototype. We'll refer to this as Cn to match the picture. This will be basically just an average over an embedding of the examples for that class. And so if we take example-- I'll use xk here. We will only want to average over the embeddings that correspond to class n. And so we're only going to do this summation over the examples where yk is equal to n. And this is a summation over all of the examples in your training data set. So this is basically just going from these blue pluses to the purple plus, and the blue negatives to the purple negative. So this is a very simple average. And then once we have these prototypes, we can compute the distance between the prototype and the test example, rather than looking at the distance between the individual examples. And in practice, what this algorithm will do, is once you compute these distances. Then you'll just negate the distances to get a similarity score. And then take a softmax to ultimately get the probability that y hat for the test example is equal to class n. So written out on the slides we embed all of our examples using the function f, average those embeddings. And so unlike the previous slide, we're actually just going to be using the same exact embedding function for our training examples and our test example. And then this is grinding out the full softmax equation, where we take the distance, we compute the distance between the example and the prototype, and then kind of exponentially and normalize in order to compute a probability. Can you [INAUDIBLE] the distance function as well [INAUDIBLE] Yeah, so in the original paper, they use Euclidean distance or cosine distance as the distance function. 
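As a rough illustration of the prototype computation just described, here is a minimal sketch in the same PyTorch-style setup as before, with embed_net again standing in for the embedding function f and squared Euclidean distance as the fixed distance function; implementation details in the actual prototypical networks paper may differ.

```python
import torch
import torch.nn.functional as F

def proto_net_predict(embed_net, x_support, y_support, x_query, n_way):
    """Prototypical-network prediction: softmax over negative squared distances to class means."""
    z_s = embed_net(x_support)                                    # (N*K, D)
    z_q = embed_net(x_query)                                      # (Q, D)
    prototypes = torch.stack(
        [z_s[y_support == n].mean(dim=0) for n in range(n_way)]   # c_n = mean embedding of class n
    )                                                             # (N, D)
    dists = torch.cdist(z_q, prototypes).pow(2)                   # (Q, N) squared Euclidean distances
    return F.log_softmax(-dists, dim=1)                           # log p(y = n | x_query)

# Meta-training just minimizes the query cross-entropy and backprops into embed_net; there are
# no task-specific parameters to compute at meta-test time:
# loss = F.nll_loss(proto_net_predict(embed_net, x_s, y_s, x_q, n_way), y_q)
```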
In practice, you could actually also learn a distance function, somewhat similar to what matching networks does here as well. So instead of using a fixed distance, this could basically be a learned network. If it is learned, then it's going to have some additional parameters, and when you run the meta training process, you're going to optimize both the parameters of f and the parameters of this distance function right here. Yeah? Can you explain why there's the minus d on the top and at the bottom there's no minus d? Why there's a minus d here, and why-- oh, that's a typo on the slide. Definitely. Yeah, that's a great catch. So on the slide, that should be a minus d in the denominator. Yeah? In practice, how will [INAUDIBLE] Yeah, so there's a paper that I'll mention a few slides from now called relation networks, and they basically meta-learned the distance function as well. I think that on the benchmark problems, they weren't seeing significant benefits from using a learned distance function over just using something like cosine similarity or Euclidean distance. And one reason to expect not to see too many benefits is that f itself has a lot of expressive power to embed into whatever space you might want, and that's already going to impose different distance metrics. So you may not actually need something on top of just embedding into this other space. But it's possible that there are some applications where it's difficult to impose a certain metric space, and where learning the distance function is helpful. Cool. Actually, it's on the [INAUDIBLE] slide. So this is the relation network paper. They also do this embedding and then compare the examples, but then they lastly have this relation score that corresponds to this learned distance function. Now there are a couple of other things that you could do as well. There might be some examples where some of your positive and negative examples are not easy to cluster nicely. For example, if you're trying to classify between cats and dogs, you may have some breeds of cats that look a lot like dogs. If you have something like that, then maybe you actually have kind of a cluster over here of negatives. And if your algorithm is having a lot of trouble finding a metric space where everything is clustered together nicely, you could also use something where you actually have a mixture of prototypes. And that's what this paper did here: instead of having just one prototype per class, they had multiple prototypes per class. In a lot of examples, this isn't critical for getting good performance, but it's something that's worth mentioning. And then the last thing that I'll mention is that instead of doing something as simple as nearest neighbors, there are also approaches that do something more complicated, where you formulate a graph and do message passing on that graph in order to ultimately figure out the relationship between different examples. Cool. Yeah, so that's the gist of it for these non-parametric methods. I guess to summarize, unlike black box meta learning and optimization based meta learning, we're not ever going to be getting an estimate for the task parameters. Instead, we're typically going to be computing some embedding space and then doing nearest neighbors in that embedding space. Yeah? Does it converge quickly into the [INAUDIBLE]?
Yeah, so one of the things that's really nice about this class of methods is that this process is entirely feedforward. So you compute your embeddings in a feedforward manner. And then you compute your prototypes. And then you do this distance function. So it ends up being, you don't have to do any sort of gradient descent process. It ends up being pretty lightweight and pretty fast to train. And so you'll see in homework 2 that this class of methods can actually be quite nice for classification problems. Yeah? What's message passing, the last idea? So the question is, what is message passing. Going into that in depth is beyond the scope of what I'll cover in lecture. Message passing is, I guess very briefly, you can think of message passing as if you have a graph, in this case a graph of different examples, and you have some notion of how these examples are related to each other. Like some of the examples maybe have a very strong relationship, some of the examples have a weaker relationship. Message passing algorithms basically try to pass messages along the edges of that graph. And you iteratively pass messages in order to ultimately try to converge to some notion about the true relation between these examples. But I encourage you. You can read this paper for more detail. There's also, if you take like Stefano's course on deep generative models, he also I'm pretty sure it covers undirected graphical models, and things like message passing there. Yeah? Sorry, which one was it that you said didn't require gradient descent? So none of these methods require gradient descent at meta test time. All of them require a form of gradient descent in order to optimize your objective, to optimize your embedded [INAUDIBLE].. Cool. So I'd like to run through a case study of actually using this algorithm in practice. I put one example of a previous years' case study on the slides. That was actually in a pretty cool healthcare setting, where they're looking at image classification of trying to classify different skin conditions and skin diseases. But the case study that I want to cover this year is an example in education that I'm pretty excited about. I'm a little bit biased, because I was involved in the effort. But I think education is a really cool application area. And I should also mention kind of at the start of this project, we really had no idea how hard or easy the problem would be. And so, yeah, it was kind of rewarding to see how it went. So the problem that we were looking at is the feedback problem. And as one instance of this problem, there was a course offered by some folks at Stanford called Code in Place, similar to Shelter in Place. While you're sheltering in place, you could code in place. And it was a free course, free intro to computer science course. And in the second iteration of the course, it had more than 12,000 students from more than 150 different countries across the world. And in the course, they gave a diagnostic exam to help students understand where they're at with the material. And they wanted to be able to give feedback on the diagnostic. And what the diagnostic was, is students submitted open ended Python code snippets that was trying to solve different problems. And they estimated that if you were to try to give feedback on all of the student's code, it would take a very long time, in particular, take you more than eight months to try to do that. 
And so for this reason actually in the first iteration of the course, they didn't give any feedback to students on this diagnostic. They just took it, and then they got to see what the solutions were. We tried to see if we could give feedback to the students. Now, there's a way to formulate this as a classification problem. And in particular, in many of the courses that you've probably taken at Stanford, when you submit an assignment or an exam, you get feedback through a rubric which tells you what kinds of misconceptions you made on a given question. And we can formulate rubric grading as a classification problem, where the input, the x example is the open ended Python code. And the label corresponds to basically filling out the rubric. And so each rubric will be a multi-label classification problem where you need to be able to predict whether or not the student has that particular misconception. Now, you might think, well, OK, if this is a classification problem, it must be pretty easy to solve. Because classification is pretty straightforward with machine learning. But the challenge with it is that, first, we don't have that much annotation data. We don't have tons of data on the internet that tells us feedback on student programs. There's also these long-tail distributions where students will solve problems in many, many different ways. And then lastly, and perhaps most importantly, every time you give an assignment or an exam, usually it's somewhat different from the previous time that exam or assignment was given, and then also oftentimes the TAs are different, the instructors are different, and so forth. And so you don't have a static data set or a static problem. You actually have lots of different assignments, exams, rubrics, student solutions, and so forth. Cool. So does anyone have any ideas for how you could frame this as a meta learning problem? I guess to start off, the meta training data set that we have available to us is four final exams and four midterms from CS106a. This has a number of different questions and each student solution has feedback from a rubric. Yeah? I guess you can look at it question by question. And then for each question, you have a rubric, and I guess each type of task within the rubric, would be task for the model. Yeah, exactly. So you could go question by question. And essentially each item on the rubric can correspond to a different task, where you can formulate. For each rubric, for each question will basically be a different task, and your goal is to very quickly be able to give feedback on that rubric item with a small amount of label data. And so each rubric has several items. And every question has possibly its own set of rubric items and options. And so for that reason, you can basically have each rubric item as a different task. So if you had a string insertion problem, and your rubric looks something like this, where maybe one option is for the solution to be perfect, another option is for the student to incorrectly insert the wrong character, and so on and so forth, then each item here can be just a different task for running meta training. And then once you have all these tasks, you can apply the meta learning algorithms that we've looked at in the class. And so what we're going to do is we're going to apply prototypical networks, which is what we had talked about on the last slide. There's also a typo in this equation again. And x corresponds to a sequence of discrete tokens. 
So it's going to be, in this case, Python code, or pseudo-Python code, depending on how good the student was. And for the embedding function, because it's going to be text, we're going to use a BERT-based model called the RoBERTa model. That's going to take as input the Python code and output an embedding of that Python program. Unfortunately, when we use Prototypical Networks out of the box with a RoBERTa model, it doesn't do very well. So attention isn't quite all you need, and there are some tricks that we needed to get it to work a lot better. The first was that instead of only using the tasks from the rubrics in our data set, we can augment those tasks with other tasks that can be constructed in a self-supervised manner. So we can construct a task, for example, by predicting the compilation error that Python returns if you try to run the program. We can also construct tasks similar to masked language modeling, where you are trying to predict the token that has been masked out. That can also be formulated as a classification problem where you want to classify whether or not that token is a for token, or an in token, and so forth. So that's going to augment our data set significantly. Another thing that we can do is, instead of only giving it a few examples, we can also incorporate side information. We have information about the name of the rubric option, as well as the text of the question, and we can basically pass this into the embedding function to inform the network what the task might be. So when we encode the program here, we're also going to prepend the side information, which will also be encoded with a BERT-like model. Great, and then the last trick is that instead of only using data from the exams and assignments that I mentioned before, we can also use a pretrained model. So instead of randomly initializing the RoBERTa model, we'll initialize it with a model called CodeBERT, which was pretrained on a ton of unlabeled Python code from the internet. Cool. So the gist of the method is still Prototypical Networks; it's just that there are three small changes to it. One is that we are using these additional tasks. The second is that we are incorporating the side information into the embedding. And the third is that we're going to pretrain the weights of the encoder. As a whole, the model looks like this, where we have our side information that's encoded and prepended into the transformer layers. We embed the student code into embeddings, and we'll then average those embeddings to form a prototype for each label, for each rubric option. And then we'll train the whole network end to end, initialized with the CodeBERT model. Cool. So how well does all this work? Our first results were actually just offline, on held-out exams and held-out rubrics from Stanford data. We found that it outperformed supervised learning by 8% to 17%, which is fairly significant-- 8% in the held-out exam case, and 17% in the held-out rubric case. In the case where you have a held-out rubric, it's actually more accurate than a human TA. It turns out human TAs are actually not that good at grading; grading code actually involves debugging the code itself, and so, yeah, it's actually pretty hard. But there's also still a lot of room to grow in terms of the held-out exam. Held-out exams are harder than held-out rubrics, because they might involve tokens that were unseen during training, since the questions and things are entirely new.
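As a hedged sketch of the kind of model just described (not the exact system from the paper or the deployment), the pieces fit together roughly as follows: a RoBERTa-style code encoder, initialized from a CodeBERT-like checkpoint, embeds each student program with the rubric and question text prepended as side information, and rubric options are then predicted by nearest prototype. The checkpoint name, the use of the first token as the pooled embedding, and the helper names are all assumptions for illustration; the auxiliary self-supervised tasks and the fine-tuning loop are omitted.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Checkpoint name written from memory; treat it as a placeholder for a CodeBERT-style encoder.
CHECKPOINT = "microsoft/codebert-base"
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
encoder = AutoModel.from_pretrained(CHECKPOINT)                  # RoBERTa-style transformer

def embed_programs(programs, side_info):
    """Embed student programs, prepending rubric/question text as side information."""
    texts = [side_info + tokenizer.sep_token + code for code in programs]
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state                  # (B, T, D)
    return hidden[:, 0]                                          # first-token pooling (an assumption)

def rubric_prototypes(support_programs, support_labels, side_info, n_options):
    """One prototype per rubric option, averaged over the few labeled support programs."""
    z = embed_programs(support_programs, side_info)               # (N*K, D)
    labels = torch.tensor(support_labels)
    return torch.stack([z[labels == n].mean(dim=0) for n in range(n_options)])

def grade(query_programs, prototypes, side_info):
    """Score each rubric option by negative squared distance to its prototype."""
    z_q = embed_programs(query_programs, side_info)               # (Q, D)
    dists = torch.cdist(z_q, prototypes).pow(2)
    return (-dists).softmax(dim=1)                                # probability of each rubric option
```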
The more exciting thing was actually trying to actually deploy this, in Code in Place. In particular, a lot of this was in collaboration with Chris Piech, and Chris Piech promised the Code in Place people that we would give them feedback on their diagnostic. On May 10th, the students took the diagnostic. And I think we had about one week to give feedback on all the assignments, or on all the solutions. Alan and Chris made a cool UI that looks like this, where we paired the predicted rubric option with text that describes the feedback for that rubric option. And that was presented to the student in this purple box here. The students then evaluated the feedback, whether or not they agreed with the feedback or disagreed with the feedback. And we also tried to use attention to highlight where the algorithm thought the error might be arising from. And it's also worth mentioning that things like syntax errors can prevent unit tests from being useful and giving feedback. And so there are some solutions we were able to give feedback automatically to. But the bulk of the solutions we were not able to give automatic feedback on. Yeah? Is there any additional things you need to do in order to get the model to pay attention to places where the student might have gotten it wrong? Specifically for the highlighting? Yeah. Yeah, so the highlighting, I think it worked a pretty good amount of the time. But it didn't work 100% of the time. The way that we did that, I don't know if I would necessarily recommend this. But the way that we did it, is we randomly masked out part of the input, and then tried to see if the prediction of the model changed. And if it did change, that's an indication that the error may have occurred in that part of the program. But it wasn't the most reliable. And in general, interpretability is a very open area of research in deep learning. Yeah? Was there an ablation study to see which one of the tricks helps the most? Yeah. So there's a pretty detailed ablation study. I don't have them in the slides here. But you can take a look at the paper to see it. The gist is that actually all of them helped a lot. And the three that I covered, I think that they're helping by upwards of 10% accuracy. Yeah? So can this model [INAUDIBLE] logic in edge cases, other than [INAUDIBLE],, like when a student makes an error, with an edge case or something like that? Yeah, so the question is, can this model also understand if students-- like kind of edge cases and so forth. So it's only going to understand the rubric options you give it. And so if there's something that's not on the rubric, it's not going to understand that. And it also still needs a few examples of test time for the rubric. Cool. So let's get into some of the results from actually the live deployment. So we actually gave this to the students in the Code in Place course. We got humans to actually volunteer to give 1,000 feedback, or feedback on 1,000 different student solutions. And then we had the system give feedback on the remaining 15,000 solutions. And then around 2,000 of the solutions could be auto graded, and then they weren't included in any of the analysis. And then in terms of giving the feedback, for the 15,000 that were graded by the system, that's the feedback that the students got. For the 1,000, they got the human feedback. The students didn't know if they were getting human feedback or AI based feedback. Although they did know that we were running a study as part of the process. 
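The masking trick for highlighting where an error might come from can be sketched as a simple occlusion-style attribution. This is an illustrative reconstruction rather than the actual code used in the deployment, and predict_fn is a hypothetical function mapping a token sequence to a predicted rubric option.

```python
def localize_error(predict_fn, tokens, mask_token="<mask>"):
    """Occlusion-style attribution: mask each token and check whether the predicted
    rubric option changes; positions that flip the prediction get highlighted."""
    base_pred = predict_fn(tokens)
    suspicious = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        if predict_fn(masked) != base_pred:
            suspicious.append(i)           # masking this token changed the prediction
    return base_pred, suspicious
```

As noted in the lecture, this kind of attribution is not fully reliable, which is consistent with interpretability being an open research area.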
First, we looked at how much the students agreed with the feedback from the Prototypical Networks based system versus the human feedback. And we were actually somewhat surprised to see that they actually agreed with the feedback from the system slightly more. And also in general, they agreed with the feedback a lot. So in general, it was like-- I think it was 97% versus 98%, or maybe 96% versus 97%. So it seemed to work actually quite well. Now, they might agree with the feedback, but not actually find it useful. For example, if you always said good job, they might kind of agree with that, but not find it very useful. And so we asked them how useful they found the feedback out of a rating from 1 to 5. And they gave it a 4.6 out of 5 in terms of usefulness, which suggests that they actually find it useful in pointing out their misconceptions. Yeah? So [INAUDIBLE] not all of those students would have responded with yes or no. Some of the students would not have responded at all. So is it the same component of bias? Was is [INAUDIBLE] and for the 1,000 students, that would like get the score to-- give the feedback properly? So you're asking what about students who didn't give feedback on the feedback? Is it effective of [INAUDIBLE] experience giving or not giving the feedback, if [INAUDIBLE] Yeah, that's a great question. I mean so first the students didn't know if it was human feedback or not. And so this comparison should very much hold water, because they're not going to like abstain at different rates. The other thing that we did is in the interface we gave the feedback one by one, and to go to the next one, to actually see the feedback for the next question, they had to give feedback on the first question. And so we required that they stated whether or not they agreed with it or not. It's possible that not all students actually looked at the feedback. I don't know exactly what the rate of that was. Yeah? Did experts also grade this, apart from the students by any chance? The humans here were actually kind of basically volunteer TAs for the course. I meant the AI feedback. Did any experts also review the feedback and said whether the feedback would be useful or not? Got it. Yeah, we did not have experts look at the feedback. So these were the only evaluations that we ran. Yeah? Was there a difference in the usefulness found between the AI feedback and the human feedback? That's a good question. I don't know off the top of my head. But I think the answer was no. And I should mention that in some ways this is kind of a hybrid human AI system in some ways. Because this text was written by Chris actually. And so it was doing the bulk of the work, which was to figure out which rubric item it was. And then for each of those rubric items, Chris wrote that text. And the text from Chris was put in the box. This still saved like almost all of the work needed to give feedback. Because Chris just needed to enter that text for each of the items in the rubric once. But I think that because it's very similar, I wouldn't expect there to be a significant difference in usefulness. Cool. And then the last thing that we checked, just as a sanity check, we weren't really expecting to see any bias. But we wanted to check that the model wasn't picking up on something that we wouldn't be aware of. And so we looked at the agreement by the different-- the two most common gender demographics, and the three most common countries in the data set. And saw that there weren't any signs of bias. 
This was expected, because we removed all the comments from the code. And really there probably isn't that much information in the code that reveals this sort of thing. But it's always good to do these sorts of checks in these kinds of studies as well. Cool. Yeah, so that was a case study of actually using Prototypical Networks in a real application. We're also trying to continue to improve on it. We actually have a little bit more data now. And so we're hoping to improve that bar a little bit more. Now, I'd like to talk a little bit about how the different approaches compare. So we've talked about black box meta learning. We talked about optimization based meta learning. And today we talked about these more non-parametric approaches. And I think it's useful to take a high level view to understand when should you use one algorithm versus the other, and how do they compare to each other. First, we can compare them at a conceptual level. So we walked through this computation graph perspective on meta learning in Monday's lecture. And we saw that both black box meta learning and optimization based meta learning are both a computation graph that takes as input the training data set and the test example, and makes a prediction for that test example. Non-parametric approaches can be viewed in the same exact way. So you can also view something like prototypical networks as having the same form of computation graph. It's just what happens on the inside and the inner loop that differs between it. And so in particular, it makes its predictions for test inputs using something like nearest neighbors or nearest neighbors to prototypes, where the equation for the prototype is the average embedding. So as a whole, you can view all of these approaches as being the same family of meta learning algorithms. And they're all optimized end to end with respect to some meta learning objective. The difference is just whether or not you don't give it any structure at all and you represent it as a neural network, versus embedding gradient descent, versus embedding something like nearest neighbors. This also suggests that it's possible that there may be a fourth class of approach that comes up that embeds something that looks a little bit different from these other methods. And like we mentioned before, you can also, again, mix and match different aspects of this computation graph, and get algorithms that aren't clearly in one of these three buckets. So for example, there are algorithms that both condition the network on the data and run gradient descent. So that would be a hybrid between a black box approach and an optimization based approach. There is also an algorithm that does something like relation networks, which we had talked about here where you learn this distance function, and actually run gradient descent on that relation network embedding. And so that would be a hybrid between optimization based and non-parametric based. And then there's also approaches that do something like MAML, but initialize the last layer as a Prototypical Network. So I guess in general I think it's useful and pedagogical to think about these three different categories. But there really is a spectrum in between these and there's lots of algorithms that don't fall into one of these three categories super cleanly. Then beyond the conceptual similarities between these algorithms, we can also think about the properties that they have. One property that we've talked about a little bit is expressive power. 
And by that, I mean the ability for the learner to represent a wide range of learning procedures. And we want to have expressive power, because it means that we might be able to scale to scenarios where we have lots of meta training data, and we want to learn a really good learning algorithm. And it also means that it might be applicable to a wider range of domains, where maybe we don't have good learning algorithms. Beyond expressive power, there's a second property that I think is useful, which I'll refer to as consistency, which is that regardless of what you do in the meta training process, it would be nice if your learning procedure monotonically improved as you gave it more training data. And this is useful because you might get a test task that's pretty different from your meta training tasks. And if you do get an out of distribution task, it'd be nice if your algorithm still does something somewhat reasonable on those tasks. And so this property will reduce the reliance on having a large number of meta training tasks. And it should give you, in principle, somewhat better OOD task performance. And you can remember the performance that we looked at when we made the task more and more out of distribution, and we saw that an algorithm like MAML, which is actually consistent, it does better than algorithms that are not consistent. Yeah? Is consistency the same concept as generalizability? The question is, is consistency the same concept as generalizability. I think that consistency implies that the algorithm should generalize better, but not vice versa. You could have something that does seem to generalize well, but isn't guaranteed to improve with more data in expectation. Yeah, they're certainly related. Cool. So I think these properties are pretty important for a lot of applications. And we can think of each of these algorithms within the context of those properties, and black box methods have complete expressive power, but are not consistent. If you give them more data, they won't necessarily get better. Optimization based methods are consistent in that they reduce to gradient descent. And they're expressive if you give them a very deep model for supervised learning settings. And then non-parametric methods are expressive for most architectures that you give it. And under certain conditions, they can also be consistent. Essentially, you can expect them to be consistent if you're embedding function doesn't compress too much about the input. If it compresses too much, then it may no longer be consistent because you may have two examples that are very similar to each other, but your embedding space doesn't actually put them as similar to each other, because it kind of compressed away some of the details. And then beyond these properties, I think there's other pros and cons of these algorithms as well. We talked a little bit about the pros and cons of back box and optimization based methods in the previous lectures. And these were that black box is easy to combine with a variety of learning problems, but can be challenging to optimize and as a result, be data inefficient. Optimization based meta learning algorithms have a positive inductive bias at the start of meta learning, and they also are pretty good at handling varying amounts of K and large numbers of examples well, because you're averaging across those examples when you compute the gradient. But they involve a second order optimization that can be compute and memory intensive. 
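One way to see the "same computation graph, different inner loop" point from the last couple of slides is to write all three families against a single signature that maps a support set and test inputs to predictions. Everything below (the network, gradient, and embedding interfaces) is a placeholder sketch rather than any particular paper's implementation.

```python
import numpy as np

def blackbox_adapt(net, support_x, support_y, test_x):
    # Black box: one network reads the whole support set plus the test input.
    return net(support_x, support_y, test_x)

def optimization_adapt(init_params, grad_fn, predict_fn,
                       support_x, support_y, test_x, lr=0.01, steps=5):
    # Optimization-based: a few gradient steps on the support set, then predict.
    params = init_params
    for _ in range(steps):
        params = params - lr * grad_fn(params, support_x, support_y)
    return predict_fn(params, test_x)

def nonparametric_adapt(embed, support_x, support_y, test_x):
    # Non-parametric: compare test embeddings to per-class mean embeddings (prototypes).
    # Class labels are assumed to be 0..C-1 so the argmin index is the predicted label.
    z, zq = embed(support_x), embed(test_x)
    protos = np.stack([z[support_y == c].mean(axis=0) for c in np.unique(support_y)])
    dists = ((zq[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=-1)
```

In all three cases the outer loop is the same: differentiate a query-set loss through this function and update the meta-parameters, whether those are the network weights, the initialization, or the embedding.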
Now for non-parametric approaches, they're an entirely feedforward process. You never have this gradient descent step. And so as a result, they end up being computationally very fast, and usually pretty easy to optimize. In practice, people have found that if you have varying K, It's somewhat harder to generalize to varying K. I don't honestly have great intuition for why this is the case. But it's kind of an empirical observation that people have made in the past, and also that it can be difficult to scale to very large K. The reason why it's difficult to scale to very large K is because you have to make comparisons to all the examples in your training set. And in general, nearest neighbors and non-parametric methods don't scale to very large data sets because the runtime of your algorithm is O of n, or O of n times K in this case. And then the last thing which was mentioned before is that these methods are entirely limited to classification. Cool. And then I guess the other thing that I'll mention here is that on many few-shot learning benchmarks, if you tune these algorithms well, oftentimes you'll see that all three of these can perform quite well on those benchmark problems. I think this says a little bit more about the benchmarks than the methods. I think the benchmarks may be a little bit too easy. And I think that which method you use will depend on your use case. So in general, my recommendation is if you have a supervised classification problem, I think that non-parametric methods would generally be my recommendation, because they're very fast and easy to optimize. And they tend to work well for classification. But if you don't have a classification problem, then you need to use one of the other two approaches. And usually in that case, I might recommend an optimization based meta learning approach. And there are a few instances where I might recommend a black box approach. Black box approaches I think are actually quite useful in reinforcement learning scenarios, in part because we don't actually have good inner loop optimization methods for reinforcement learning. And then lastly, if you have a ton and ton of data, things like black box approaches can make sense because they're so expressive, and because you don't care as much about data efficiency. And that's why we've seen things like GPT-3 I think be very effective with a black box model. Yeah? What did you exactly here mean by entirely feedforward? We are doing training, correct? Right, so by entirely feedforward I mean that at meta test time, it's entirely feedforward. Like the inner loop process is a feedforward computation graph. But actually the same could be true for a black box model as well. Although black box models often may have some amount of recurrence. But yeah, so I mean in contrast to optimization based methods where you're running gradient descent at test time, these approaches are just a forward pass through a neural network. Yeah? So for all of these meta learning algorithms, we still need to have a very large number of tasks for them to sample from ID. If we have a small number of tasks, where generating a new task is still expensive, so we only have dozens of tasks instead of hundreds or thousands, would you still recommend using multi-task learning instead of meta learning or other ways to modify them? Yeah, so the question is, all of these methods require a fairly large number of tasks. And if you have a small number of tasks, would I recommend using multi-task learning instead of learning? Yeah. 
So first off, I would say that in general, in settings where you have a small number of tasks, that's where consistency can be very important. Because if it is still running something like gradient descent on that new task, then it is less reliant on having a large number of meta training tasks. And then there's I think two other things that you can do when you have a small number of tasks. The first is that you can do task augmentation, which we saw in the education example. In some scenarios, it's possible to come up with those kinds of tasks. In other scenarios, it might be more difficult. And then whether or not to use multi-task learning or meta learning, I think it depends a lot on the particulars of the problem. In general, if there's a small number of tasks, I do think that multi-task learning can be a better approach, especially if you don't mind training on all the tasks at once. Whereas in meta learning, one thing that's nice about these meta learning algorithms is you can train on all your meta training tasks first, then encapsulate that into a model, throw away your meta training task data, and then apply that to the meta test task. So I think that generally, whether to use meta learning or whether to use multi-task learning will depend a bit on the considerations like that. Yeah? What is [INAUDIBLE] or why do you prefer non-parametric over optimized [INAUDIBLE] because I think [INAUDIBLE]. Is it just because compute, or is there's another reason as to [INAUDIBLE]? Yeah, so the question was for classification tasks, is there a reason to prefer non-parametric methods over optimization based methods. And generally, my recommendation is just because of compute. These generally will typically require less compute during the meta training process. They'll also be a little bit lighter weight. And so in the education example, when we were processing Python code with these BERT based models, trying to do a bilevel optimization with a BERT model gets computationally expensive fairly quickly. And if you have a feedforward process, these computational benefits are quite nice. But again, that's my default or rough recommendation. And there may be applications where an optimization based method may be preferable, especially for example, if you have a larger K, or a variant K, or something like that. Yeah? You said there were intermediates, or there was a spectrum between these. What would be a combination between a non-parametric method and one of the other two? Yeah, so I mentioned one combination on the previous slide, and particularly what it was doing is it was-- oh, I didn't mention on this slide. Which slide was it? So these second two methods are examples of hybrids of non-parametric and optimization based methods. For example, the second one, it actually learns this embedding space, and then does gradient descent on that embedding space. And so that's a little bit of a hybrid. I also think that matching networks itself is somewhat of a hybrid between non-parametric methods and black box methods, because it uses this bidirectional LSTM to encode the examples in the training set. Cool. Great. And then in the last eight minutes, or actually before we do that, first, there's also a third property which we'll talk about in a couple of weeks, which is uncertainty awareness. In general, if you're in a few-shot learning regime, there may be some ambiguity with respect to what the task is. 
And it'd be nice if the network would tell you, given the few examples that I have, I need more data in order to figure out a good classifier for this task. And this can be useful in active learning settings and settings where you need calibrated uncertainty estimates, like in safety critical settings, also in reinforcement learning. Or if you care about driving things from first principles from a Bayesian standpoint, this can also be useful. And we'll talk about uncertainty where meta learning algorithms in-- actually, it might be three weeks. It's either two weeks or three weeks, in Bayesian meta learning lecture. Cool. So in the last seven minutes, I'd like to just briefly run through a few applications, just to give you even more of an idea of the kinds of problems that these methods can be applied to. And also in some ways some of these are actually somewhat creative in actually how they use meta learning algorithms. So in the first lecture, I showed you this video of doing one-shot imitation learning. And the different tasks corresponded to manipulating different objects. And the training example corresponded to a video of a human. And the test example corresponded to a teleoperated demonstration of that task. The model here was an optimization based meta learning algorithm. So it was something like MAML. And one thing that was cool about this is that when you have a video of a human, you don't actually have labels. You just have input images. You don't know what actions the robot should take. And so this approach actually used a learned inner loss function. And so instead of only learning the initialization of the model, it also learned a loss function that was used to run gradient descent in the inner loop. And so this is an example of something where instead of only meta learning one component of the system, you can meta learn other components of the system as well. And so the way that this ended up working is that at test time you give it a video of a human doing the task. It runs gradient descent on this video with the learned inner loss function to get a policy for the task. And then the result of running that policy on the robot is something like this, where it can successfully figure out that it should be going to the red bowl. A second example is looking at predicting the properties of molecules. This is potentially useful in drug discovery problems, where you only have a small amount of experimental data for a given molecule. The task is to predict the properties and activities of different molecules. For example, you might have some experiments that are cheaper or easier to run, and you want to predict activities that are more expensive to run. And here they used optimization based methods. They actually looked at MAML, first order MAML, and a variant of MAML that only updates the last layer of the model in the inner loop. And they're using a graph neural network as the base model. And they found that these meta learning algorithms were able to do better than fine tuning or K nearest neighbors. And then the last application that I'll mention here is few-shot motion prediction. This could potentially be useful for human robot interaction, where you want to predict the motion of people, or in autonomous driving where you want to predict the motion of other cars. And the task corresponds to different users and different underlying motions, basically different trajectories. 
And the training data set corresponded to the past K time steps of motion, and the test set corresponded to the future seconds of motion. They use kind of a hybrid of an optimization and a black box approach here. So they had this learned update rule in the inner loop. And they were able to, in this case, predict the motion of these human skeletons fairly accurately, and were able to do so much more effectively than a multi-task learning approach or a transfer learning approach. Cool. So those were just three applications that I think survey the spectrum of problems that you might look at in meta learning examples. There's lastly, one closing note that I'd like to mention which gets at one of the things that we saw in the first application. Which is that so far whenever we sample a train set and a test set, we've been looking at examples where we sample them by ID from the same distribution, and they don't actually need to be sampled independently from one another. The inner loop training data that you're learning from, it could have noisy labels, it could be weakly supervised, and not actually have exactly the label that you want. It could have domain shift that's different from what you see in the outer loop test examples. And this is cool, because it means that you can actually kind of metal learn for learning procedures that are more robust to noisy labels, that can learn from weak supervision, that can learn from other forms of supervision than typical forms of machine learning. And so in general, the inner and outer loop don't have to be the same. On this note, it is really important for the test set to be a well formed machine learning objective, because that's going to drive the meta learning process and optimize for that learning procedure. But it's really the inner loop that can be almost anything that you want in some sense, although you do need to have enough supervision and enough information in that inner loop for the model to be able to infer what it's supposed to be doing. Yeah? Is this true for black box? Yeah, this is true for black box too. So when you pass in your examples, your x train and y train, you could have noisy examples, like noisy labels that are being passed in. Or you could have supervision that's a little bit different than a one hot vector, for example. The reason I ask the question is, does the domain shift mean the scaling of the shifting that was done for the MAML? Yeah, so for domain shift, what I had in mind there is, you want to learn a classifier between-- maybe you want to learn an image classifier. And you want to be able to train an image classifier with sketches of things. So that if you give it like a sketch of a cat and a sketch of a dog, it learns a good cat versus dog classifier. Then D train could be sketches, and D test could be neutral images. And Black box methods could work well for that, or optimization based methods, or possibly things like Prototypical Networks. I guess, in that example I would guess that in general, black box methods could actually be a better choice in some of those scenarios, because it is very expressive in the learning procedures it can learn. But yeah, in general, like expect kind of all the algorithms that we've covered to be able to handle these kinds of things. Cool. So that's it for today. We talked about non-parametric few-shot learning algorithms. We compared the different approaches. This is the last lecture on meta learning algorithms. 
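As a small aside on that closing note about the inner and outer loops not having to match: one common way to exploit this is to build meta-training episodes whose support labels are deliberately corrupted while the query labels stay clean, so the meta-objective rewards learners that are robust to label noise. The sampling scheme and flip rate below are illustrative assumptions, not a specific published recipe.

```python
import numpy as np

def make_noisy_episode(x, y, k_support=5, q_query=15, flip_prob=0.2, rng=None):
    """Return (noisy support set, clean query set) for one meta-training task."""
    rng = rng or np.random.default_rng()
    idx = rng.permutation(len(x))
    s, q = idx[:k_support], idx[k_support:k_support + q_query]
    y_support = y[s].copy()
    flip = rng.random(k_support) < flip_prob
    y_support[flip] = rng.integers(0, y.max() + 1, flip.sum())  # corrupt some support labels
    return (x[s], y_support), (x[q], y[q])                      # inner loop sees noise; outer loop does not
```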
And next week we'll talk about unsupervised pretraining methods, and also go a little bit into how these relate to meta-learning algorithms, and how you can view some of them in a similar light. And then there are also some reminders about coursework on the right.
AI_LLM_Stanford_CS229
Statistics_for_Data_Science_Probability_and_Statistics_Statistics_Tutorial_PhD_Stanford.txt
Data science and machine learning is the hottest job of the 21st century, with an average salary of $120,000 per year. According to LinkedIn, the data science job profile is among the top five jobs in the entire world. Now, if you want to foray into the world of data science, you need to have good command over statistics, as it forms the base of all the data science concepts. With the help of statistics you can make predictions such as "New York will be hit with multiple tornadoes at the end of this month" or "the stock market is going to crash by this weekend." All of this sounds magical, doesn't it? Well, to be honest, it's just statistics and not magic, and you don't really need a crystal ball to see into the future. So, keeping the importance of statistics in mind, we have come up with this comprehensive course by Dr. Abhinanda Sarkar. Dr. Sarkar has his PhD in statistics from Stanford University, has taught applied mathematics at the Massachusetts Institute of Technology, has been on the research staff at IBM, has led quality, engineering development, and analytics functions at General Electric, and has co-founded OmiX Labs. We are uploading this high-quality classroom session by Dr. Sarkar from Great Learning's Business Analytics and Business Intelligence course, which has been ranked the number one analytics program consecutively for the past 4 years. This tutorial will be on YouTube for only a limited period of time so that learners across the world can have access to high-quality content, so please do subscribe to the Great Learning YouTube channel and share the video with your peers so that everyone can learn from the best. Now, without further delay, let's have a quick glance at the agenda. We'll start off by understanding the difference between statistics and machine learning. Then we'll go through the different types of statistics, which are descriptive, predictive, and prescriptive. After that, we'll understand the different types of data available. Going ahead, we'll understand the concepts of correlation and covariance comprehensively, following which we'll head on to probability and learn how to implement conditional probability with Bayes' theorem. And finally, we'll look at two types of probability distribution: the binomial distribution and the Poisson distribution. So let's start off with the session. What you now need to do is to be able to get the data to solve this problem. So therefore, the statistical way of thinking typically says: you formulate a problem, and then you get the data to solve that problem. The machine learning way of looking at things typically says: here is the data, tell me what that data is telling you. Many of my colleagues, and I myself, have run into this problem when going for interviews and so on. And so, sort of, statisticians say that we're not getting jobs out there, and so I go to people who are hiring and ask, why don't you hire statisticians? And I reach an interesting conclusion to this entire discussion, that sometimes, somewhere along the way, the interviewer who's interviewing the statistician for a data scientist job asks the question, "Here is my data. What can you say?" And the statistician answers with something like, "What do you want to know?" And the business guy says, "But that's why I want to hire you." And the statistician says, "But if you don't tell me what you want to know, how do I know what to tell you?" And this goes round and round; no one's happy about this entire process. So there's a difference in the way these two communities approach things. My job is not to resolve that, because
in the world that you will face you'll see a lot more of this kind of thinking than you'll see in this thinking because in this world the data is cheap and the question is expensive and you're paid for asking the right question in this world the question is cheap and the data is expensive you're paid for collecting the data so sometimes you will be in a situation where this is going to be important for example let's suppose you're trying to understand who's going to buy my product you're asking the question let's say that my products aren't selling and you want to find out why what will you do get what data so let's say that you're selling your I don't know what do you want to sell um you want to sell watches say so let's suppose people AR buy buying watches anymore which is a reality correct so you're a watch company who buys watches these the entire business model of a watches disappearing do you have watches some of you have he has actually a surprising number of you have maybe they do different things these days right that that seems like a very that that's a fitness device is not really a watch at all so so something like this was actually with my daughter at lunch today so she got something like this I'm not sure my my my my wife who's an entrepreneur runs their own company she came back from Delhi she came back with two of these I don't know where she picked them up so my my daughter the first thing she did she took one of this and she took this thing out because she thought of the whole wristband as an unnecessary idea I mean she that didn't occur to her I mean that's a separate thing that's a nice little beautiful red wristband Etc so a watch is a different thing but let's say that you're a watch company nobody's buying your watches or fewer people are buying your watches now how are you going to solve this problem or how you going to process this information what do you want to do what do you want to know theel okay but remember I'm asking this question also from an analytical perspective So when you say that to check the model and see what is not sold that assumes the whole data question so you so first order you'll see sales for whom and when and how how do you structure your data how will you how will you arrange the problem comp are your competitors sing okay that makes problems even harder because now you're going to look for data that isn't with you no no he's right he's right he's right maybe people are not buying watches because they're buying something else that's a reasonable thing but let's keep this problem simple let's consider only data that is within you we'll go outside not to worry but let's say that I'm looking at my data what data do I want to see and what questions do I want to ask of it let's so sales year by year types and then what comparisons do I want to do year on year year on year region region wise age with what purpose what question am I asking the data what section of customers are buying my product or what section of customers are buying my product compared to what what has changed in terms of the what are my biggest set of customers so that's so that's one thing who who are my biggest customers okay that's a very interesting question to ask except that that question implies that I needed to know who my biggest set of customers sort of could have been but it's a good point where is the bulk of my sales coming from then someone else say something about time you know is it going is it going down so you can look at things like saying that for which 
group of customers are my sales going down the most for example you could ask that I'm not saying that's the right question but that's a possible question to ask so let's suppose you follow that approach that I'm trying to understand I know that my are going down that's an obvious thing my CEO is telling my CFO is telling me and if I don't stop this we're all going to be out of a job correct the HMT factories in Bangalore are not in good shape one of them I think has become the income tax office right somewhere in the mishwar area so that's going to happen to me if I if I don't do this well so I know my sales are going down but I don't know by how much and particularly for whom so are there segments for which the sales are going down which segments are sales going down the most in which segments are they going down a little bit how fast are they going down I can question I can ask questions of that sort now what conclusions at the end of this do I want to be able to do how do I need to how do I want to use this information now for this you usually follow something like a three-step process and you may have seen this and this this covers both these sides and these words should be should be familiar to you to some extent the first is called descriptive the second is called predictive and the third is called prescriptive have these words been introduced to you at least in this at least you've read it I'm sure you you all Cruise the web and look at blogs and things like that nothing new in this I'm sure but I just want to set a context because it's going to talk a little bit of what we descriptive there's a c here so descriptive predictive and prescriptive now what is a descriptive problem the descriptive problem is a problem that says that describe for me where and I'm losing my sales and when I'm losing my sales it just describes the problem for me it tells me where the problem is it locates it it isolates it the predictive problem says look at this data and give me an idea as to what might happen or what would happen if I change this that or the other so let's suppose I do the following kind of idea I say that let me relate my sales to my prices let me try and understand that if I reduce my prices of my watches will more people buy them conversely if I make my Watches luxury items increase the price of a watch remove a low-end brand and make a watch an aspirational thing a decorative item a luxury item a brand item so that people wear a watch not to see the time but also as a Prestige statement as a fashion statement whatever it is if I do this then what will happen that's predictive I'm trying to predict something based on it I'm trying to say if something happens to let's say one part of my data what will happen to the other part of my data and then and based on that the doctor carries out a predictive analysis of you because I see this I now think you have this issue you have this thing going on let's say I'm diagnosing you as being pre-diabetic you're not yet diabetic but you're happily on the way to becoming a diabetic now because of this I now have to issue you a prescription I now should tell you what to do so there's a data that comes from you that data in some way is modeled using the domain knowledge that the doctor has and that model has translated into a into an action that action is designed to do something typically it's designed to do something actually fairly complicated the first action the doctor tries to do is number one let's say Do no harm the hypocritic oath first let 
me make sure that that I don't do any unnecessary harm to the patient then let me shall I say optimize his or her Welfare by making sure that I control the blood sugar the best and that I postpone the onset of diabetes as best as I can it's a complex optimization problem of some sort in a business also it's a complex optimization problem right I need to be able to sell more watches but I also need to be able to make money doing so I can increase my sales but if I increase my sales and my profits go down or my earnings Go Down based on the cost then that's a problem but at the same time if I try to run a profitable business and nobody buys my product that also is not a particularly good idea then there are other issues maybe in running the company I've got employees that I want to keep on the on on on the boards how do I run the company in such a way so that it needs that particular labor force I have finances to take care of I have loans to repay how do I get the cash flow in order to repay the bank loans that I have so the prescription has to meet lots and lots of requirements if you're building an autonomous vehicle you'll have situations saying the car has to do this but it also has to follow certain other rules for example if it sees someone crossing the road it should stop but it shouldn't stop very suddenly because if it stops very suddenly is going to hurt the car it's also probably going to hurt the driver so it can it should needs to stop but it shouldn't stop too suddenly it has to follow the rules of the road because otherwise the computer will simply say oh you want me to avoid the person crossing the road I'm just going to go behind the person and you going going to tell the car please don't do that because there's a house next to it you can't just sort of do that oh you didn't tell me that you just told me to avoid the person you didn't tell me about the house okay we'll put that as a constraint in our program and see how well it goes so prescription is problematic another simple way of doing it might be to say that description is how many centuries has Virat kohi scored look up Crick info and it'll give you the answer prediction might be try to guess how many centuries verat kohi will score in the World Cup prescription might be how do we get vat kohi to score more centuries in the world cup and as you can figure out you're going from a purely data BAS version of the problem into something that's only notionally about the data data will help you but there's a lot more than the data when it gets to that what we'll do today what we'll do now once I've finished talking to you is we'll we'll take a look at what descriptive what the the descriptive part of analytics is so the descriptive part of analytics is talking about simply describing the data without necessarily trying to build any prediction or any models into it simply telling you the way it is this is hard this is in itself not necessarily an easy thing to do because you need to know very well how to do that and what are the ways in which one looks at data this is skillful in itself so for example let's suppose that you are you're the I'm you're a do you go to the doctor and the the doctor is looking at you looking at your symptoms and the the doctor recommends a blood test now how does a doctor know what blood test to recommend based on the symptoms based on the symptoms but remember that potentially there's an enormous amount of information in you all of us as biological things carry an enormous amount of information 
in in our blood in our neurons in our genes or wherever you know if you're talking about Big Data as I said there's 2 meters inside every cell and there are a few billion neurons in your head you don't need to go far to see big data you are big data you are one walking example of big data we all are right now in that big data what little data does the doctor know to see that's a descriptive analytics problem the doctor is not doing any inference on it the doctor is not building a conclusion on it the doctor is not building an AI system on it but it's still a hard problem because given the vast amount of data that the that the that the doctor could potentially see the doctor needs to know that I I this is interesting to me and this is interesting to me and this is interesting to me and this is interesting to me in this particular way for example a blood test let's suppose that I draw I draw blood from you for a particular purpose let's say for blood sugar correct leaving aside the biology of how much blood etc etc to draw um just neither none of you I guess are a do any of you are doctors here any doctors in the room doctor so I can say whatever I want you won't understand what I'm saying no no but so but I'm old enough that this is a real problem for me so you have a you have a large amount of blood that's flowing through you we all do this blood carries nutrients what that does is that every time there is a nutrient inflow the blood looks a little different so if you eat your blood looks a little different because that's your blood's job the blood's job is to carry nutrients if you want to run you want to walk if I'm walking around my legs are getting energy from somewhere the energy need from my legs is being carried from the blood and it is being generated through inputs that I get some of it because of the air that I breathe from where it gets the oxygen to burn things some from the food that I've eaten the nice lunch that that I had where it gets the calories to do that so therefore based on what my energy requirements are and based on what I've eaten my blood is not constant my blood content is what is known as a random variable what's random about it because it looks a different it looks really different all the time your blood at 12:00 is going to look a little different 12:00 at midnight is going to look a little little different from 12:00 at noon because it's doing something a little different the same phenomena is there everywhere if I were to for example measure the temperature of the oil in your car or in your two- wheeler what do you think that temperature will be it depends first of all it depends on whether the car is running or not it depends on whether it has run or not it depends on how much oil there is it depends on how you drive it depends on temperature rest of the car the answer is it depends and the same is true for your bodily fluids so this becomes a slight problem because if it is random then from a random quantity how do I conclude what your blood sugar is how does a doctor reach that reach a conclusion of any sort average of what average of particular duration so there are multiple averages that you can get first of all there's a question of saying that if I take blood from you how is the blood usually collected so the fotus comes and usually takes an injection from one point let's say by some strange accident this is Thoroughly in advised but let's say by say some strange accident two different people are drawing blood from two hands at the same time do not 
try this at home but let's suppose they do do this will they get the same blood IDE ideally yes it depends what time same right at the same time as I said do not do this at home but at the same time you're getting two different samples there's not just a question of time your blood is not going to look the same even within your body at one period of time even from the left hand even from the right hand exactly the same period of time it's not going to look the same there is a slight there is a slight problem that somehow a little obvious that that you know your your your heart is in the middle your heart is actually in the middle but it beats to the left why because the the heart is what the heart is both a pump and a suction device the pump side is on the left the suction side is on the right so your blood pose out from your left side and it goes back in on the right side so there's a slight asymmetry in your body between left and right one side tends to go out the other side tends to come in it's slight it mixes up all in the middle so one sampling idea is that I'm taking a sample of blood from you and it's just one example the second question is as you were saying it's a question of time so you can average over time if you average over time this is really easier you can say I'm going to do this maybe before eating after eating reg after eating so for those of you who have blood pressure test for example oh sorry blood sugar test once they ask you to do it fasting and then they ask you to do it some 2 hours after eating do they tell you what to eat sometimes with glucose okay sometimes they don't they they sort of say that based on what you naturally eat let me figure out what you are processing they expect you to eat a typical meal and not go and eat you know large amounts of KFC if that is is not what you normally eat just eat what you normally eat a vegetarian eat normal vegetan eat normal food and then figure it out let's see how how good your body is at trying it out so it's saying do a normal thing and I'll take another normal sample then one of you said something very interesting they average things out now what does averaging do neutralize that's an interesting word to use neutralizes things Prov a general provide context provide context context of what context there a good point so so what is the doctor trying to do so let's let's simplify things a little bit and say that let's suppose that the doctor has a threshold let's give it a number let's say the doctor says that if your blood sugar is above 140 I'm going to do something if your blood sugar is say less than 140 I'm not going to do anything I don't know whether this is the right number or not but just let's make it up now the doctor is going to see from you a number it may be a single reading it may be an average it may be a number of things how is the doctor going to translate what they see from you and compare it to the 140 how is that comparison going to be made maximum number of people Maxim so let's suppose I have just one reading so let's suppose that I have one reading and that reading oh I don't know is 135 I've just got one reading from you 135 what does that tell me no test required one AR one argument is it's simple let's take a very machine learning computer science view to this 135 is less than 140 ahuh so now he's saying yeah but you know what let's say that 135 and another guy who say 14 120 there should be something that says that this 135 is a little bit more trouble than 120 closer to the threshold as he 
says so maybe in other words this threshold isn't quite as as simple as I thought it was so I can solve this problem in one of two ways one way to do this is to make this 140 a little range this is something called fuzzy logic right in other words the question you're asking becomes fuzzy not as crisp you're not fiddling with the data you're fiddling with the boundary you're fiddling with the standard the other way to do that is to create a little uncertainty or create a little plus minus around the reading itself around 135 saying that if this is 135 and let's suppose that I go and get another reading and the second reading that I get is say 130 and the third reading that I get on the day after that is say 132 and I'll say Okay seems to be fine I might say but let's suppose after 135 the guy goes and I do my usual thing and I measure it again and this time it comes out as 157 and I do it again and it comes out as 128 and I do it again it comes out to be 152 so in both cases 135 was probably a good number but in One cases 135 was wearing very little and the other cases 135 was wearing a lot which gives me different ideas as to how to process it so what descriptive analytics talks about essentially is trying to understand understand certain things about data that helps me get to conclusions of this kind a little more rigorously now to be able to quantify what these plus minuses are is going to take a take us a little bit of time and we will not get there this residency we'll get their next residency to say that in order to in order to say it's not 135 or 135 plus minus something that question now needs to be answered but to do that I need to have two particular instruments at my disposal one instrument that I need to have at my disposal is to be able to know what to measure I need to say what does an error mean I need a statement that says that maybe I'm 95% confident that something is happening I'm 95% sure that this is below 140 I need a way to express it and that is the language of probability so what we will do tomorrow is we'll introduce a little bit of the language of probability it'll be sort of unrelated to what we're doing today so there's going to be a little bit of a disconnect but what we're going to do is we're going to create two sets of instruments one instrument that is purely descriptive in nature and one set of instruments which is purely mathematical in nature so that I can put a mathematical statement on top of a description and the reason I need to do that is because the pure description is not helping me solve the problem that I've set itself that I have set so therefore what will happen is you will see in certain medical tests you will not see points like this you will see intervals your number should be between this and this your chestal number your HDL whatever should be between this and this you won't see a number you'll see a range the typ typifies the variation and in certain cases you will see thresholds or maybe the it's just a lower limit or an upper limit but you'll also see a recommendation that says Please do this again in other words I'm going to compare I can't compare one number to to one number one number to one number is typically a very bad place for any kind of analyst to be in because you got no idea of which is error prone and where the error is so therefore what happens is you try to improve one of those numbers and so either by filling around with the range or by getting more measurements and you'll do that and you'll see that as we go along a 
little later so this is a context for for what we have uh in terms of terms of data let's see so this is a set of files that has been loaded uh it's a very standard set of files it's not mine to be honest uh I just want to make sure that I'm doing what I'm supposed to be doing so so for reasons that are more to do with security my understanding The Notebook will not access your drives so keep it on your desktop and not complicate life so uh and there is this notebook it's called cardio goodness of good the word statistics refers to the idea that this is comes from the statistical way of thinking which as I said opposed to the machine learning we have thinking is tends to be a little more problem First Data next which means we worry about things like hypothesis and populations and sampling and questions like that and the descriptive part refers to the fact that it is not doing any inference it is not predicting anything it's not prescrib anything it is simply telling you what is there with respect to certain questions that you might POS possibly ask of it now what is the context to the case the market research team at a company's assigned the task to identify the profile of the typical customer for each treadmill product offered by the company the market research team decides to investigate whether there are differences across product line with respect to customer characteristics exactly what you guys were suggesting that I should do with respect to the watch understand who does what entirely logical the team decides to collect data on individuals who purchase a treadmill at a particular store during the past 3 months like watches they're now collect looking at data for treadmills and that is in the file in the CSV file so what you should have is you should have a CSV file in the same um directory and through the magic of python you don't have to worry about things like path before we get there remember because we're looking at this statistically before we get the data we should have a rough ideas as to what we're trying to do and so they say that here are the kinds of data that we are looking at the kinds of products the gender the age in years education years relationship status annual household income average number of times a customer plans to use a treadmill each week average noriz a customer expects to run walk each week on a self-rated Fitness scale on 1 to five where one is in poor shape and five is in excellent shape some of this is data some of this is opinion right some of this is opinion masquerading as data like for example number of times a customer plans to use a treadmill hopeful wishful thinking it's still data you're asking someone how many times will you use it her Rose daily no problem seven times a week oh we'll see huh but it's still data it's come from somewhere so so what has happened the way to think about this is to say that I want to understand a certain something and the certain some certain something has to do with the characteristics of customer uh customer characteristics and to do this you can then use either you can either take let's say a marketing point of view who buys you can also take a product engineering kind of view what sells in other words what kind of product should I make Etc in business as you probably know for those of you are any of you entrepreneurs one hand up there one hand up the closet entrepreneurs from what I could figure out right sometimes it's unclear what that word means in other words you think you are or you're not confident 
enough to call yourself one, or you're doing that in IT space. If you're an entrepreneur, for example in physical product space or even in software space, one of the things you often think about is what's called product-market fit, which is: you're making something, so how do you match between what you can make and what people will buy? Because if you make something that people do not buy, that doesn't make any sense; on the other hand, if you identify what people buy and you can't make it, that also doesn't make too much sense. So the conclusions that we will draw on this, we will not draw today, but the purpose is to be able to go towards conclusions of that kind: either isolate products or isolate customers, and try and figure out what they tell us. Pandas generally has a fair amount of statistics built into it; that's what it was originally built for. NumPy is something that was built more for mathematical problems than anything else, so some of the mathematical algorithms that are needed are there. There are other statistical plots in matplotlib or seaborn, and many other things that you have seen already. Python is still figuring out how to arrange these libraries well enough; the, shall we say, programming bias sometimes shows through in the libraries. So I for one do not remotely know this well enough to know what to import up front, but in a good session you know what to import up front, and you do all this up front so you don't get stuck with what you want to do. The naming is up to you: if you like the names as they are, then that's fine, or you may want a standard set of names. When you read the data set, if this is in the right path, just this will work: read_csv. It's usually smart enough to convert Excel forms into CSV; in other words, if you have this as an XLS and things like that, it's usually smart enough, but if it isn't, then just go in and save the XLS file as a CSV file and operate that way, in case it doesn't do it on its own. More often than not, what you see is that when Jupyter sees it, it'll treat any Excel file as a CSV file; or go and make the change yourself, or you can use other read statements as well. You can change functions inside it, and you can figure out how much to head. What this tells you is: there's the head and the tail of the data. This is simply to give you a visualization of what the data is. It gives a sense of what variables are available and what kinds of variables they are; we'll see a little bit of a summary after this. So, for example, some of these are numbers. Income: what is income? Income is annual household income; that's a number. Some, for example, let's say gender: M, male, female. This is a categorical variable. This is not entered as a number; it's entered as a text field. If you are in Excel, for example, right at the top, if you go in and look, it will tell you how many distinct entries there are, how many distinct settings there are. So usually what happens, right at the beginning, in a data frame like this: if a data frame is created, when it gets created the software knows whether it is talking about a number or whether it is talking about categories. There are certain challenges to that, and you can see one particular challenge here. What does this 180 mean? Counts. Why do you think there are so many decimal places that come here, 14 years, 16 years, why is it showing all these zeros? Yes: it does this because it sees other numbers where those decimal places are needed.
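A minimal version of the workflow being described, assuming the case-study file is named CardioGoodFitness.csv (adjust the name and path to whatever file you were given); the describe() call previews the summary discussed next.

```python
import pandas as pd

# Load the treadmill-customer data; keeping the CSV next to the notebook avoids path issues.
mydata = pd.read_csv("CardioGoodFitness.csv")

mydata.head()                    # first few rows, to see the variables and their types
mydata.tail()                    # last few rows
mydata.describe(include="all")   # count/unique/top/freq for text columns such as Gender,
                                 # mean/std/min/quartiles/max for numeric ones such as Age;
                                 # statistics that don't apply show up as NaN
mydata.describe().round(2)       # rounding tames the long runs of decimal places
```

In a notebook, each of these expressions displays on its own; in a plain script you would wrap them in print().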
Coming back to those decimal places: what any software typically does is, when it sees data, it sort of says, at what granularity do I need to store the data? Sometimes this is driven by your computer, 64-bit, 32-bit, and things like that, but what it means is that the data is stored in the data frame to a certain number of digits. Now usually you don't see that; you'll see it in this way. But sometimes, for example when you say include equal to "all" and you ask for a full description, the data comes out in this slightly irritating way, because of something here, because of, let's say, the income field or any of that. Now when it looks at the descriptions of this, what is the description that it is reporting, and how does it choose to report out the description in this particular situation? So let's take a little bit of a closer look at this. One thing here, look at the way it's done here: count, unique, top, frequency, and then there are certain things here: mean, standard deviation, minimum, 25%, 50%, 75%, and max. When it sees a variable like gender, it reports out lots and lots of NaN. What does that tell you right off the bat? It can't do that, which means it's not a number. In other words, if you ask me to find the mean of something and you're giving me male and female as inputs, I don't know what to do, which is an entirely reasonable stand to take for any reasonable algorithm. It requires another kind of description for it to work. But the problem with this syntax is that it's asking for the same description for all of them, whether it's in significant digits, whether it's in columns, and so on; so it's chosen this description and it says that's all that I'm going to give you. But where it makes sense, let's say for example I look at age: now for age I've got 180 observations, and it is calculating certain descriptions for it, correct? So what are the descriptions that it is calculating? Let's look at these. It is calculating a description like, say, the minimum, which is what, 18, and the maximum, which is 50. These are easy to understand. Then let's look at something really interesting. Suppose I want to report one number, one representative age, for this data set. This is like asking the question, how do I get a representative blood sugar number for you? I can give you a minimum and a maximum, but to get the minimum and the maximum I need to draw blood many, many times from you. But let's suppose I want one representative number for you: somebody asks you what your blood sugar is, and you want to give them one number. Similarly, somebody's looking at this data and asks the question: give me a representative age. How old is your typical user, or what age do you want to build it for? You're even asking, let's say, a product question: you're a product designer, and the product designer is building a treadmill. Now how do you design a product, those of you who are engineers? Based on the weight. Very good, but what weight? Whose weight? The user's; what is the weight of the user? He's got a good point: as a design engineer, I need to know what weight will be on that treadmill. Now what is the answer to that question? The max? Users who visit the gym? So there's a question of saying that if I want to measure a variable by one number, how should I even frame that question? What makes sense? What is the one number: the average? No? The max? In this particular case you might argue the max is the right number, because I want to be able to say that if I can support him, I can support anyone.
downside to that: I've now engineered that product — or shall I say over-engineered that product. I'm sorry — factor of safety, a factor of safety, okay. All right, so let's suppose that you are doing this for a mattress. You all sleep on mattresses — we're all relatively wealthy, based on the fact that we're here, so we probably sleep on mattresses; not everyone is fortunate enough to — but let's suppose you do sleep on a mattress. How much weight should that mattress be designed to bear? If you over-engineer it, what happens is that for a reasonable weight — say a weight a lot below the design limit — that mattress is not going to sink. Say you design it for 100 kilos; now if you weigh 50 or 60 kilos, that mattress is not going to sink for you, because it's built to feel comfortable for someone who is 100 kilos, and for someone who's 50 kilos you'll just bounce on it — you won't feel the softness or silkiness or whatever it is you want from the mattress. It won't work. So what to do? That's a hard problem. It's a description, but it's a hard problem: who do I engineer for? And so people have different ways of representing the data. Here's one version of it — this is what is called a five-number summary. I report the minimum, the 25% point, the 50% point, the 75% point, and the maximum, variable by variable; I report five numbers. I report the lowest. What does 25% mean? 25% of my data set, or of the people, are younger than 24. The youngest is 18; 25% — a quarter of them — are between 18 and 24, a quarter are between 24 and 26, a quarter between 26 and 33, and a quarter between 33 and 50. This is what is known as a distribution. Statisticians love distributions: they capture the variability in the data, and they do all kinds of things with it. So I'm going to draw the typical shape of a distribution — we'll make more sense of it later on. This is a theoretical distribution: it has a minimum, a maximum, a 25% point, a 50% point, a 75% point. In terms of probabilities, there's 25% here, 25% here, 25% here, 25% here. If you want to think in terms of pure description, this is not a probability, it's just a proportion. If you want to think in terms of probabilities, what it means is that out of 180 people, if I draw one person at random, there's a 25% chance that that person's age is below 24, correct? We'll do probabilities tomorrow; for now this is a description. What this description does is give you an idea of what value to use in which situation. So, for example, you could say that I'm going to use 26 as my representative age. If I do that, what is the logic I'm using? This 50% point, so to speak — this is called the median, and we'll see it. The median is the age of the average person: sort everyone, take the middle person, and ask how old they are. I could also ask for the average age of a person, which is the mean, which is (1/n)(x1 + x2 + ... + xn). Now this is algebra; what you do is put n equal to 180 — the first age, the second age, the third age, up to the 180th: (1/180)(age 1 + age 2 + ... + age 180). This is called the mean, and this value is about 28.79 — the average age is about 28.8 years — but the age of the average person is 26. Yes — question?
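The same numbers can be pulled straight out of the frame; a minimal sketch, assuming the frame from earlier and that the age column is called Age as in this data set:

```python
# Full summary table, categorical columns included (they get NaN for the numeric rows).
print(my_data.describe(include="all"))

age = my_data["Age"]
print(age.mean())                                  # average age, about 28.79
print(age.median())                                # age of the "middle" person, 26
print(age.quantile([0, 0.25, 0.5, 0.75, 1.0]))     # the five-number summary
print(age.quantile(0.75) - age.quantile(0.25))     # inter-quartile range (discussed again later)
print(age.max() - age.min())                       # plain range, max minus min
```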
can you please repeat the median and the the difference between the two describe what is the age of the person so I described the median as the age of the average person and I describe the mean as the average age of a person now he's looking at me like saying you have to be kidding me right that's confusing I admit to it the easy way to understand it could be this what is the mean add them all up divide by how many there are what is the median sort them from the smallest to the largest pick off the middle if they're an even number what do you do you take the average of the two middle ones if they're the same it'll be the same number if they're not it'll be a number between them so sometimes the median may show up with a 0. five or something like that for that reason if there is an ing counts but there are an even number of counts now which do you think is better dep you're giving the right answer it depends you'll figure out that I like that answer they both make sense they both make sense it depends on what what context you're going to use it for in certain case yes you said that Med basically the mean of the it's it's the it's the age of the average person it's a reading from the average person so what is the parameter we are saying it is average person how we are getting that aage okay if you're talking in terms of parameters so he used an interesting term he's saying what is the parameter I'm after parameter is an interesting word parameter refers to something what generally in a population it's an unknown thing that I'm trying to get after for example blood sugar is a parameter it exists but I don't know it I'm trying to get my handle on it correct so if I'm thinking in terms of of of parameters then these are different parameters so let's let's look at a distribution here I'm not sure whether this will pick up things I hope so so the median is the is the median is a parameter such that on this side I have 50% and on this side I have 50% this is the median the mean is what is called the first moment what that means is think of this as a plate of metal and I want to balance it on something where do I put my finger so that it balances it is the CG of the data the center of gravity of the data you can understand the difference between these two now if for example I push the data out to the right what happens to the median nothing happens to the median because the 50/50 split Remains the Same but if I push the data out to the right the mean will change it'll move to the right your lever the lever principle right if there's more weight on one side I have to move my finger in order to counterbalance that weight so these are two different parameters if the distribution for example is what is called symmetric symmetric means it looks the same on the left as on the right then these two will equal because the idea of going half to the left and half to the right will be the same as the idea of where do I balance because the left is equal to the right so when the mean is not equal to the median that's a signal that the left is not equal to the right and when the mean is a little more than the median it says that there is some data that has been pushed to the right and that should be something that you can guess here because the mean and the median to some extent are what 24 26 Etc the lowest is 18 that's about six 6 years 8 years less than that but what is the maximum 50 that's 25 years beyond the data is pushed to the right a little bit instead of saying push to the right the right technical 
term is right skewed — there are, shall I say, more people who are far from average on the older side than on the younger side. There was a hand up somewhere: I was just confused with the statement that the median would not move, but then you explained it — yes. So one reason the median often doesn't move is that it is not very sensitive to outliers. Let's suppose, for example, we look at us, and we ask ourselves what our mean income or median income is. Each of us makes a certain amount of money; we can sort that and write it down. Now let's suppose that Mr. Mukesh Ambani walks into the room. What is going to happen to these numbers? The mean is going to go up, right — he alone probably makes a very large multiple of all our incomes put together; possibly, I don't know how much you make, I know how much I make. But what's going to happen to the median? It's going to stay almost the same. The typical person may move by at most half a position, because what is the typical person going to be? The typical person is going to be an actual individual in the room, or maybe an average of two individuals in the room, and that person is not going to change. Yes — that's one conclusion we can draw from this. There are other plots below which will also show the same thing. You're not able to draw that conclusion? Good logical reason — I haven't shown you the full data; we'll see the histogram, we'll do that, so hold on to that question. The conclusion that was drawn: there are two things to see here. One is, if I simply look at this without seeing any more graphics, where is the middle of the data from a median perspective? At 26, correct. Now from 26, look at the difference between 26 and the smallest value, 18. Between 18 and 26 that's 8 years, and this 8 years contains 90 observations, because there are 180 in total. What is on the opposite side — 26 to 50? That's how many years? 24 years. And this 24 years contains how many observations? The same 90. So there are 90 observations between 18 and 26, and 90 observations between 26 and 50. If I were to draw a picture, what would it look like? Yes, exactly as you're drawing it. This, by definition, is called right skewed. This is a problem the terminology has: as a word, does this mean it's left skewed or right skewed? It's called right skewed — more data to the right. Sorry, "more data" is a dangerous phrase; no, it's the same number of observations. I'll say the data is pushed to the right — more variation on the right side is probably a safer way of putting it. Yes. So skewness is measured in various ways. One typical measure of skewness is, for example, mean minus median: if mean minus median is positive, it usually corresponds to right skewness; if mean minus median is negative, it usually corresponds to left skewness. This is a statistical rule of thumb, but sometimes it is used as a definition for skewness — there are many definitions of skewness. Skewed data sometimes causes difficulties in analysis, because the idea of variation changes: variation on one side means something really different from variation on the other side. By the way, what's happening with you with respect to things like books? Are you getting books, are you not getting books, do you have no idea what the books are? You got one book, which is the statistics book. Okay, I'll take a look at that book later. So this book, right — okay, show me the book. Okay, comment
one very nice book comment two not a python book right that doesn't make it a bad book so if you're looking for help on how to code things up this is not the right book get a book like think stats or something like that but if you want to understand the statistics side to it it's an excellent book so everything that I'm talking about is going to be here I might talk about which chapters and things like that at some point and I might talk about how to use this in the book so for example at the back of this book there are lots in there are tables there are tables at the back of this book which we'll learn how to use and then I'll try to convince you that you shouldn't use them but remember many of these methods are done in ways in which either you don't have access to computers or if you do have access to computers you don't have them shall we say at runtime in other words when I want to run the application on it I can build a model using a computer but I can't run it within one the runtime environment for statistics is often done when there are no computers around the build environment can include computers but the runtime environment cannot a lot of Statistics is done under that kind of situation even probability yes very much so very much so okay so definitions of skewness and things like that do it do it in the way you usually use a book which means you go to the index and see if the word is there and then you go back and figure it out and it'll give you some ideas as to how that works it's it's a nice book it's one of the best best books that you have in business statistics but it's not necessarily a book that will tell you how to code things up that is not a deficiency of the book not every book can do things of that sort there are other books around that will tell you how to quote things up but will not explain what you are doing it's important to know what you are doing it's also important to know why you're doing it but books can't be written with often everything in mind yes can you suggest some book that which tell us how we can think that way the thinking is here I think this is good for thinking I I would absolutely recommend this book on the thinking side because the problem is lies that in which situation what we need to yes and that answer I think is very very good here where you won't get is it'll say do this and it won't give you the python syntax to do it that that will not be here so if you can solve that problem through some other means I used to have a colleague in in corporate life who had a very big sticker on his board it said Google search is not research right now nobody agrees with him anymore so I suppose that when in doubt you do what normal Homo sapiens do today which is you Google for an answer so one possibility is that you ex you you you understand something from a book such as this and if you want to understand the syntax just Google for the term say python that term whatever it'll probably give you the code things are very well organized these days uh there's also the question and I should give you a very slight warning here not to not to discourage you from anything but in the next 9 months or thereabouts the the duration of your program there's going to be a fair amount of material that will be thrown at you correct the look and feel will sometimes be like what we would what we would often call it MIT as drinking from a fire horse you can if you want to but you'll get very wet so therefore pick your battle if you want to understand the statistic side of 
it please please go into the depth of it but if you try to get into equal depth on every topic that you want to learn that will take up a lot of your professional time now the reason we do the statistics for first one it's it's a little easier from a computational perspective although harder from a conceptual perspective so we begin it this way but hold on to that idea and then as you keep going know see if this is something that you want to learn more on and if you can you're welcome just write to us or let us know or let anyone know that is just coming let her know and we'll get the references to you but if you want to for say for the first few residencies please read the book and see what happens if there are doubts yes but it's it's a well-written book it's in it's it's instructor is one of our colleagues here you know if you want we can also help explain things so this is the summary what did the summary tell you the summary gave you what's called the five numbers five numbers that help you describe the data minimum 25 50 75 Max we'll see another graphical description of this it also described for you a mean there is also another number here and this is this number is indicated by the letters STD STD refers to standard deviation SD refers to standard deviation and what is the formula for a standard deviation STD is equal to the square root of little bit of a mess but Two Steps step one calculate the average step two take the distance from the average for every observation ask the question how far is every data point from The Middle if it is very far from the middle say that the deviation is more if it is not far from the middle say that the deviation is less deviation being used as a synonym for variation I'm talking about variation variation can be more or variation can be less more than the average less than the average if someone is much older than average there's variation if someone is much younger than average there is variation so therefore both of these are variation so what I do is when I take the difference from the average I Square it so more than xar becomes positive less than xar also becomes positive then I add it up and I average it there's a small question as to why it is n minus one and that is because I'm I'm Divi I'm taking a difference from an observation that is already taken from the data now if I've squared when I have squared my original unit was in age when I have squared this has become age squared so I take the square root in order to get my measure back into the scale of years so the standard deviation is a measure of how spread a typical observation is from the average it is a standard deviation where a deviation is how far from the average you are and because of this squaring you need to work with a square root in in in sort of modern machine learning people sometimes use something called a mean absolute deviation mad mad very optimistically called so what mad is is if you don't take a square you take an absolute value and then you do not have a square root outside it and that is sometimes used as a measure of how much variability there is so why it is squ why is it why we do just we Square it because we want to look at both positive and negative deviations if I didn't Square it what would happen is it would cancel out what was the word that one of you used neutralize right I love that term your positive deviations would neutralize your negative [Music] deviations yes seen the posi and thetive this number is going to be positive if say X1 one so let's 
look at the first number here. If I look at the head command — when I ran head, what did it give me? The first few observations. Now this is an 18-year-old; the data is probably sorted by age. This is an 18-year-old, correct? I'm trying to explain the variability of this data with respect to this 18-year-old. Why is there variation? Because this 18 is not the same as 28, and 18 is less than 28. So what I want to do is compute 18 minus 28.7; what I'm interested in is this 10, this roughly 10-year difference between the two. Now the oldest person in this data set is how old? 50. When I get to that row, the 50 will also differ from the 28 by about 22 years. So I'm interested in that 10 and I'm interested in the 22; I'm not interested in a minus 10 or a minus 22. Can I do that? I can: in other words, I can represent 18 minus 28 as 10 and 28 minus 50 as 22, and that is this: (1/(n-1)) (|x1 - x̄| + ... + |xn - x̄|) — that is this, with n minus one — and this is what is called the mean absolute deviation, and many machine learning algorithms use it. You are correct that in today's world this is simpler. Now, when standard deviations first came up, this was actually a little harder, and people did argue about it — I think about 150 years ago, maybe more, I forget my history. There were two famous mathematicians, one named Gauss and one named Laplace, who argued about which to use. Laplace said you should use the absolute deviation, and Gauss said you should use the square. The reason Gauss won was simply that Gauss found it easier to do calculations. Why is the square easier to calculate with? Because Newton had come up with calculus a century or so before that. So, for example, suppose you want to minimize variability, which is something we often need to do in analytics; that means you need to minimize expressions involving the standard deviation, which means you need to differentiate this function. The square function is differentiable, so you can minimize it using calculus; the absolute value is not. So what happened was that Gauss could do the calculations, Laplace could not, Laplace lost, and Gauss won the definition of the standard deviation. We haven't much used the 25% or the 75% — okay, why do we not do that? Today this entire argument makes no sense, because today how do we minimize anything? With a computer program. You don't use any calculus; you run fmin or something of that sort — you basically run a program to do it. So you can now do calculations equally well with either one, and what is happening is that Laplace's way of thinking is being used more and more, because it is a lot less sensitive to outliers. What the square does is: if a point is far away, the 22 squares to 484 or so, which is a large number; so the standard deviation is often driven by very large deviations — the larger the deviation, the more it blows up. Therefore it is often criticized: if you read the finance literature, for example, there's a guy called Nassim Taleb who writes books called The Black Swan and Fooled by Randomness, where he criticizes left and right the standard deviation as a measure of anything. So today this argument doesn't make a great deal of sense, and where in practice something like the absolute deviation makes sense, it's often used. A lot of this is done the way it is for historical reasons — it looks this way because of a certain historical definition.
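To make the "today you just run a program to minimize it" point concrete, here is an illustrative sketch (not from the lecture's notebook): the standard deviation and the mean absolute deviation computed directly, and a numerical minimization showing that squared deviations are minimized at the mean while absolute deviations are minimized at the median. The column name is an assumption, and the MAD here uses a plain 1/n divisor rather than the lecture's n-1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

age = my_data["Age"].to_numpy()          # column name assumed

std = age.std(ddof=1)                    # Gauss's choice: root mean squared deviation, n-1 divisor
mad = np.mean(np.abs(age - age.mean()))  # Laplace's choice: mean absolute deviation
print(std, mad)

# Minimizing the sum of squared deviations lands on the mean ...
sq = minimize_scalar(lambda c: np.sum((age - c) ** 2))
# ... while minimizing the sum of absolute deviations lands on the median.
ab = minimize_scalar(lambda c: np.sum(np.abs(age - c)))
print(sq.x, age.mean())       # essentially equal
print(ab.x, np.median(age))   # essentially equal
```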
definition and then it's not it's hard to change so today in in you know centuries after gaus sad people like me are trying to explain it having trouble doing it because there's a logic to it right I mean and that logic doesn't hold at all anymore now yes in simple terms is that 18 how far generally is that from the medum how far how far on the average is an observation from the average confusing statement again he's going to be unhappy but how far on the average is an observation from the average if that answer is zero that means everything is at the average but you asking the question how far from the average is it is an observation on the average if I take your blood pressure how far from your average blood pressure is this reading if this is exactly equal then I don't need to worry about variability every time I measure blood pressure I'll see the same thing what is your average bank balance don't tell me that but but but you know what I mean right you have an average bank balance your your bank account manager or your bank actually tracks this what your average bank balance is but you're not actually your B balance is almost never or very very rarely equal to your actual average bank balance it's more and it's less how much more how much less is something that the bank is also interested in in order to try and figure out you know how much of your money so to speak to get out there because the bank is going to make money by lending it out correct but when it lends it out it can't give it to you so it makes an assessment as to how much money it's I don't want to get into Finance now but you get the drift H so therefore then it is a measure of that it is not the only measure of that so for example here's another measure so you remember this 25 number and the 75 number that you're asking about let's say that I calculate a number that looks like this let's say 33 minus let's say 33 is a 75% Point minus 24 so 33 - 24 let's say this is my 24 and this is my 33 between this how much data lies 50% why because this is 25% and this is 25% this now contains 50% this is sometimes called the inter quartile range inter quartile range big word right now why is it called an inter quartile range the reason is because sometimes this is called Q3 and this is called q1 Q3 stands for upper quartile you can understand quartile quarter so upper quarter and this is the lower quartile and the difference between the upper quartile and the lower quartile is sometimes called the inter quartile why is it called the range because what is the actual range of the data the range of the data in this particular case is 50 minus 18 and 50us 18 which is your max minus your Min this is simp sometimes simply called the range range is maximum minus minimum inter quartile range is upper quartile minus lower quartile and these measures are used they do see certain uses based on certain applications you can see certain advantages to this for example let's suppose that I calculate my five point summary with my five point summary I can now give you a measure of location which is my median and I can give you two measures of dispersion which is my interquartile range and my range so those five five numbers have now been Twisted to give me a summary number which is the median and a range number interestingly I can also draw mental conclusions from that for example I can draw conclusions from these five numbers in the following way 24 and 33 half my customers are between 22 and 24 and 33 so if I want to deal with half my customers I 
need to be able to deal with a range of about 9 years within this 9 years is all that I'm interested to get this right so if I'm building my if I'm building my my my machine I'm going to make sure let's say that the 33y old is okay with this and the 24 year old is okay with this will the 50-year-old be okay with this no may not be but if I want to make the 50-year-old okay with this I'll have trouble with the 18-year-old so I can do a lot with even these five numbers we'll see more descriptive statistics as we go along by the way this is only for age I can do this for you know usage I can do this for Fitness I can do this for income I can do this for Miles income is interesting here's the median income $50,000 and the mean income about $53,000 if you see income in almost all real cases the mean income is going to be more than the median income the per capita income of India is more than the income of the typical Indian see what does this command do if I say my data. info what this is doing is my data first of all is a data frame that I created just to review I read the PDF file this way now this is a describe and this here is info now describe and info in the English language are similar things description and information and this is interpreted in the software is two completely different things information is like your variable setting it's like your integer field your real field it's setting like that it's giving you information on the data as data the word data means different things to different people to a statistician data means what to a statistician data means a number to an IT professional what does data mean bytes information you know I've lost my data I don't particularly care what the data is I've lost my data so this is that information it tells you tells you about the data it's an object it's a description it's a 64-bit stored integer it's an object so it tells you about numeric categorical it tells you about the kind of data that's available nonnull fields in other words there are objects in the field Etc there are so many integer types which are stored at 64 because this computer is probably capable at 64 and there are three categorical variables this is a this is shall we say a data object summary of the of what is there in that data frame not a statistical summary useful in its own way particularly if you're processing it and storing it for those of you who are going to go into Data um sort of curation like carers this kind of a database is a nightmare because typically what happens is when you store real data you in in addition to data you often store what's called a data dictionary sometimes that's referred to as metadata data about the data because simply storing a bunch of numbers is not enough you have to say what the numbers are about this adds a layer of complexity to the metadata you now have to store not only what the varable is about but what kind of a variable it is so many professional organizations say is that archival data should never be a mixture of both numerical and categorical objects and they pay a price for that numerical things should become categorical or categorical Things become numerical but what happens is if you are storing large volumes of It And archiving it and making it available for people who have not seen it before it sometimes gets convenient so therefore fees like this are often useful to see how big a problem you have right now I want to plot a few things to plot you can plot anything Sion I think is coming a little later but this 
plot this is from M plot library and it is plotting through a command called hist hist means histogram which you've already seen you've covered his histograms right I think we seen histograms so this is a histogram now histogram as a syntax has bin sizes and figure sizes so what you can do is you can play around with these and see the differences in what this histogram does but there's a certain default that shows up and that default is quite good and here is a histogram distribution of the age this is not a set of numbers this is a picture this is a picture what does this picture have this picture has a set of bins and it has a set of counts within each bins between these two numbers between say 10 and whatever this is let's say 22 or thereabouts I have a count of let's say 17 so it gives a count and it does this by getting a sense of how many bins there are and plotting this shape it's a little bit of an arc to write a histogram program there there's a there's a python book out there I think things chatter one of it in which sort of the first onethird of the book is basically how to write a histogram code It's a Wonderful book but because it treats this example it got terrible reviews reviewer said why do I want to learn how to code a histogram and the book's author is I'm teaching you how to write a code a histogram is an is an example of how to do that and I tend to agree if you want to test yourself of your understanding of data and your understanding of any programming language and any visualization language quote a histogram in it and have fun so so it's it's a nice Challenge from many perspectives the data challenge the language challenge the visualization challenge all of that yes so you said Aral data should be numerical many companies do that that they that they want archival dat data to be have only one data form only one format why is that so because as I said when you store data how do you store it let's say that you've generated an analysis the analysis is done correct and you've decided not to destroy the data you're going to keep the data in your company's databases or in your own database how will you keep it you can take a technology let's let's let's pick an example let's say what's pick an example SQL Excel whatever let's say Excel let's say I keep it in Excel now if I keep it in Excel what will I now do so let's say I have an Excel spreadsheet let's say my cardio data set let's say this data set now in addition to the data what do I need to store with it information yes metadata how do I store that metadata yes so one possibility is I can have a text file like that like I had at the top of this describing all of this which is typically what happens in Excel storage it describes this and it describes there's one file calledd and another file called descript or something of that sort which basically describes the variables and the idea is that they have the same name and one extension gives you the data the other extension gives you the description of the variables that are in this data correct now this is good now what's going to happen on that data certain code has been run that code is going to assume certain things about the data what do you want that code to assume about that data whatever you want that code to assume about that data should be available in the data dictionary now if that code is stable enough to realize that whatever Fields you give me I will run on that's cool but if that Cod requires you to know what kind of data is being used let's say discrete 
data let's say continuous data in the future you'll be doing things like linear regression logistic regression linear regression will make sense if the variable is a number logistic regression will make sense if the variable is a zero or a one if you have that problem now in the metadata you need to be able to tell not only what business information this variable contains but also what kind of a computational object it is so the code can run so therefore what people often say is that I'm going to make it very simple and I'm going to assume that my entire data frame consists of only one kind of variable so that when I run any algorithm on it I know EX ly what kind of data input that algorithm is going to get but I'm saying it's a practical answer that many companies often often have and I've worked in a couple of companies at at least one company where this was very seriously done so we had to we had to when we put data back in we had to convert it and for in the situation that I was in it wanted everything in categories so what we would do is we would take continuous data and we would do what's called Fine classing which means that we would divide not into four pieces but into 10 pieces desile one desile 2 desile 3 desile 4 up to desile 1 and every variable was stored now not in its original numbers but as 10 9 8 7654321 so let's suppose that I tell his income is n what that means is I know he's in the ninth desde 10% of the people or more have income more than him 80% have less than him he's in that bracket and all variables were stored that way now what happens is every algorithm knows that every variable is going to be stored that way and you can keep writing algorithms that way otherwise what would have to happen is every algorithm will need to be differently and let's say you're doing credit scoring let's say you're doing CRM models you're doing something of this sort and youve built a very sophisticated CRM model that tracks your customers and it works now suddenly you've got a new variable coming in the Twitter feed and suddenly nothing works what to do go back and rebuild that entire model that's going to set you back 3 4 months that's going to set you back a few thousand so you say no any variable that has to go in has to go in in this form and if it goes into this form my algorithm can deal with it so when in such case it might not affect the I mean the efficiency of the model that we generate yes yes and in practice I'm going far away from topic now in practice an profession analyst has to struggle between doing the right thing badly and the wrong thing well you want to do the right thing well but the right thing well is going to cost you time money data and everything so you struggle between saying that I'm going to get a flawed model quickly built on a new data set or I'm going to get an inefficient answer on a Model that's already been built and let's see how far it goes and so these are more cultural issues with how an an analytical solution is often deployed in companies they vary very much from industri to Industry they vary very much from um company to company from the culture of a company to a culture of company they depend on regulatory environments in certain environments an auditor like entity comes in and insists on seeing your data show me your data let's say in finances is sometimes happening regulat agency let's say Reserve Bank of India goes into a bank and says show me your data all this npas Etc show me your order book show me your loan book correct and now 
that has to be done, and the decisions you made have to be made in a way where it is patently clear why you have done this. So very often people say: I don't want to make the best risk decision, I want to make the most obvious risk decision — which may not be the same thing at all, but I'm being audited. That's a practical question, and I don't have a clean answer to it. Is it right? No, it's not, but we live in a world with that kind of imperfection. One of my teachers — his name was Jerry Friedman, you'll see some of his work later on; he came up with algorithms like projection pursuit, CART, MARS, and gradient boosting, many of the algorithms that you'll be studying — one of my teachers at Stanford, when he ran our consulting classes, would say this: solve the problem assuming you had an infinitely smart client and an infinitely fast computer; after you've done that, solve the real problem, where you do not have an infinitely smart client and you do not have an infinitely fast computer. This was in the early 1990s, when computer speeds were a lot slower and we didn't have powerful machines like these around. So a lot of this is done in that kind of situation, where you are struggling for continuity while you're figuring it out. Imagine yourself as an analytics manager — and I hope many of you will be — with an analytics team sitting in front of you. You're looking them in the eye, you know how much you're paying them, and you know that half of them are going to leave at the end of the year. What are you going to do with regard to the modeling? Your first order of business is going to be to ensure continuity in some form: keep it simple, keep it obvious for the next bunch of people who are going to come in, and for that you'd be willing to trade a little bit of "make it right". Now the new person coming in will not want to solve a very complicated kind of situation. This is not where you want to be — and I do not want to depress you on day one — but it's also the fun part of the profession, right? It's what makes it interesting and exciting; it's not all bad. Okay, so the histogram command gives summaries of these variables, and each gives you a sense of what the distribution is. As you can see from most of these pictures, most of these variables, when they do have a skew, tend to have a right skew. Maybe education has a little bit of a left skew — a few people are highly educated and most people are over here — but even so. Right, now here's an interesting plot. matplotlib has this as well, but seaborn has a better version of it: this is what's called a box plot. You've seen a box plot. People are unsure where the name came from, because there is a statistician called Box — who has heard of him before? — but it used to be called a box-and-whisker plot. These are the whiskers. This is the median; the top edge of the box is the upper quartile, the bottom edge of the box is the lower quartile; and the end of the whisker is 1.5 times the interquartile range above the box. If you want a formula, the length of the whisker is 1.5 × IQR. Should I take a break now? A little bit, maybe — okay, 3:45 or whatever, we'll go after that. Yeah, I haven't stopped, I just got distracted. So, 1.5 times the IQR: the whisker goes up to that limit, and any point lying beyond it is flagged.
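The two plots just discussed can be sketched as below; hist comes through pandas/matplotlib and the box plot from seaborn. The whis argument is the 1.5 multiplier mentioned above (it is the default, shown here only to make it visible), and the column names are assumptions matching this data set.

```python
import seaborn as sns
import matplotlib.pyplot as plt

# One histogram per numeric column, as in the grid of plots above.
my_data.hist(figsize=(10, 8))
plt.show()

# Box-and-whisker plot of Age split by Gender; whiskers extend 1.5 * IQR
# beyond the box, and anything further out is drawn as an individual point.
sns.boxplot(x="Gender", y="Age", data=my_data, whis=1.5)
plt.show()
```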
the point is shown outside it if the data ends before it the whisker also ends correct what is the whisker okay what is the whisker all right let me explain another way the whisker is the maximum the top of the whisker is the maximum the bottom of the whisker is the minimum okay not okay okay some of the points are outside so this point here now what is this plot here age for males so what this means is this is the minimum 18 or whatever it is and this is the maximum 48 or whatever it is minimum the maximum so if you see nothing else on the box plot no other points other than just the box and the whisker then your five point summary is sitting there that's it right now what happens if you see points like this outliers what is an outlier an outlier is a point that lies more than 1.5 times the interquartile range above the box so this whisker will not extend indefinitely it'll go up to 1.5 times this box and then it'll stop and if any points are still left outside it it'll show them as dots you can treat this as a definition for what an outlier is same thing same thing in the other direction the logic is symmetric no no that means this it hasn't it's the data is ended here the data the data is ended here is was there any other number tried instead of 1. I suppose so and you can change it you can I won't try it now but you can go to the box plus syntax and change that so you can go to boxplot syntax and you can change that 1.5 it's not hardcoded into the algorithm I'm I'm I'm I think 95% sure as statistician I'm never sure about anything but but I but but it's it's it's a parameter in the in in in the uh you should be able to pass the parameter in the Box function default is 1.5 you should be able to change it and what's the color part that's the medi which one the color coloror oh no these these two colors these two colors are because I've asked for two things I've asked for male and feale if I if I had three of them that's okay but how the the oh this this one here oh this is Q3 the lower is Q3 and the upper is sorry lower is q1 and the upper is Q3 so this qu qu range so so for males between the bottom bottom whisker to the end of the box is a quarter of your data the box is half your data and the top of the box to the end of the whisker is quarter of your data so the middle line is the middle line is the median the middle line is the median there is also function inbox plot you can play with where it'll give you a DOT and that dot is the a mean I mean you can you can you can ask boxplot to do that but but a mean is not a general standard component in the fiveo summary it's a different calculation not a sort but if you want to you can make box plot to give a DOT on the mean as well by definition yes so so mean Medan so half the data is between um 24 and 34 or whatever that is half of all my all the men in my sample are between those two numbers I think box plot doesn't allow you to change the shape of the box I think that is set that's sort of central to the idea of a box plot it does allow you to fiddle with the size of the whisker but I don't think it allows you to fiddle with the size of a box in other words if you change that to something else let's say the 20% point to the 80% Point 80/20 rule that's no longer a box plot it's another interesting plot the significance of it is exactly this as we had seen before the significance of it is is that the data looks like this it's right skewed think of the picture so this is your q1 and this is your Q3 this is your Q2 or the median then the 
median is going to be closer to q1 than it is to Q3 in the same way that the minimum will be closer to the median than the maximum same idea this is a summarization for numbers if you were to summarize for categorical data what's called a cross tab or a cross tabulation this is simply how many products are there product category 195 498 and 798 they've got three kinds of treadmills and they're trying to understand which who was using what kind of treadmill our business problem is to understand who was using what products this is a cross stab what is this this is something that will be used for categorical variables no box plot will make sense here there are no numbers so now you can ask interesting questions here if you want to and you can think about how to answer it is that for example you can ask a question is there a difference between the preferences of men and women possibly is there a difference in the products that they that irrespective of gender is there a product that that they prefer you can ask all kinds of interesting questions and you can find ways to answer it which we will do not in this residency but next time around for those we can ciz it also categorize for those preferences so this is simply once again this is descriptive all this has done is it has simply told you the data as it is what I'm saying is that if for this if you want to do a little more analysis on it you now have to reach a conclusion based on it so for example one conclusion to ask is is that is that do men and women have the same preferences when it comes to the fitness product they use now that's a question to answer that question it's enough to look at the data but just looking at it will not give me the answer I need to be able to find a statistic to figure that out a statistic that does what that in some way measures that difference let's say measures the difference between men and women or what we will do is not measure that what we'll do is we'll measure that if there was no difference between men and women what should this table have looked like and then we'll compare the difference between these counts and that table but that's the interesting part of a statistical statistic which we'll do that's called a k Square test it's coming up in the next residency but that's the prediction part or the inference part of this description this is just the description you can do a similar thing here this for example is for um marital status and product what product you use are you not very dependent whether you're partnered or sing what is marital prodct maybe it has to do with age or maybe they're correlated should you use one as opposed to the other okay right you can use counts as well if you see instead instead of instead of doing it this way instead of seeing it as a table if you want to see it as a plot you can ask for counts so there are things like count plots and bar plots which allow you to do counts in the lab you'll do probably a few more of these this is simply another visualization of the same thing uh for those of you who like things like pivot tables in Excel h of Microsoft has made you know wonders of us all in corporate life they were I was told that you know you can have um you can have a master's in in in Bachelor master in anything engineering is good Etc and it's nice if you have if you have you know phds in a few areas but what you really need is a PhD in PowerPoint engineering I mean that's a necessary qualification for Success so certain tools have been used so therefore those tools 
have been implemented in many of these softwares as well this is the pivot table version of the same data set this is the last sort of not last but still this this is a this is a plot uh let me show you this plot and then we'll end or we'll take a break this is a plot that is a very popular plot because it is a very lazy plot this plot requires extremely little thinking pair plot of a data frame right you don't care what the variables are you're telling it nothing about the plots You're simply saying figure out a way to plot them Pair by pair and it does that so for example how would you read this plot on this side so it creates a matrix the rows are a variable and the columns are variables what is this this is age versus age age versus age makes no sense so what it plots there is a histogram of age doesn't like the Gap nature abols the vacuum I suppose python does as well so it now plotter what it should have plot is age versus age you're right it should have been a 45° line H but a 45° line is a useless graphic particularly if the same 45° line shows up in the diagonals so to make a more interesting graphic it plots the histogram there this analysis this kind of analysis sometimes has a name associated with it the name is univariate univariate means I'm looking at it variable by variable one variable at a time when I'm looking at age I'm only looking at age it's called univariate analysis it's just a word uni as in uniform same form unicycle cycle with one wheel things like that univ variant huh unit but for the other set of data also replicate the same kind of pattern if I'm going to give other set of data another set of data it'll replicate the same it'll replicate the same nature of the data there'll be histogram here again so yes so what it will do is remember that this graph the nature of the graph so let's let's see this right so where is gender here where is gender here is it there is gender is gender in my data it is there so when I did past plot my data what did it do with gender yes remember in info when we did info here remember how it has stored the data no not any here so it had product gender and marital status it had identified as objects in the data frame when it had formed the data frame so now what does it tell you about the about the command the pair plot command only yes it will it will ignore those objects so in answer to your question if the data frame has been stored has been captured with integer 64 basically integers or numerics in it it will plot if it's only objects he'll probably give a null plot yeah how say again AG is why like that this is the histogram this is the same plot this plot is the same as which plot this one it's the same as this one here age no this is not age versus age this is just age age versus age would have been a 45° line but it's not plotting that it's not plotting that in the diagonal it is not plotting age versus age in the diagonal it is simply plotting AG's own distribution yes with the count so what it is doing is it is essentially running hist on age all each observation and putting it on the diagonal yes what can 20 again there is a b there is a bin what can that from each bin it's a count count of people in that age it's a count of the number of people who are in that age group here this is age no this miles this is age this is age so say let's say between here between uh let's say say 40.5 and 43.5 whatever these numbers are there are three people it's a remember the histogram is a visual thing you can datamine a histogram if 
you want to which means you can se you can find out what those are and you can see it inside inside histogram just ask for a summary of that it'll give you what the features are of that histogram but the histogram is not meant to be used that way it's meant to be used as a as an optical device to see the shape to see the count it's an art to do a histogram if you change the bins a little bit the histogram will look a little different so I would suggest that unless you've got a lot of experience in this or you really enjoy the programming do not fiddle with the histogram it shape will change I'll show you a little later after the break not change the histogram but what shape is no not not in default you can go in and change it on size but the bin width Etc the binwidth of histogram takes a little more to change right so you can there's stuff out here you can find other things in which you can play this so there are ways to do it okay so quickly ending we're losing our food so these different plots and we'll continue after the break the rest of it is simply an X versus y so for example this is age versus education this is age versus education so the second graph from the first one is just rotated yes he's right if this is education on the Y AIS and age on the x-axis or vice versa then these two plots one and two and two and one are just mirrored images of each other a mirror image rot you right depends on where you look where you put the mirror but yes mirrors so I remember when I was a when I was a kid mirrors would confuse me so I would ask a question like this that when I see a mirror left and right gets switched but top and bottom don't I never understood why you know huh due to gravity you can think I mean left and right gets switched but top and bottom don't I thought it was something to do with a mirror and then I thought it was something to do with my eyes you know maybe because they left so I looked at it this way and that didn't help so yes it's an important point when you do symmetry it's a good catch it's a good catch to realize that there aren't so many plots there actually only half as many plots because the plot on this side of histograms and the plot on the opposite side of histograms are the same there was another question that one of you asked is that many of these seem to look like rows and columns in the sense that what are these rows now what does this row look what does this mean it means that this variable Fitness this variable Fitness actually has very few numbers in it it has a number 1 2 3 4 and five now why is that because remember how I Define Fitness it's my perception of whether I was fit or not in my original definition of the variable here you go self-rated and fitness and one to five scale where one is in poor shape and five is in excellent shape this was the created data so in this data set I now have that this variable in it these kinds of variables sometimes cause difficulty in the sense that there are some there's a word for it these are sometimes called ordinal variables so sometimes data is looked at sort of you know numerical and categorical and categorical is sometime called nominal and ordinal nominal means it's a name name of a person North Southeast and West gender male female Place Etc it's a variable essentially it's a name ordinal is it's also categorical but there is a sense of order there is a of order dissatisfied very dissatisfied so there's an order order therefore ordinal this variable the fitness variable can if you wish be treated as an 
ordinal categorical variable. So, for example, the Likert scale is like that — the seven-point scale: very dissatisfied, dissatisfied, somewhat dissatisfied, neutral, somewhat satisfied, satisfied, very satisfied — mark one. This generates data on a scale of, say, 1 to 7 or 0 to 6, so it will show up in your database as a number. For example, here, instead of 1 to 5 you could say: very unfit, moderately unfit, okay, relatively fit, very fit — instead of giving one to five, give it that way, and you code it up accordingly; your choice. So sometimes, when you have data that looks like this, Python or any database will recognize it as a number because you've entered it as a number, but you analyze it as if it were a category. The opposite problem also sometimes exists: sometimes a categorical variable shows up as a number, but you know it's a categorical variable. A zip code is an example. A zip code shows up as a number, but it's obviously not one — you can't add up zip codes, right? Take two places in Bangalore and try to find a place between them; that's not the average of the zip codes. It might be close, but you can't do arithmetic with zip codes. The other difficulty with zip codes is that there can be many of them, which means that as your data set grows, the number of zip codes also grows — the number of values the variable can take grows with the data. This sometimes causes a difficulty, because in the statement of the definition of the variable you now cannot state how many categories there will be; you know more zip codes are coming, you just don't know how many more, but you also know it's a categorical variable, so you can't treat it like a number. So there are some special types of problems, like zip codes, that require special types of solutions. The plot itself is a very computational plot: if it recognizes a variable as a number, it plots it; if you don't want it plotted as a number, change it to a character — most software, including Python, will allow you to do that. Now, this is in some way a graphical representation of it; for the end of this session, we can talk a little bit about the numbers associated with it.
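If you want pandas to treat a 1-to-5 field such as Fitness as an ordered category rather than as a number — the "change it to a character" idea above — one way to sketch it; the column name and the choice to keep the original codes as labels are assumptions:

```python
import pandas as pd

# Keep the 1-5 codes but tell pandas they are ordered categories, not arithmetic values.
my_data["FitnessCat"] = pd.Categorical(
    my_data["Fitness"],              # column name assumed
    categories=[1, 2, 3, 4, 5],
    ordered=True,
)
print(my_data["FitnessCat"].value_counts().sort_index())
```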
So here, for example, my_data.Age — you can also pull out Age on its own if you want to, and things like that. This is the mean: about 28.7888. So you can extract it, and there are variants of the mean that can be recovered as well, like trimmed means, etc. If you want to, you can calculate the standard deviation — you remember the standard deviation formula, that strange formula I wrote on the board; this is the standard deviation formula — and you can do this for other variables as well. This is an interesting plot. I don't want to go too far into it, but it's an interesting plot; in seaborn there's a warning on the code. What they're referring to is a distribution plot. This is a plot that tries to look not at what the data is but at what the distribution is. Remember, I was drawing these odd pictures and drawing lines on them — those were distributions. What it is trying to do is go after the distribution of the data. Now what does this mean? It says that there is an underlying distribution of the age variable. This is a distribution you do not know; however, you have a sample from that distribution. How big a sample? About 180 observations. From that sample of 180, can you guess at what that distribution is — in other words, can you give me a curve? This plot is an answer to that problem, and it gives a curve. Why is the raw data not enough? That goes to the heart of what the statistical problem is, because I am interested not in the age of this particular group of people but in the corresponding age of another, very similar group of people. Why — what is the problem I'm trying to solve? I'm trying to solve the problem of who is buying my cardio equipment. Now, when are these people going to buy my cardio equipment? At some point in time, okay. Now what is my data — for whom is this data? For people already there, right — people who have already bought. So I have a problem: I want to reach a conclusion about my future customers based on my old customers. How do I do this? What mathematical logic allows me to say something about the future based on the past? In short, the way to do this is to assume that there is what is called a population — we'll talk more about that later; at this stage, to assume a distribution. From this distribution I have seen a sample today, and from this distribution I'll see a sample tomorrow. The people are not the same, because the people who were using my cardio equipment yesterday are not the ones who are going to use it next year — if they were the same, I'd never have a growing business; there's no point analyzing customer data unless I want customers to buy more things or I have new customers coming in. So what is common between my observed set of data and the data for my new customers? That commonality is what you can think of as a distribution. So the plot says: from this sample, can you give me a sense of what that distribution is — and from that distribution I can think of other people coming. What we'll do tomorrow is talk about a few specific distributions and how to do calculations with them. For now, what this graphic does is simply estimate that distribution for you. I'll explain very briefly how it does that, without too many details: it takes local averages of points. Yes — so you mean that for a sample a distribution cannot be done? I'm saying, for a sample — why? Why is the sample not the distribution itself?
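A sketch of the numbers and the plot being described. Older seaborn exposed this as distplot, which is what triggers the deprecation warning mentioned; histplot with kde=True is the newer equivalent. The column name is assumed as before.

```python
import seaborn as sns
import matplotlib.pyplot as plt

print(my_data["Age"].mean())   # about 28.79
print(my_data["Age"].std())    # sample standard deviation (n - 1 in the denominator)

# Histogram with a smooth kernel-density estimate of the underlying distribution laid on top.
sns.histplot(my_data["Age"], kde=True, bins=15)
plt.show()
```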
Why am I not saying that? It's a good question: why am I not saying, ignore the curve; why am I not saying that the original histogram, which I've seen three or four times before, is itself the distribution? That's similar to the following question. Suppose you have done a blood test and you've got a few measurements: you've been tested twice today, say before breakfast and after breakfast, and you've done this, for whatever reason, once a week for a month. So now you've got not four but eight readings. Are these eight readings the distribution? No. And yet there's something that says: if I want to understand what my blood sugar is, and what it will be going forward if I do not get treated, then certainly there's a relationship between this and what will happen in the future. For example, if I behave exactly the same way, if I eat exactly the same way, exercise or don't exercise exactly the same way, if I smoke, if my lifestyle is exactly the same as it is, I would expect my readings to be the same. But what about them is going to be the same, and what about them is different? I don't quite know. Yes, it is true that those eight numbers are a representation of the distribution, but they're not the distribution itself. If they were the distribution itself, I would be forced to say that next month I would have exactly these eight readings, and I know that's not true. But I also know that from these eight readings I can say something about what will happen next month; it's not that there's useless information there. If my readings this month are, say, 110, 120, 115, 125, 130, good health, then I know, with some confidence, that if things remain the same, next month they will not start becoming 220, 210, 215, 230. How do I know that? Because I have these readings this month. So the idea of a distribution is to be able to abstract away from the data the random part and the systematic part. The systematic part is what remains as the distribution; around it there is going to be random variation, and that random variation is going to exist from data set to data set: like this month and next month, like this bunch of customers and another set of customers who buy cardio equipment, maybe from another branch of my store. If, for example, I'm running a chain of stores, not to pick names, say Reliance Fresh or something of that sort, and I want to understand how my stores are doing, I might take five or six stores and study them extensively. How do I know those results are going to apply to the remainder of my 500 or 600 stores? What is common between these five and the remainder? How are they representative of it? What part of it applies to the rest and what part does not? How do I extend it? How do I extend your blood pressure readings to your next blood pressure readings? How do I figure this out? That is the heart of statistics; it's called statistical inference: to abstract away from the data certain things that remain the same and certain things that do not. So this curve is an estimate of that underlying true distribution of Age, and that's why it's not as rough; it's smoother. How smooth it should be is something the plot figures out on its own, like a histogram's binning, but you are free to change it.
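Here is a minimal sketch of that plot in seaborn, assuming the class data frame is called mydata and has a numeric Age column; the commented-out distplot call is the older one that produces the deprecation warning mentioned above, and histplot and kdeplot are its current replacements:

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Older call that triggers the deprecation warning mentioned in class:
# sns.distplot(mydata["Age"])

# Current equivalent: histogram plus a Gaussian kernel density estimate on top.
sns.histplot(data=mydata, x="Age", bins=15, kde=True, stat="density")
plt.show()

# The smooth curve on its own; bw_adjust controls the smoothing
# (smaller = wigglier, larger = smoother).
sns.kdeplot(data=mydata, x="Age", bw_adjust=1.5)
plt.show()
```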
There are functions within this plot; it's a fairly sophisticated plotting function with which you can do many things. For example, the bins of the histogram: this allows you to say where the boundaries of the histogram part should be; whether you want to plot the histogram at all; whether you want to plot what I was calling the distribution, the Gaussian kernel density estimate, which is a sophisticated way of saying the same thing; and there are arguments available for all of that. So you can change this; it's one of the more sophisticated plotting functions you'll get to see. I wouldn't suggest fiddling with it now, get a little more experience first, but the Gaussian choice will not make too much of a difference; what happens is that with more smoothing these little wiggles go away and it gives you a smoother curve. We will discuss a little later, maybe tomorrow, when it's a good idea to do that and when it is not; just hold on to that question for a bit. We haven't talked about the Gaussian distribution yet; I'll deal with that when the Gaussian distribution comes. For now, what this gives you is a visual representation, simply a descriptive representation, of the underlying distribution; hence a distribution plot. You ask whether you always want to compare it with something, like the eight readings in that example, and then a current sample. Yes: if my distribution thinking is correct, then in an ideal world here's what would happen: I take my old Age data and do a distribution plot, then I take the new data and do a distribution plot again, and these two should be very similar. The histograms may be different, but the distributions should be similar, if I've done my analysis correctly. Does that mean the variance is high? I wouldn't use the word variance; I'd say variability. It means there is what is called sampling variability: a variability that is due to the fact that you've taken a sample. There is an underlying truth, but you're not seeing that truth, because you're seeing a sample. There is an underlying level of your blood sugar, but you're not seeing it, because you've taken only a very small sample of your blood, a few milliliters where there are liters flowing around, and only at a few seconds in time where there are many hours in the day; there are so many other things your reading could have been. But if it is a good sample, then I'll be able to cover this variability. So if I want to get a sense of what your blood value actually is, and I want to sample it well, I'll take samples in different kinds of situations. Before eating and after eating they do cover; maybe I want to cover other things as well. In certain kinds of diseases, for example, they're very conscious about where to take the blood from, because the metabolism in the blood changes with certain diseases.
You draw the blood near the liver, for example; the liver is the body's filtration system, so essentially you want to figure out the nature of the blood when it flows into the liver and then after it flows out of the liver, in order to understand whether the liver is filtering your blood correctly or not. To do that you need to draw the blood in very specific places, and so your experimentation should cover all of that. What does that mean, for example, in business terms? Let's say you're looking at sales data and you want to understand your sales distribution. Well, don't focus on certain salespeople: look at your bad salespeople, look at your good salespeople, look at your high-selling products, look at your non-selling products. Cover the range of possibilities. If you do not cover the range of possibilities, you will not see the distribution; if you do not see the distribution, you will not know where the future data will come from; and if you don't know that, you'll not be able to do any prediction or prescription. The histogram is just a summary; this is also just a summary, but the histogram summary applies to just this data set, while this distribution is pretending to apply to a little bit more. What is the definition of a distribution? The definition of a distribution doesn't apply just to the data. A distribution function, so to speak, is just this; it's sometimes defined this way: F(x) is equal to the probability that X is less than or equal to x. This is sometimes called a distribution function: F(x) is the chance that Age is less than or equal to, say, 15, or 16, or 17. And now let me confuse you even more: f(x), the derivative of F(x), is the density function, the curve under which area gives probability, and it is what this plot is plotting. So the distribution function is the integral of the density function, and the density function is the derivative of the distribution function, if you're very mathematical about all of this. So what the plot is plotting is actually the density function; the reason I'm calling it a distribution plot is because the software says distribution. I was hoping not to confuse you; clearly I failed. Go ahead. Yes, that's the idea. Ah, now you've hit the problem of statistics bang on the head: how do I get an idea of a distribution that applies to everyone, based on only the one sample sitting in front of me? That is the million-dollar question, and that is why people like me exist. That is the whole point of the subject, and it is a hard problem, because you are trying to draw a conclusion outside your data. Nobody is interested in your data; everybody is interested in their data, or in their problem. But you still have to analyze the data that is in front of you and reach a conclusion that makes sense to them. The bank has to look at its portfolio and figure out what its risk strategy should be; the clothing store has to look at its sales and figure out what stock it should carry; Great Learning has to look at its course reviews and figure out which faculty members to keep; you have to look at your expenditures and figure out how much salary to negotiate for. How will you do all of this? And by the way, you do all of this based on some sense of a distribution.
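Before the salary example, here is a minimal sketch of the two objects just defined, the empirical version of the distribution function F(x) and an estimated density f(x); it assumes, as before, that mydata['Age'] holds the roughly 180 observed ages:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

age = np.sort(mydata["Age"].to_numpy(dtype=float))
n = len(age)

# Empirical distribution function: F_hat(x) = fraction of observations <= x.
F_hat = np.arange(1, n + 1) / n
plt.step(age, F_hat, where="post")
plt.title("empirical F(x) = P(X <= x)")
plt.show()

# Estimated density f(x): the smooth curve the distribution plot draws;
# conceptually the derivative of F(x).
grid = np.linspace(age.min(), age.max(), 200)
plt.plot(grid, gaussian_kde(age)(grid))
plt.title("estimated density f(x)")
plt.show()
```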
So when you go and negotiate for a salary, you're not going to negotiate for, you know, 100 crores. You might, but you say no one's going to give me that anyway; maybe you're good enough, I don't know. What you do is roughly the following: you figure out how much money you need and how much money you're expecting, and that, to some extent, comes down to your expenditure and what you want to do; your expenditure is also based on the income you have, and you're spending based on that. You're doing all of this on a regular basis. You're standing on the road, and you're trying to decide whether to cross it. How do you decide? Experience. You've got past data, and that data is telling you: please cross the road. How? That data has not seen that particular car, K53 3619, with its driver. How are you crossing? Because you're making the assumption that, while I have not seen him, I've seen many others like him. There's a story about this. A taxi driver is driving at night and just keeps going: red light, he crosses, no issues, and so on. The passenger is getting very scared and asks him to stop. The driver says, in Hindi, apologies for the language, I'm the lion of the road, who will stop me? He goes through all the red lights, and then there's a green light, and he stops. The passenger asks, why will you stop now? He says: the guy on the other side. He's being logical: his data says that there are people who cross red lights, so if I have the green, there's a red light on the other side, and cars are going to cross that red light. Very logical. And we do this all the time: while we are not trained as statisticians, at least normal people are not, we behave like one based on experience. Now your objective, and the objective of an analytics professional, is to translate this logic into an algorithm, into a procedure that the company and the computer understand, and that is not easy. For starters, let's say you're here and you say this is the average, the mean age, 28.78, and you could say this is an estimate of the mean of the distribution. It is not: this is the mean of the data. But you are not interested in the mean of the data. Why not? Because you're not interested in this particular set of 180 people; you are interested in the average age of your customers. So now the question becomes: what does the age of my new customers have to do with the number 28? Are they related? Yes, you would say they're like a copy of what I have. Now, that's interesting: if they're like a copy of what I have, will I see 28.78 again? And now you'll say, probably not; a copy, but not that much of a copy; most likely around it. Ah, now we're talking. How likely is most likely, what about it is going to be the same, and what about it is going to be different? You mention the y-axis of the distribution; the shape of the distribution, you'll say, stays the same. So, for example, you could say that this 28.78 is an estimate of the mean of the population distribution, which means that, yes, it comes from the same data the histogram comes from, but it also speaks for that distribution. But there's also this nagging feeling that I do not know for sure. I do not know; I don't know what this new data is going to be.
So what will happen is that we will not give the answer 28.78; we will give an answer that is like 28.7 plus or minus something. We'll say: I do not know what the population mean is, but I'm going to guess it's around 28. I know it is not going to be exactly 28, but 28 isn't useless for me either; it's going to be around 28. How far around 28? Now certain criteria come in. What will this depend on? It will depend on the variation in the data, the standard deviation: if the data is very variable, this plus-or-minus will be large. And on the nature of the data, yes. It will depend on how many things I'm averaging over: if this is 180, I'm this sure; if this were 18,000, I'd be even more sure; if this were 18, I'd be less sure. So it depends on how much data is being averaged over: the more data I have, the surer I am about the repeatability of it, the surer I am that I will see something similar again. And it depends on how sure I want to be: 95% sure, 99% sure; the more sure I want to be, the bigger the tolerance I must have on my interval. Those are things we'll get to. Those are also descriptions, but they are descriptions heading towards being able to predict. If I give you this 28.78, I've given you a description of the data, but I have not given you a prediction; if I give you 28.78 plus or minus something, I've now begun to give you a prediction. Today is about descriptive analytics; we're not predicting anything yet, we'll get there, but this plot is in some way a first measure of this idea of a population and of a distribution associated with the population. And yes: if the variation is less, this curve will be sharp; flat means the variation is more. If the curve spreads out like this, it means there is a lot of variation and I'm unsure about the middle, and in that case you can't get a sharp prediction; it's harder, and you definitely need more data. Now let's suppose you have no control over your diet. I'm not accusing you of anything, it happens to humans, but suppose you're doing a job in which your lifestyle is very varied: you travel from place to place, you eat in different hotels, sometimes you don't eat at all, sometimes you stress out a lot, sometimes you're running after trains and sometimes you're sleeping for twelve hours in a row. Your life is highly variable, and there's nothing wrong with that; many people have very varied lives. But suppose I'm now trying to measure the blood sugar of such a person. What must I do? At the very least, if I simply want a good measurement, I have to measure it under many different circumstances. Or I could control your circumstances: I can say, for example, go and measure it at this time; take a glucometer and do this before going to bed, or do this after you've just had a very hard day. I can give certain instructions to cover all the corners. Or I can simply say: I don't know, but what you need to do is measure your blood sugar, say, every 6 hours and then tell me what happens. You need to do this often, because I expect your blood sugar to be highly variable, simply because your body is being put through an enormous amount of variation.
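Here is a minimal sketch of that "28.78 plus or minus something" idea, a t-based interval for the population mean; it assumes mydata['Age'] is the sample of about 180 ages, and the interval itself will be treated properly in a later session:

```python
import numpy as np
from scipy import stats

age = mydata["Age"].to_numpy(dtype=float)
n = len(age)
xbar = age.mean()
s = age.std(ddof=1)                      # sample standard deviation (the n-1 version)

conf = 0.95                              # how sure you want to be
t_crit = stats.t.ppf((1 + conf) / 2, df=n - 1)
margin = t_crit * s / np.sqrt(n)

print(f"mean = {xbar:.2f}, {conf:.0%} interval = ({xbar - margin:.2f}, {xbar + margin:.2f})")
# The margin grows with s, shrinks as n grows, and grows if you ask for 99%
# instead of 95% -- the three dependencies described above.
```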
In a business situation: let's suppose you've introduced a new product, and you do not know whether this new product is going to sell or not. What will you do; how will you measure it? You've just introduced it. Based on past data? But you've just released it; all that is over. I have just released this watch in the market. What typically happens is that people track the market very, very closely: the number of hits, the ads, the number of sales made, everything. The reason is that they're not sure how much this will sell. See, the question is: what changed? What changed was your product release. Now, the competitor could be reacting immediately, but my point is not that there are many things to look at, which there are and you should. My point is that when there is a change in the distribution, when there is an unknown distribution coming in front of you whose variation you do not know, you tend to get more data: you sample more frequently, or you get more data, and you figure this out. We do this all the time. For example, for those of you who have kids: suppose your kid is going to a new school. What will you do? You'll ask more questions, you'll get more data, you'll find out what is happening in school, what the teachers are like, because there's too much variability standing in front of you. Now, with those answers, and a few trips to the school, you're at least more informed; you may like it, you may not like it, but that distribution is now known to you. So you get more and more data, and that's how you get the experience. If you have that experience already, in other words if you know the distribution very well and you're comfortable with it, fine; but it takes time to get there, and that's why this big-data world is becoming so interesting: by the time you've understood a problem, the problem is not important anymore; there's a new problem. Now, this is good, that's why you all have jobs, but it also means that when you have new data, you often solve a different problem; you don't solve the older problem better, which is what a statistician, to some extent, is trained to do: as you get more and more data, get the distribution better, get a better idea of the unknown, make a better product. The alternative view is: make another product; solve a different problem if you have more data. So the CEO is now saying, I have more data, give me more. More of what? Solve another problem for me, give me new customers that I can go after, and things like that. So that problem, which big-data people often go after, is not an easy one: as more and more data comes in, how do you utilize it, how do you make efficient use of this information, do you get tighter estimates of what you're going after? Say you're doing sentiment analysis of text; many people do text analysis. You'll write code to pull, say, Twitter data, you'll do latent semantic analysis, you'll look at net sentiment scores and things of that sort. And now the question will be: you know that this is going to change, people's opinions are going to change; so over what granularity do you expect people's opinions to stay the same?
Do they change every day? If they change every day, there is no point looking at a person's average over days, because that average means nothing; every day is a different opinion. On the other hand, if their opinions change, let's say, on a monthly basis, then you can look at daily numbers and average them, and you'll get a better estimate of that monthly rating. So you have to make a guess as to whether I'm estimating a changing thing, or estimating a stable thing better, and that's not an easy thing to do. I don't know whether it's happened to you, but there are times in my life when I have simply not had haircuts. What that means is I've gone six months, eight months; a haircut has been like a weight-loss program; I have not cared what I look like, and I'm not sure I do now, but when things become very unhygienic I go and get a haircut. There have also been times in my life when I've been a lot more conscious of what others think about me; you can imagine what points in my life. Then I groom, I'm very careful, I get my hair done, I'm all correct, and I'm getting my hair cut much more regularly. Now, what am I doing? In the second case, what I'm doing is trying to make sure that I'm reaching a certain distributional standard; in other words, there is a certain target distribution that I have, and I'm interested in getting there. I'm intolerant of variability: I'm saying I'm going to estimate this distribution and I'm going to stay close to it. In the first case I was not; I was perfectly okay with the variability. And in certain cases you will be okay with the variability and you will not want to estimate a distribution of this type, and in certain cases you will want to estimate it very, very well. You'll want your hair to be done very correctly; you'll want your product to be targeted at a very specific age group; you'll want to know, when I am targeting this particular age group, what advertisements I want to show; you'll want to advertise on television and you'll want to know who is watching the program on which you are advertising; are they college people, are they professionals, are they old people sitting at home who will use this; and therefore, where will I advertise my cardio product? There are times when you want to know this very, very precisely, or as precisely as you can. So this mean number, from a description perspective, is perfectly okay; it is just the average. But from an inferential perspective it's just the beginning of the journey; it's just one number, and we're going to have to put a few more bells and whistles around it. Go ahead, you have lots of questions, clearly. The question: usually what we read is that a variable that does not have a normal distribution should not be taken further in the study. Okay, so we haven't talked about normal distributions; we will do that tomorrow. But statisticians need to make assumptions about their data, and one of those assumptions is what he's talking about: you assume a certain distribution; you say, I'm going to assume that the data has a normal distribution. It is an assumption. Now, why do statisticians make assumptions like that? One reason is that the calculation becomes easier. Now, just because the calculation becomes easier doesn't mean the calculation is correct, because if the assumption is wrong, the calculation is also going to be wrong; but because of the assumption you can do many of these calculations.
And if you don't make those assumptions, these calculations become difficult or even impossible given the data at hand. So a lot of the tests and a lot of the procedures that we'll be talking about are going to make certain assumptions; we'll see one in about an hour or so. If that assumption is correct, I will have a strong model; but if that assumption is wrong, I will still have a model that is indicative. There was someone, I first thought Paul Samuelson, but I think it was George Box, of Box-Cox fame, who said that all models are wrong but some models are useful. So the question is whether it may still be useful. In many cases the distribution is expressly allowed to be not normal; the domain tells you that. Let's say you are in an engineering domain: you know the data has a certain shape, and the engineering domain tells you that shape, and the shape is sometimes called a Weibull distribution. What that means is that if you are reporting out, let's say, the failures of something, say the failures of gas turbine blades, as I spent a number of years doing, you had to report out a Weibull distribution; we didn't report out a normal distribution. In the finance industry you report out a log-normal distribution, and means and variances of it. Every industry has its own favorite distribution, because every industry has its own generic data form. Now, even within the industry, a particular data set could violate that rule, and then it becomes interesting: as a statistician, do you now use a more powerful tool set to solve that? This leads to certain complexities. The first complexity you often run into is: which one, and do I do it differently from someone else? It's like a doctor who looks at a patient and says: the textbook says I should treat him this way, but I like this guy, he looks different, I've never seen anyone like him before, so let me ignore the textbook and treat him this way; I think he'll get better. Now, could he be right? He could, but he's taking a risk. So every time you make an assumption on your own and follow through on it, you're taking a very similar risk. You could be right for that particular case, but you have far fewer precedents to go on, and as a result, later on, when you extend it to someone else, you're going to find it hard to do so. So people often make assumptions about distributions in a sort of historical sense: they've known that this has worked moderately well over a period of time, and they're very hesitant to change it for particular cases. Sometimes, in regulatory terms, they're not allowed to. Any accountants here? Accountants, right, so you know this. If you're an accountant, you have to do your books in a certain way. Let's say you're measuring cash flow: there is a certain way in which you will measure cash flow. Now, you may say that in this particular month your business was done a little differently, so I'm going to show better cash flow this way. If you do that, you're running into trouble. You may be right, in the sense that it may actually be a better way of doing it, but as soon as you go out of the standard framework, out of the very standard way of doing things, things will be a problem.
And the same kind of logic often applies to statistical analysis as well, as a result of which, like an accountant, you are doing the right thing approximately, most of the time. In machine learning there's a term that you might see, alongside terms like supervised and unsupervised: it's called PAC learning. It's a deeply technical field, and PAC stands for probably approximately correct. I'm not telling you anything; if I'm wrong, don't blame me, but I'm probably approximately correct. The probabilistic part comes from statistical thinking, and the approximation part comes from machine-learning thinking. It's a deep field, a serious field, but it puts a probabilistic statement, or an approximation, around what is learned. So, at the end of the day, whatever method you use, there has to be a sense of how generalizable it is. You will see this fairly soon: in a couple of months you'll do your first hackathon, and all your hackathons will have a certain feel to them. A common feel for a hackathon is: I'm going to give you a data set; you build your model on that data set; and I have a data set that I'm not going to show you, and I'm going to tell you how well your model has done on my data set. You have a day, or 6 hours, or whatever, to fiddle around with your data set and show improvement on my data set. This is what you'll do; you'll do it twice, I think, in your schedule. What does that mean? It means that being very good on your data set doesn't necessarily mean you are successful; you have to be good on my data set, but I'm not going to give you my data set. This is not as impossible as it sounds; it is a very standard problem, a typical problem, and you will not find it hard; you'll find it very easy by the time you get there. Your predecessors did; you'll get 96, 99, whatever percent accuracy; not to worry, technically this is not hard. Now, to the question about the mean and the median: there are two answers to that. The mean being equal to the median, in a distribution sense, means the following. Suppose the distribution looks like this, and I have a parameter mu. We're going to do this later: when statisticians use a Greek letter, they're referring to something they do not know; it's all Greek to them. So mu is a population parameter: it exists, but it is unknown. Now, if the distribution is nice and symmetric like this, then this unknown thing in the middle can be estimated using a mean, or it can be estimated using a median, and the question becomes which is better. The answer to that, roughly speaking, is this: if there are many outliers, if the distribution tends to spread out into the tails, then use the median, for the reason I gave before: the median is stable to outliers. If the distribution has more of a bell-shaped curve, the mean is more efficient. A better answer is: what if the distribution is not like that, but skewed, so the median may be here and the mean may be there? Then you're asking a different question; it's not a statistics question, it's a common-sense or a science-like question: which one are you interested in? Are you interested in per-capita income, or are you interested in the income of the typical Indian?
For example, let me ask you this: give me one number, one representation, of the amount of time that you spend on a website. I'm asking for one number; don't tell me the number, but think in your head about how you would answer this. How much time do you spend on a website? By a website, what I mean is this. You're cruising the web every day, let's say. You go to different websites and you spend a variable amount of time on each of them, for whatever purpose: sometimes you're just passing through, sometimes you're watching a video, sometimes you're sending an email, and so on. Every session I'm thinking of as a different website: if you go to Google twice, I'm thinking of that as two websites; session-wise, so to speak. Now I'm asking for a representative number. How would you come up with it; what's a fair answer? The average, yes; but what does average mean? If I do the mean, here is how I would do it: on a given day, for the first website I've gone to, I find out how much time I spent there; second, how much time there; third; fourth; and I add these up and I divide. That's the mean. What would the median be? The median would be: I'd look at all those times, sort them, and pick the middle one. Which is going to be larger? It depends, is correct; but in this particular case, think of your typical browsing habits. Everyone's browsing habits are different, but people who deal with network traffic deal with this problem on a regular basis. Here is what usually happens: most of your sessions are actually quite short. For example, a query; or you go to your Gmail and check whether there's been a new email; or you go to a favorite news site and see whether something new is there or not. Most of the actual pages you visit, you don't spend a lot of time on, but sometimes you go to a website and you spend a lot of time on it: let's say you write an email, let's say you watch a video. So what does your data look like? Many small numbers and a few big numbers. This is what is called a heavy-tailed distribution; the histogram sometimes looks like this, heavy-tailed, and this is the right tail. A tail, to a statistician, is not an animal thing; a tail usually refers to the end of a distribution, and network traffic is a typical example of heavy tails. Now, here is what happens: in this particular case, the mean and the median are carrying very different kinds of information. The median is essentially saying: for a typical website that you go to, how much time do you spend on it? If that number is low, that is an indication that most of the time you are, shall we say, cruising or browsing. On the other hand, if you're looking at the mean and that number is high, then you know that you're spending a lot of time on certain very specific websites. And these point to two very different kinds of people. So the mean and the median are carrying different kinds of information with them, both useful. So, in answer to your question, it depends on what you're going after.
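Here is a minimal sketch of the browsing-time example; the session lengths are simulated from a lognormal purely to get the many-small, few-large shape described above, so the numbers are illustrative, not real data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated session lengths in minutes: lots of short visits, a few very long ones.
session_minutes = rng.lognormal(mean=0.5, sigma=1.2, size=10_000)

print("mean  :", round(float(session_minutes.mean()), 2))       # pulled up by the few long sessions
print("median:", round(float(np.median(session_minutes)), 2))   # the 'typical' session, much smaller
```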
And in certain settings you will see one of them being naturally used as opposed to the other. There's also a third one, called the mode, which is actually harder. When we were statistics students we studied mean, median, mode, and the mode is the peak of the distribution: what is most likely. The reason the mode isn't talked about much is that the mode is actually algorithmically very hard to get at. The mean is a very simple algorithm; the median is a very simple algorithm; the mode is a harder algorithm. You can think about how to write a program for the mode if you want to; it's a much harder algorithm. So what is the mode of this distribution, for example? Let's take a look at one of them: this is income for men. What is the modal income? The modal income is here, somewhere around 55,000, where this maximum is. For women it's here, maybe just less than 50,000. So you understand what the mode is: it is the highest-frequency, or the most common, value. But in practice that's actually a little difficult to do. If I give you a set of numbers, how will you calculate the mode? Will you see a spike? What is a spike? If I give you all your ages, how would you calculate the mode? One possibility is that you look at the ages and ask which age is the commonest, where the count of that age is highest. But that almost means that your data is not numerical; you're almost thinking of the data as being categorical, because you're counting how many observations there are at each value. The idea of a numeric variable is that it's sort of continuous; it's not chunked that way. So for data that is chunked up, or categorical, you can easily calculate the mode, but for something that is not, you can't so easily, and so the mode has become less fashionable, because it's not a very easy thing to go after. When we were in college, the mode was actually something quite easy to calculate by hand. Here's the way we would do it. Here is the histogram. Take the highest class, the tallest bar. Draw a line from its top-left corner to the top-left corner of the bar on its right, and a line from its top-right corner to the top-right corner of the bar on its left, left to left and right to right; draw those two cross lines, and where they intersect, there is the mode. That is the way we would do the mode in the pre-computer era. I went to a college where we didn't have laptops and things like that; running a program meant running to the computer center with pieces of paper, so many of these things were done by hand. This is something very easy to do manually, but it is not that easy to do on a computer: the logic is twisted, you have to figure out what the bin width is, and therefore one person's estimate of the mode and another person's estimate of the mode will be different on the same data set. That is not going to be true for the mean or for the median, and as soon as two different people find different answers to the same question from the same data, you know there's a problem with the statistic. So the mode isn't done as much these days.
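Here is a minimal sketch of why the mode is awkward for a numeric variable; both answers below move if you change the bin width or the smoothing bandwidth, unlike the mean or the median, and mydata['Age'] is assumed as before:

```python
import numpy as np
from scipy.stats import gaussian_kde

age = mydata["Age"].to_numpy(dtype=float)

# (a) Histogram version: midpoint of the tallest bin -- depends on bins=.
counts, edges = np.histogram(age, bins=15)
i = counts.argmax()
print("modal bin midpoint:", (edges[i] + edges[i + 1]) / 2)

# (b) Density version: the age at which the estimated density peaks -- depends on the bandwidth.
kde = gaussian_kde(age)
grid = np.linspace(age.min(), age.max(), 500)
print("KDE mode:", grid[kde(grid).argmax()])
```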
These are the histograms of my data; this is a way of separating out the histograms, in other words looking at the histograms by a different column: the column is, say, Income, and the "by" variable says which group, here Gender, and they go side by side, because they essentially tell you what the difference between the distributions is. So what does this tell you? I could have plotted a distribution plot here as well, or other code could have, but this says that there is a little bit of a difference between the male and the female distributions, in shape as well as in the actual values, so to speak. And so, from a descriptive perspective, you can keep doing analysis of this kind to see whether there is a difference, not just in the Gender variable but in other variables as well: do people travel the same number of miles on different devices? A plot like this will tell you. To compare these, what we can do in the next residency, or you can do as an assignment after that, is ask: is there a statistical difference between the miles travelled on the different products? In other words, is there a difference between these three products in terms of how much usage they see? You can compare three distributions, and we will compare three distributions in time. Okay, now the last idea I want to talk about today. We've talked mostly about univariate analysis, which is one variable at a time; we saw a little bit of a plot with two, but now I want to talk about, shall we say, bivariate. Bivariate means two variables at a time; if you want to talk about many variables at a time, that's called multivariate, but before we get to many, let's get to two. We've looked at two notions: the notion of location, meaning that if there is a distribution, what is its middle, which can be the mean or the median; and the notion of variation, like the standard deviation, the range, and the interquartile range. But when I look at distributions of two variables, there's a little bit more to it: there is a relationship between the two variables that I want to be able to capture, a sense of relation, or a sense of correlation. How do I measure whether one variable is related to the other variable or not? Remember, I'm still describing; I'm still trying to find a number, like a mean, like a standard deviation; I'm trying to simply describe. If that number is this, correlation is high; if that number is that, correlation is low. What should that number be? There are many, many ways of defining such a number; here is one, and it is there in the book. I'm going to do this slightly abstractly. I've got points that look like this. For example, take one of these plots, say Miles and Income: the amount of exercise done and income. Each of these points has an x-coordinate and a y-coordinate, and these coordinates I'm calling (x1, y1), (x2, y2), (x3, y3), (x4, y4), all the way to (x180, y180); you understand, they are pairs of observations: this is, say, (x1, y1), this is (x2, y2), this is (x3, y3), and so on. Now, what is x-bar? It is (1/n)(x1 + ... + xn), and I'm going to write this as a summation simply because I'm going to have to write something a little more complicated: x-bar = (1/n) times the sum over i = 1 to n of xi. If you don't like the sigma notation, that's fine; you can write it with dots. Similarly, y-bar, the average of the y's, is (1/n) times the sum over i = 1 to n of yi. Okay, now I'm going to write something here: the sum over i of (xi - x-bar)(yi - y-bar). I'll tell you why I'm writing that down, but look at it. What is (xi - x-bar)? It's sort of the variation, or spread, of xi from its average; similarly (yi - y-bar). Now, when is the term (xi - x-bar)(yi - y-bar) positive? When both of these factors are positive, or both of them are negative.
Both positive means xi is above its average and yi is above its average; both negative means xi is below its average and yi is below its average. So imagine a data set that looks like this. Where are x-bar and y-bar? Somewhere in the middle. Here is one line, at x-bar, and here is another line, at y-bar. For all the points in this corner, xi is above its average and yi is above its average; for all the points in that corner, xi is below its average and yi is below its average; which means all these terms, or most of these terms, are going to be positive. I may still have a point, say this one, where the term is negative, but when my data looks like this, the sum will be positive. What happens when my data slopes the other way? Then yi is above its average while xi is below its average, so one of the factors is positive and one is negative, and the product is negative: when the data looks like that, the sum becomes negative. And if the data shows no pattern, the positives and the negatives cancel. This number being negative means that when one variable is high, the other is low. For example, take height and weight: the taller you are, the heavier you tend to be; that's the relationship between the two. My doctor says I'm about four or five kilos overweight; I say, no doctor, I'm about two inches too short; I don't have a weight problem, I have a height problem. Your interpretation. So if you want a statistic that captures whether your variables are moving together or in opposite directions, and opposite directions might be something like the weight of a car and the mileage of a car, where bigger cars have lower mileage, so a car of above-average weight probably has below-average mileage, then this particular measure does it. When I divide it by 1/(n - 1), to get an averaging effect, this thing, (1/(n - 1)) times the sum of (xi - x-bar)(yi - y-bar), is called the covariance of X and Y. Covariances are very heavily used in certain areas: they're used, for example, in dimension reduction and in principal components, which you'll see in time, and they're used in finance for portfolio management and things of that sort. So this is called the covariance of X and Y. What is the covariance of X and X? In place of Y I just put X, and it becomes (1/(n - 1)) times the sum of (xi - x-bar)(xi - x-bar), which is (1/(n - 1)) times the sum of (xi - x-bar) squared, which is the square of the standard deviation. This is sometimes called the variance of X, which is the same as the standard deviation of X squared. So that thing where, before, I took the square root: with the square root it's called the standard deviation; without the square root it's called the variance. By the way, it's all there in the book, so in case you didn't get it you can watch the video or read the book; these are very standard definitions. So the covariance is a measure of the nature of the relationship between X and Y. If the covariance is positive, they're moving in the same direction; if the covariance is negative, they're moving in opposite directions; if the covariance is zero, then many things can happen: either the data looks like this, with no relation, or maybe the data follows a curve.
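Here is a minimal sketch of the covariance just defined, checked against the library routine; Usage and Miles are the column names used in class, and mydata is assumed as before:

```python
import numpy as np

x = mydata["Usage"].to_numpy(dtype=float)
y = mydata["Miles"].to_numpy(dtype=float)
n = len(x)

# The board formula: (1 / (n - 1)) * sum((x_i - xbar) * (y_i - ybar))
cov_by_hand = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)
cov_numpy = np.cov(x, y)[0, 1]           # same number: off-diagonal of the 2x2 covariance matrix

# Cov(X, X) is just the variance, i.e. the standard deviation squared.
var_by_hand = np.sum((x - x.mean()) ** 2) / (n - 1)

print(cov_by_hand, cov_numpy, var_by_hand, x.var(ddof=1))
```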
Is that a normal distribution? No, this is not a distribution at all; it is, say, price and profit. What is the relationship between price and profit? This is the theoretical relationship between price and profit: on this side, as price goes up, your profit increases, because you're getting more money per product; and on this side, with an even higher price, fewer people buy your product, so your profit goes down. Now, for such a thing, the average is somewhere here, and the covariance, and so the correlation, also becomes zero. Another way to think of it is that it's positive on this side and negative on that side. So if this is zero, it doesn't mean that there is no relationship; it could mean that there is a complicated relationship, something that is positive on one side and negative on the other, and not that simple. I once remember doing an analysis about attrition, why people leave companies, and inside it there was a model where, for some reason, we were trying to understand where people stay: do they stay close to the office or do they stay far away from the office? Now, what do you think is the relationship between, say, experience and distance from home: the higher the experience, the closer the distance? Is experience tenure, or age? We had normalized for that; in other words, think of it as just experience, but we were looking at populations in which experience loosely translates to age. You're right, there could be people who join the company very old; I agree with that; but let's simplify life and say you have a data set in which it's just experience. And here's what we found: early on in their careers, people live close by; in the middle, they move away; and towards the end, they again come closer. Now, this was an observation; there's no science to this; it was simply seen in that particular company that this particular thing would happen. But remember, the point is not just to describe; the point is also to predict, to understand, and things like that. So we had to build a story around this when we went to the CMD and explained what we had done. You can make up a story around this, and the story we made up, correctly or incorrectly I don't know, is that in the beginning, to some extent, people have low dependencies: you're typically unmarried, a bachelor, and so on, and you're also ready to work a lot harder, so staying close by is convenient. You get a PG or you get an apartment and you stay close to work, because staying further away from work gets you no particular benefit; it's just inconvenient. But as you reach, in some way, midlife, so to speak, things become very complicated: there is a spouse, and he or she may have a job; there are kids, there are schools, there are the kinds of houses you can afford; so this becomes a more complicated optimization problem, and you may not be able to find a solution to that problem close to work. But people who survive even longer in the company earn enough to solve this problem through other means, and then they move back close to work again: they buy a villa close by, and so on; and now there are multiple cars to take people elsewhere, the kids are often grown up, so the number of dependencies is a lot less. You may agree with the story or you may disagree with it, but the point is that there's a complicated relationship you're trying to explain based on what the data is. Now, about the use of this number, I won't talk much more, but this number is a number whose sign, positive or negative, tells you about the nature of the relationship; and only the sign tells you that. The value is much harder to interpret, and the reason is that I can measure these things in whatever units I want.
Suppose I'm measuring, say, height and weight, and I measure height in centimeters and weight in kilograms; that's one answer. But I can measure height in feet and weight in pounds and get a different answer. I can even make this number much bigger by measuring height in millimeters and weight in milligrams; I don't know why I would do that, but I could. So this value is entirely dependent on the units of measurement, which makes it a problem. What statisticians do when they reach this situation is normalize: they make the units go away. The way the units go away is that you divide by the standard deviation of X and you divide by the standard deviation of Y. I could do this on the board without writing everything again, but I would suggest you write the whole formula out again. When I divide by the standard deviation of X and the standard deviation of Y, the units cancel out: each term now says how many standard deviations xi is above its average and how many standard deviations yi is above its average, in whatever their units were; the units have gone away. This number is called the correlation between X and Y, and the correlation between X and Y is a number between minus one and plus one. If the data lines up perfectly sloping upwards, it is plus one; if it lines up perfectly sloping downwards, it is minus one. The correlation is a measure of the relationship between two variables, measured in this very peculiar way. And it is not just a measure of the relationship: it is a measure of, I would say, the linear relationship between X and Y. A nonlinear relationship, or a strange relationship, could cancel out positives and negatives and end up with zero, or a low number. If the correlation is close to plus one, there is a strong positive relationship between the two; strong positive relationship means that if one of the variables is above average, then the other is also very likely to be above average, and vice versa. So what I can do is take my data and call corr as a function; this gives what is called the correlation matrix. Again, it will calculate it only for the columns that are numbers; in other words, if you give it a data frame and this doesn't happen, just make sure you take only the subset of it which has the numeric columns. Do not calculate correlations for things that aren't numbers; if they're not numbers, there are other ways to calculate association, and we'll see that later as well. Now, based on this, what do you see? First of all, the correlation between Age and Age is one. Why? Well, it's a 45-degree line; by definition it is one. Then you ask: this is a number that comes from one data set, with one kind of relationship; does that say anything about the practical world, so to speak? That's another way of saying what I've been saying all along: how does your data have anything to say about these relationships outside the data? The problem is maybe a little clearer here, but the problem exists for anything. So, for example, there is a correlation of about 0.28 between Education and Age. 0.28 means that there is a positive relationship, but not a very strong one. Where is that graph? This is Age, and Education was the second one, so this one: it shows that there is a weak positive relationship between them; when one goes up, the other does have a slight tendency to go up.
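Here is a minimal sketch of that correlation matrix, plus one entry rebuilt by hand so you can see it is just the covariance divided by the two standard deviations; the column names are the ones used in class, and mydata is assumed as before:

```python
import numpy as np

# Correlation matrix for the numeric columns only; the categorical ones are left out.
corr = mydata.select_dtypes(include="number").corr()
print(corr.round(2))

# One entry by hand: correlation = covariance / (sd of X * sd of Y).
x = mydata["Age"].to_numpy(dtype=float)
y = mydata["Education"].to_numpy(dtype=float)
r = np.cov(x, y)[0, 1] / (x.std(ddof=1) * y.std(ddof=1))
print(round(r, 2))            # matches corr.loc["Age", "Education"]
```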
Now, I should warn you that there is no sense of causation here, no sense that if X goes up then Y goes up because of it, because the correlation of X and Y is by definition the same as the correlation of Y and X. This is a symmetric concept; it makes no attempt at causation; that's a different thing altogether. So this is a positive relationship, a weakly positive relationship. Usage and Education is about 0.40; Income and Education is about 0.62; Miles and Usage is about 0.48; Miles and Fitness is about 0.78; let's see, Miles and Fitness, this one. Nothing in this data set has a negative correlation, but you might have seen one if some variable was negatively related to another. You're also looking for the low correlations, the ones close to zero: Age and Usage, for example, is a very low number, and Age and Miles as well; in other words, Age doesn't seem to have much to do with anything, shall we say, other than Income, and Age and Income don't really have much to do with your product per se. It will be useful when you do clustering later on; variables like that are useful for trying to segment: rich old people, always an interesting segment. Yes, the question: does zero correlation mean that there is no relationship between the variables? It could mean, for example, a plot that looks like this. Let's pick a variable: closest to zero is Age and Usage, so Age and Usage, or this one, Age and Miles, which is also quite low. There's no relationship between them in the sense that, while there probably is a relationship in the variability, in other words there's more variability here than there, if I want to draw a line through this, the line has neither a positive slope nor a negative slope. There's no idea that says that if one of them is above average, the other is also likely to be above average: no increasing, no decreasing. So low correlation means there is no sense of one being above average relating to the other being above average. Correlations are notoriously hard numbers to interpret, but they're also very useful summaries, particularly for large data. The question that he asked, does this make any sense in the real world, has two components to it. Component one: is the relationship between the two a linear concept? For example, I was talking about height and weight. What should the relationship between height and weight be? Linear? So if I plot height versus weight, I should see a straight line? Okay, she's going to say: not necessarily. Even after removing outliers; we're all outliers, aren't we? Okay, have any of you heard of a concept called the BMI, the body mass index? In this day and age you have all heard of the body mass index. What is the body mass index? It is weight divided by height squared; no, there's no age in it; weight divided by the square of height is what is called BMI, and this number, let's say, should be around 25 if you are healthy. Now, if BMI is weight by height squared, what does that tell you about the human body? If you are taller, how should your weight increase? There's a square in there. So roughly, let's say this is correct: it means that weight is approximately 25 times height squared if you're healthy. That means that if I see a bunch of very healthy people and I plot their weight against their height, I should see a curve like that, not a straight line.
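Here is a minimal sketch of the point that a perfectly real but nonlinear relationship can still show a correlation of essentially zero; the inverted parabola below is made-up data in the spirit of the price-and-profit picture:

```python
import numpy as np

x = np.linspace(-1, 1, 201)
y = 1 - x**2                   # up on one side, down on the other: a deterministic relationship

r = np.corrcoef(x, y)[0, 1]
print(round(r, 6))             # essentially 0, even though y is completely determined by x
```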
Why? She's asking: if I'm twice as tall, should I be twice as heavy? And yes, if you want to give it a fancy name, that curve is a parabola, undoubtedly true. Now, you could argue about why it is weight by height squared; that's a slightly different question: why isn't it, say, weight by height? So let's think about what weight by height means. Suppose you have a bottle of a certain height, and you put an identical bottle on top of it: what happens? The height doubles and the weight doubles. So for objects like that, by doubling the height you double the weight, and weight by height remains a constant; if you were a bottle, your BMI would be weight by height. Okay, now imagine that you are a football. If you're a football and your height, the diameter, doubled, how much heavier would you be, by a factor of what? The height has doubled, so the volume has gone up by what? Think 4/3 pi r cubed: it has gone up by a cubic factor, a factor of eight. So for a ball, the BMI should be weight by height cubed. Now, you're not growing like a cylinder and you're not growing like a football; you're growing like something between a cylinder and a football. We all are; maybe not you personally, which is why it looks like that. Babies grow like cylinders; we don't, because if we grew like cylinders we'd be a lot thinner. Think of yourself: imagine yourself when you were five or six and now double your height; you'd be a lot thinner. Similarly, you don't grow like a football either: imagine yourself at five or six and now imagine you grew in every dimension in the same way; you'd be a lot fatter than you are now. So this relationship depends on the empirical relationship between height and weight for the data that is available, which is of humans growing, and empirically people have discovered that this is the quantity that should be invariant. This is an example of what's called dimension reduction: two variables are being combined into one, which carries the information for you, but it relies on a nonlinear relationship between the two, and that is not going to be properly picked up by the correlation. So the correlation goes so far and no further; it is not one of the more analytically useful things on its own. Very often we do test a hypothesis, is the correlation zero versus is the correlation not zero, to ask whether the correlation is real or what is often called spurious, and in a later class, I think about two or three residencies from now, you'll spend some time on things like spurious correlations: in other words, I'm finding a relationship between X and Y, but is it real, or is it due to something else? Yes, it could serve as a starting point for thinking about causation; it gives you some summary of the data; but it is at best a descriptive measure of association. Sometimes people want to see it in another form: this is what's called a heat map.
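Here is a minimal seaborn sketch of that heat map, with the color scale pinned to the minus-one-to-plus-one range of a correlation; mydata is assumed as before:

```python
import seaborn as sns
import matplotlib.pyplot as plt

corr = mydata.select_dtypes(include="number").corr()
sns.heatmap(corr, vmin=-1, vmax=1, cmap="coolwarm", annot=True, fmt=".2f")
plt.show()
```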
A heat map gives you nice colors, and you can change those colors. There is an index telling you what each color means: minus one is pale blue, positive values shade towards the other end, and so on, so you get a visual sense of direction and strength. Frankly, this data set has too few variables for a heat map to be really necessary; where it shines is when you have lots and lots of variables. Suppose you are looking at a product catalogue with a few thousand products and you want the correlations between their sales across time and across geographies; a heat map display lets you spot the regions where products cluster together. We often do this in medicine through what are called microarrays: you look at data on thousands of genes, at the expression level of each, and ask which genes are expressed and which are not. If you are doing correlations on hundreds or thousands of variables, a nicely arranged heat map gives you a good picture of the data. So a heat map in this form is exactly the same as the correlation matrix, except that it adds colors to the numbers so that you are looking at a picture rather than reading figures. The traditional coding is that "hot" means related, so red is related and white is not, but there are many ways to change the color coding.

Now comes a tool that is, to some extent, descriptive, but it is also the first predictive tool you will see. I will not want to use it as a predictive tool yet, but let me show you the end product. I want to summarize the relationship between miles, usage and fitness, variables like these. Here is an equation: miles = −56.5 + 20 × usage + 27 × fitness, roughly. This is, shall we say, a targeted equation. As far as I am concerned today, it is a description of the data; but a description like this can later be used to predict how many miles my instrument will run. Think of what the instrument is: it is an engineer-designed machine, and I am trying to figure out how much it will be used, how many miles it will run. To do that, I will ask whether people consider themselves fit (fitness is self-rated on a one-to-five scale) and how frequently they plan to use it, and from that I want an equation for the number of miles. Is there a descriptive way of getting at such an equation?

An equation of this kind is what is called a linear regression model. This is your first model; this is going from descriptive to predictive. I have not run it yet; I am only saying what I am trying to do. (Maybe I should not have shown you the output first. It is always dangerous to show good people the output; never show the output first. Moral of the story.) So let me keep it deliberately vague: I want to fit an equation of the type y = β₀ + β₁x₁ + β₂x₂. Why do we have multiple variables? I could do it with one variable; maybe life is simpler with one variable.
In the code, as you will see, you can have one variable, two variables, three, any number; I think they have chosen two here. It reminds me of a professor of ours who was going from bivariate distributions to multivariate ones. He worked everything out for the bivariate case and then said, "now put 2 equal to n". We told him: sir, it does not work that way; if you do it for n, I can put n equal to 2, but if you want me to put 2 equal to n, which 2 do I replace with which n? The spirit here is the same: I will show it for two variables, and once you have seen it for two you can do it for one, for three, for any number; and we can try it with one as well if you want.

What am I putting where? Miles goes on the left, and usage and fitness go on the right (I forget which was entered where, but anyway). How am I using this to describe? Think of it this way: if I give you three variables, how do I describe the relationship between them? An equation of this form is one way of doing it. Does that mean that, in reality, there is a relationship between these three things? Not necessarily.

When you do linear regression, or any regression for that matter, there will be three uses of it. One: it is simply descriptive. It describes the nature of the relationship; it makes no causal inference, no sense that this causes that, and it gives you no predictive model; it simply describes, and we will discuss how it describes. Two: it predicts. Predicting means that when I put in a new value of x₁ and a new value of x₂, I get a value of y; I have looked at data from all of you, a new person walks into the room with her own x₁ and x₂, I put her numbers in and predict her y. Three: it prescribes. In order to reach a different, targeted y, what changes should I make in x₁ and x₂? What behavioural changes do I need to make in people to get them to use the equipment more? That is an even more complicated use of the same thing. The same model, the same principle, serves different purposes. Here I am using it simply as a description: not univariate, not bivariate, but trivariate, or multivariate. I could do that with the 3 by 3 correlation matrix as well, but I am choosing to do it this way.

So where are my variables? Miles is the average number of miles the customer expects to walk or run; usage is the average number of times the customer plans to use the machine; fitness is the self-rating. I am going to give the model usage and fitness and try to get an outcome for miles. The mechanics of how it does this is something I will not talk about too much. There is scikit-learn, one of the standard machine learning modules (learning here in the sense of supervised learning); you import linear_model and create the regression object, the slightly irritating, long-winded LinearRegression that comes from linear_model. You give it a y, the thing on the left-hand side of the equation, and an X, the thing on the right-hand side, and you call fit: a regression fit. That fits the model to X and y, and at this point it does not print anything on its own.
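Here is a minimal sketch of what that scikit-learn call might look like. The DataFrame, file name and column names are the same assumed ones as before, and the exact syntax used in class may have differed slightly.

```python
# A minimal sketch, assuming the same hypothetical DataFrame as before.
import pandas as pd
from sklearn import linear_model

mydata = pd.read_csv("CardioGoodFitness.csv")   # assumed file name
X = mydata[["Usage", "Fitness"]]                # right-hand side of the equation
y = mydata["Miles"]                             # left-hand side of the equation

regr = linear_model.LinearRegression()
regr.fit(X, y)                                  # the "regression fit" step

print(regr.coef_)        # slopes for usage and fitness
print(regr.intercept_)   # the intercept term
```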
Now I have my regression coefficients, roughly 20 and 27, and my intercept, about −56.54. So my miles predictor is miles = −56.54 + 20 × usage + 27 × fitness. How is this interpreted, from a purely descriptive perspective? It means, for example, that if usage stays the same and fitness goes up by one unit, then miles goes up by about 27; and if fitness stays the same and usage goes up by one unit, then miles goes up by about 20. What does the −56 mean? That if you do not use the machine at all and you have zero fitness, you have run minus 56 miles. That makes no sense, but neither does zero fitness. The model is not written in a way in which the intercept has to make sense, which is why, in the software, the intercept is not treated as a coefficient: it is part of the equation, but it is not one of the coefficients you interpret. This is pure description.

How does it do it, in case you are asking (and I hope you are not)? It looks at the data, each observation with its y and its x₁ and x₂, and for each one it takes the difference between the actual y and the prediction β₀ + β₁x₁ + β₂x₂, squares that difference, adds it up over all the observations, and then minimizes the total with respect to β₀, β₁ and β₂. So β₀, β₁ and β₂ are parameters estimated in such a way that the fitted plane is as close to the data as possible, close in the sense that the difference between the predicted and the actual is smallest. Do not worry, you will do this again; it is a very important idea in supervised learning, in prediction mode. In description mode, all that is necessary is that the equation describes the nature of the relationship between miles, usage and fitness. And in addition to the sizes of the numbers there is something else interesting here: the positive signs. As fitness goes up, miles goes up; as usage goes up, miles goes up.

So: summarize the relationship between three variables, treating one of them as an output. That is a descriptive use of linear regression. Is the description real? To be decided, to be confirmed, to be analysed, to be understood. You do not know; it is empirical, based on data; there is no logical reason why it must be the case. (Yes, you can do it with one variable: remove one of them, and instead of usage and fitness just keep one of them there.) Notice also that I have given you no idea whether the description is good. I have not told you whether this is a good model or a good equation, in the same way that I did not tell you whether the correlation was good or whether the mean was good; I have given no quality assessment to anything. How accurate is my mean, how good is my prediction: those are inference and prediction questions, and we cannot answer them before we get to probability. (A question from the middle of the room: the equation does not seem to make sense for certain values of fitness and usage.)
That is true, and it may well be so. As I said, I am not claiming that this is a good predictive model. What will happen later is that you will study a model like this and ask certain questions of it. For example: when I fit a model like this, is the coefficient in front of a given variable actually equal to zero? Because if it is actually zero, then there is no relationship between the output and that variable. So we ask for a statement of this kind: if y = β₀ + β₁x₁ + β₂x₂, is β₁ equal to zero? These are called hypotheses, because if β₁ is zero then that term should not be in the model, and that variable has no predictive power over the output. This is where the analytics part becomes interesting. But to answer the question I need a sense of how I would know whether the coefficient is zero, and for that I need a sense of the error around the number. This coefficient is not 20; it is 20 plus or minus something, in the same way that the mean age of 28 was not 28 but 28 plus or minus something. If that plus-or-minus includes zero, I cannot say the coefficient is different from zero; if it does not include zero, I can say I have a predictive model. That is coming. (A sketch of how software reports that plus-or-minus appears at the end of this passage.) For now, this is simply a way to describe data, and just as for means, correlations and standard deviations, linear regression will also get an inferential phase: the mean must acquire a plus-or-minus, the regression coefficient must face a test of zero or not zero. All of these estimates will be put to an inferential test, a predictive test: how useful are they for new data? Because just describing the current data is not going to be good enough for me.

So I am writing an equation like this: miles = β₀ + β₁ × usage + β₂ × fitness. The code now tells me what the numbers are: β₀ is about −56, β₁ is about +20, and β₂ is about +27. That is it. You can call β₀ the intercept, or whatever your term for it is. (Can you add more variables? Yes: in X, just put in another variable, comma, another variable; it can be any number; try it out.)

I will not plot this. If I could plot it I would, but remember there are three variables. With two variables I can plot things, and I can also look at pairs of variables and see correlations; with three variables, plotting becomes difficult, and with four, even more difficult. You can still do it (I think you have Tableau or something similar in your curriculum, I am not sure), and visualization techniques can help, but if you have ten variables, plotting is not the way. So how do I express the relationship between ten variables? By equations like this.

And what does the intercept mean here? It is the value of the output when both of these variables are zero; but, as we said, zero usage makes no sense and zero fitness makes no sense. The equation is simply a line put through the data.
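The plus-or-minus around each coefficient is exactly what the later inference sessions will formalize. Purely for reference, here is a sketch of how one might see those standard errors from a library such as statsmodels; the lecture itself has not introduced this library, so treat the names and the call as assumptions.

```python
# A minimal sketch, assuming the same hypothetical DataFrame as before.
# statsmodels reports each coefficient together with a standard error and a
# test of "is this coefficient zero?", which is the inference that comes later.
import pandas as pd
import statsmodels.api as sm

mydata = pd.read_csv("CardioGoodFitness.csv")      # assumed file name
X = sm.add_constant(mydata[["Usage", "Fitness"]])  # adds the intercept column
y = mydata["Miles"]

result = sm.OLS(y, X).fit()
print(result.summary())   # coefficients, standard errors, t-tests, p-values
```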
If I have data that looks like this, for example, all the fit does is put a straight line through it. What is the intercept? It is where the line cuts the axis. Does that make any sense? Maybe, maybe not. This region is where the data makes sense, but the equation is written so that the line is extended back until it cuts the axis. If I find a relationship between height and weight and I write the equation as weight = β₀ + β₁ × height, then β₀ is the weight of someone whose height is zero, which makes no sense. But giving the equation the freedom of an intercept allows me to move the line up and down and get a much better fit; it allows me extra flexibility. Do not worry; you will have plenty of experience fitting good models. My purpose here is just to show you regression as a way of describing three variables in one shot; I am not building models yet.

(If I do it for just two of them, say miles and usage? Then it is an equation of exactly this kind, with one variable on the right. With a single variable and nothing else you would not do this, because there is nothing to model; there is no equation inside one variable. And what is the criterion for doing this? Remember, my purpose is not to use this to select which variables to model. When I calculate means, standard deviations and correlations, I am not using them to select anything; I am not saying I will measure your mean because you are important, or your standard deviation because it is low. I am using these as tools to summarize three or four variables. Which variables to use where, in predictive mode, you can get at by looking for high correlations, and there are many other techniques you will learn for that.) So, just as the mean is a way to do analytics, and the correlation, and the standard deviation, so is this.

To recap: what we did yesterday was essentially descriptive statistics, which is taking data and simply describing it, with the later purpose of visualizing it, writing a report, or using it for inference and prediction in later courses or later applications. It sits alongside predictive analytics and then prescriptive analytics. Describing is simply the task of summarizing a given set of numbers; you will do sessions on visualization in due course; prediction is the task, often a machine learning or data mining professional's requirement, of saying what happens if something changes. I should also make a comment about two English words that mean more or less the same thing: forecasting and prediction. In the machine learning world these words are used a little differently. Forecasting is usually in the context of time: something has happened in the past, what will happen in the future; I give you this week, you tell me next week. Prediction is usually used without any sense of time: I give you an x, you give me a y; one variable in, another variable out. So predictive analytics does not necessarily forecast anything, despite the fact that in everyday language prediction is about the future. The words are used in slightly different contexts; it is a little like price and worth meaning more or less the same thing while priceless and worthless mean very different things.

In descriptive statistics, then, we looked at certain ways of doing things. For example, we looked at what is called univariate data.
Univariate means one variable. For univariate distributions we saw certain kinds of descriptive statistics. Some of them were about location, meaning where the distribution sits: things like means and medians. Some were about variation, where we talked about the standard deviation (and will talk more today), the range, and the interquartile range, along with related quantities such as the quartiles, the upper quartile and the lower quartile. These are parameters used to convey a message about what the data is like. A five-point summary, for example, reports the minimum, the 25% point, the 50% point, the 75% point and the maximum, irrespective of the number of data points: you could have ten of them, a hundred, a million, a billion, and it is still five numbers. Sometimes those five numbers tell you a lot: about location, about spread, about skewness, whether the distribution is tilted towards one side, whether there is more data on one side than the other, how the data spreads out towards the tails. There are plots associated with all of this as well; we will come to the plots in a moment. (A short code sketch of this summary appears at the end of this recap.)

Then, towards the end, we went to bivariate data, meaning two variables, where we did not spend a lot of time. We talked about covariance and correlation. Covariance is a sense of the variability of two variables together; its univariate version is the variance, which is the square of the standard deviation. A scaled version of the covariance is the correlation. If the correlation is close to plus one, there is a strong positive relationship between the variables: if one goes up, the other also tends to go up, and if one goes down, the other goes down; negative means the opposite, as one goes up the other goes down. Correlation is not to be confused with causation: nothing in the descriptive world says that this causes that. There is no science to it yet; it is simple description. The science, the logic, and the use of it for inference and for business reasoning will come a little later; for now we are simply describing.

Then we took an even briefer, and perhaps even more confusing, look at our first multivariate summary, the idea of a linear regression: an equation of the form y = β₀ + β₁x₁ + ... + βₚxₚ, in which one variable is written as an equation of the others. This is mainly done to describe the nature of the relationship between the variables. It can be used for prediction, and it can be used for prescription if you want, but that is not our purpose here; our purpose is simply to describe a relationship. Why is that useful? Because if you have three variables, four variables, ten variables, you need some mechanism to say how those variables are connected.
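Picking up the five-point summary from this recap: pandas produces essentially that summary, plus the count, mean and standard deviation, in one call. A minimal sketch, with the same assumed DataFrame as before.

```python
# A minimal sketch, assuming the same hypothetical DataFrame as before.
import pandas as pd

mydata = pd.read_csv("CardioGoodFitness.csv")   # assumed file name

# count, mean, std, min, 25%, 50% (median), 75%, max for each numeric column:
# the five-point summary plus a few extras, whatever the number of rows.
print(mydata.describe())

# The same idea for a single variable, for example age.
print(mydata["Age"].describe())
```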
How do you describe ten things at a time? There are graphics out there, famous graphics in history, in which many variables are represented on a single plot or visualization. We looked at certain kinds of plots: histograms, box plots, and pair plots, which are essentially scatter plots. These are for the human eye, and they have their limitations, because we can only see data in a certain way; we can see up to three dimensions, maybe. For those of you interested in such things, or anyone in the graphics world, a lot of effort goes into how to make people see more. How many dimensions can you actually plot? Python itself is good at this, and there are other devices: one variable on the x axis, one on the y axis, another dimension can be the size of the point (a larger third variable makes the point bigger), another can be a color, like a heat map (a fourth variable low is blue, high is red), another can be the shape (lower values as circles, higher values more pointed). So there are many ways in which summarization can be done visually, and if you do visualization you will see them. But if you want to do it as a number, then an equation like the one we wrote is often a good representation.

How does one get at these β₁ to βₚ? I explained it very briefly: you form the equation and you take those values of β₀, β₁, ..., βₚ that are closest, in some sense, to the data. If I draw a picture of, say, y against x and ask for a line, which line should I take? Take the distance from the line to the points and make that distance the smallest: a line that goes through the data with the smallest distance to the points. How is "small" measured? By the square of those distances, because a distance above the line and a distance below the line should count the same. So if my line is β₀ + β₁x and my observations are the yᵢ, I look at the sum over my n points of (yᵢ − β₀ − β₁xᵢ)², which is the sum of squared distances from the line, and I minimize it with respect to β₀ and β₁. That is how I get the numbers; if you are simply interested in what Python or R does, the program will just give you the numbers. (What do you get out of it? The values of β₀ and β₁: the values that make that sum the smallest. For different values of β₀ and β₁ the distance is different; different lines sit at different distances from the data; the line I take is the one for which the distance is smallest. And where are the points? The points are the data; the line is the thing I am trying to find.)

So: here is a point, here is a point, and so on; let us say five points. I want to describe the relationship between these five points, and to do that I need to find a line that goes through them. I want to write an equation of the type y = a + bx (let me switch from the betas to a and b). There are many candidate lines: this one, this one, this one. Which line will I use to represent the relationship between y and x?
I need a criterion. So I take a candidate line and ask how good it is at describing the data. When is it good at describing the data? When it passes close to the points, because that is its purpose: I want to be able to say that this line, without any data points attached, is a description of the data. But how do I get the position of the line? I need the values of a and b. For every such line, that is, for every a and b, I find the distance of the points from the line. How many points do I have? Five, so I have five distances. The points are (x₁, y₁), (x₂, y₂), (x₃, y₃), (x₄, y₄) and (x₅, y₅). How far is the point (x₁, y₁) from the line? The point on the line at x₁ is a + b·x₁, so the distance is y₁ minus (a + b·x₁). I could stop there, but if I did, a point above the line would give a positive distance and a point below a negative one, and they would cancel, neutralize each other. (What is a? a is not a point; in the equation of the line y = a + bx, a is the intercept and b is the slope, and a and b are what I am trying to find.) So I square it: (y₁ − a − b·x₁)², and for the second point (y₂ − a − b·x₂)², and so on, five times.

For every candidate line you get this number (take a square root afterwards if you like): the sum of squared distances of the line from the data. It tells you how far the line is from the data; the larger it is, the further the line is from the data, and the smaller it is, the closer. If every point lies exactly on the line, that is, if the data itself is a straight line, the number is zero. So I have formed this quantity, and now I find the values of a and b for which it is smallest. Every choice of a and b gives a value; another line gives another value; which a and b do I pick? The pair for which this distance to the data becomes the smallest.

(Could the algorithm fail, computationally, to find the values that minimize this?) Let me finish the idea first: choose a and b to minimize this, and that pair is what the software gives you. This is called a linear regression. And the answer to the question is that you do get a unique solution; this, the story goes, is why Gauss was so successful where Laplace was not. Because of the squaring this is a convex problem, a convex optimization; if you put absolute values there instead, there is a possibility that you will not get a single answer, but the square, with its nice bowl-shaped curve, guarantees a unique solution.
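To make the "which line is closest" criterion concrete, here is a small sketch that computes the sum of squared distances for two candidate lines on a handful of made-up points. The numbers are invented purely for illustration.

```python
# A minimal sketch with invented data, purely to illustrate the criterion.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # roughly y = 2x, with some noise

def sum_of_squares(a, b):
    """Sum of squared vertical distances of the points from the line y = a + b*x."""
    return np.sum((y - (a + b * x)) ** 2)

# Every (a, b) gets a number; the line with the smaller number describes the data better.
print(sum_of_squares(a=0.0, b=2.0))   # close to the points, so small
print(sum_of_squares(a=5.0, b=0.5))   # far from the points, so large
```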
(So how many times does the system try; once for each data point?) No, the system does not do it that way. What it does is differentiate this quantity with respect to a and set that to zero, differentiate with respect to b and set that to zero, and solve those two equations. When the problem becomes very high-dimensional, this differentiating and solving becomes a very interesting problem in mathematics and numerical analysis; to do it you typically need linear algebra. That is why, in courses such as this and in machine learning books, you will often find chapters on optimization and linear algebra at the beginning: to represent the problem you usually need a matrix representation, and to get a good learned solution you need an optimization. Most machine learning algorithms are built that way.

Because, as I was saying yesterday, you are going to tell something what to do; you are going to tell a car to behave itself on the road. This morning I read that BMW and Daimler are setting up something like a one billion euro R&D operation somewhere in Europe for self-driving cars. Two different industries are trying to build cars that do not need people: the automobile industry, and the ride-hailing industry, companies like Uber and Lyft and Ola. So the car is going to have to be told when to go and when to stop. But how does the car know that it has a good rule? How does it know that it has learned, and what is good learning as opposed to bad learning, enough learning as opposed to not enough? A computer is stupid; all a computer can do is store a lot of data and do calculations quickly. Computers are not intelligent. To make the computer intelligent you have to give it an intelligent objective: run your algorithm so that this quantity becomes as large as it can be, or as small as it can be, which is an optimization problem. So what machine learning algorithms almost invariably do is say: here is an input and here is an output; give me an algorithm that, based on the input, comes as close as possible to the output.

Take text recognition, or computer vision. Say the computer is reading something in handwriting and trying to recognize it as an English, or Kannada, or Hindi phrase. I write something in my horrible handwriting, and the camera has to recognize what I wrote and transcribe it into something you can read. How does it know it has done a good job? It needs to compare "this is what I think the word is" with "this is the correct word": if they are close, it is good, and if not, it is not. And this has to work not just for one word but for thousands of words, so I must be close on thousands of words at the same time; I need to measure the distance between my prediction and the actuality over many, many data points. So all these algorithms take the prediction, compare it with the actuality, find a distance between them, and minimize the totality of that distance; an algorithm that minimizes it has learned well. They all do something like our sum of squares,
where the fitted line is the prediction, the data points are the actual, and a and b are the parameters of the prediction; in other words, find a prediction that is as close as possible to the actual. This recipe has become very popular; it is probably the single most popular fitting criterion out there. It is called least squares: "squares" because of the squaring, "least" because you are minimizing. And it has nothing to do with the particular algorithm: the model itself can be a neural network, a support vector machine, a random forest, association rules, any logic you like. The question is always the same: if you give me the program, how do I know whether the program is good? So you give it what is called training data, which means you tell it what the right answer is.

(So this is the starting point: we compare the prediction with the actual and get the result?) Yes. The line is the prediction; the data points are the actuality, if you want to think of it that way. But there is a problem: these data points are not really the actuality either. The actuality is going to come in the future. This is a training set, the data given to the algorithm to train it, but it is not the data the algorithm will actually run on. The car will run on the road; it will see its own data points, its people and other cars and the cows and whatever, for the first time; it will not have seen that data before, but it will still need to know what to do.

So what do you do? You train the algorithm. Training the algorithm means you give it data for which the car is told what to do; in other words, you give it what is called ground truth. You give it the y and say: here is the situation, please do the right thing. Here is a person crossing the road: please stop. Here is another person crossing the road, but very far away: calculate the distance, compare it with your speed, and decide what to do. The person may be close enough to see but far enough away that you do not stop. If you are driving, it is quite possible that you see someone crossing the road close to a hundred metres ahead and you do not slow down, because you do the calculation: I have a speed, the person crossing also has a speed, and by the time I get there they will have gone. (Do not do this at a level crossing. But we all do it all the time when crossing the road on foot: there is a car coming and I still cross, because I know I will be across before it arrives.) The car needs to be taught to make these judgements. So it is given data like this and told: on the training data, get as close as you possibly can. And then it is given what is called test data, and told: now here is new data; tell me how well you do on data you have not seen.

So suppose you were given a problem like this (I am not really supposed to talk about this; your ML instructors are): I give you a data set, and I tell you that your performance will be judged not on this data set
but on another data set that I am not giving you. What will you do? How will you make your program generalize? The usual way it is done is rather interesting. You say: you want me to predict data I have not seen; let me see whether I can rehearse that. You take all the data that is available, keep a certain part of it aside (you have it, but you do not use it), and build your algorithm on the remaining part. Then you test it on the part you kept aside, the kind of data you yourself hold but that your algorithm has never seen. That held-out part is called validation data, and if your algorithm works on your own held-out data, you are more hopeful that it will work on somebody else's new data. This is called validation, and the whole cycle is often called train, validate, test. You will do this in your hackathons. (A short sketch of this split follows.)

But to do any of this, the algorithm needs to know how good it is, and it needs measures. There are measures other than this one. If you are classifying, good or bad, positive sentiment on a tweet or negative, there are no numbers involved, so you do not need squared distance. All you need is: are you correct or incorrect? If correct, give yourself a distance of zero; if incorrect, a distance of one, in other words you made one error; and you just count how many mistakes you made. But when you are estimating a number, like miles, there is no "mistake" as such; you are either close or far, so you need a measure of how close, and the sum of squares is such a measure.

So this descriptive device is also used as the criterion for building predictive models. The equation itself can be a predictive model, but it is very rarely good enough on its own; too few things in the world are that simple.
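Here is a minimal sketch of the hold-out idea just described, using scikit-learn's splitter. The proportions, the random seed, and the column names are assumptions for illustration, not something prescribed in the lecture.

```python
# A minimal sketch, assuming the same hypothetical DataFrame as before.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import linear_model

mydata = pd.read_csv("CardioGoodFitness.csv")   # assumed file name
X = mydata[["Usage", "Fitness"]]
y = mydata["Miles"]

# Keep 30% of the data aside; build the model only on the remaining 70%.
X_train, X_held_out, y_train, y_held_out = train_test_split(
    X, y, test_size=0.3, random_state=0
)

regr = linear_model.LinearRegression().fit(X_train, y_train)

# How well does the model do on data it has not seen? (R-squared, as one measure.)
print(regr.score(X_held_out, y_held_out))
```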
Are things in the world ever that simple? Sometimes, but as we discussed yesterday, even things like height and weight are not that simple; there are complexities. You can have theories. Take a savings rate, the proportion of your money that you save. If there really were a fixed savings rate, then taking your income data and your consumption data, month by month, should give a straight line, because you save the same proportion every month. But it does not. Go home and work out, fairly precisely, what your income was each month, from salary and other sources, and plot, again as precisely as you can, how much your household spent that month: the relationship will probably be increasing, but it is very unlikely to be a straight line. In certain problems you may be chasing a law of physics, but the law of physics may hold for gravity and not for anything else. I remember trying to apply this when one-day cricket became popular, when I was in school or thereabouts. One calculation was about how to figure out whether a team is doing well, how a chase is going. One possibility is simply to track the score. Another is to say: if you know how many runs you are going to get, you tend to begin slowly, protecting wickets, and then accelerate. So you build a model for that: assume the team accelerates constantly, meaning that with every later over the run rate keeps increasing steadily. If the run rate increases steadily, when does the team reach the halfway point? That is the same question as: if I drop a ball, how long does it take to reach the halfway point? There is a square root term in it, and the answer is about 50 divided by the square root of two, around the 35th over or so. So the logic was: if you have reached the halfway mark by roughly that over, you are on track; if not, you need to accelerate a little faster. That is using a physical law to try to predict something that is not governed by a physical law; the laws of physics do not apply to cricket, at least not in this way. Such laws will get you somewhere, like a straight line does, but they are approximations, and when you use them for an actual prediction you build better versions. The same argument holds for means, standard deviations and many such things: if there is a specific problem you need to solve, you may want a better estimate for it.

(Someone was asking how the a and b are actually found.) There are many ways to do that. One is brute force: for different values of a and b you compute that number and look for the smallest. If you want to do it the hard way, you still can (I should perhaps not be talking about this), and it ends up like this. I am minimizing the sum over the data of (yᵢ − a − b·xᵢ)². Call it L(a, b). Set the derivative of L with respect to a equal to zero, and the derivative with respect to b equal to zero, and solve; that gives two interesting equations, and the answers are: the estimate of b is

b̂ = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)²,

and the estimate of a is â = ȳ − b̂·x̄. If you want formulas, those are your formulas. Minimizing something is the same as setting its derivative to zero; that is also true of maximizing, but this is where convex optimization comes in: this function has a minimum and no maximum, so by setting the derivative to zero I arrive at the minimum. (The minimum is with respect to a and b; x and y are fixed: the data is fixed, the parameters vary.) The slope b̂ can also be written as the covariance of X and Y divided by the variance of X, so to calculate it for two variables you compute the covariance and divide by the variance. And â = ȳ − b̂·x̄ means that the fitted line passes through the point (x̄, ȳ), through the middle of the data. (A short code sketch of these formulas follows.)
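A small sketch of those two formulas in code, on the same invented points used earlier; numpy's polyfit appears only as a cross-check and is an assumption about tooling, not something from the lecture.

```python
# A minimal sketch with invented data: the closed-form least-squares formulas.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_bar, y_bar = x.mean(), y.mean()

# b_hat = sum((x_i - x_bar) * (y_i - y_bar)) / sum((x_i - x_bar) ** 2)
b_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
# a_hat = y_bar - b_hat * x_bar, so the fitted line passes through (x_bar, y_bar)
a_hat = y_bar - b_hat * x_bar
print(a_hat, b_hat)

# Cross-check against a library fit; polyfit returns [slope, intercept].
print(np.polyfit(x, y, deg=1))
```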
(How will it actually do the minimization?) For different values of a and b, this distance takes different values: one pair of values gives one distance, another pair gives another, and the pair I pick is the one for which the distance is smallest. To get the numbers you do not need to do any of this yourself; but if you want to try it, open yesterday's code and you can do it right now.

(That is it? We just stop there?) That is like saying "I have the mean, then what?" One use of it is to predict; another is to prescribe; there are many uses; and a third is to do nothing more than use it to visualize or to summarize the relationship between two variables. We do this all the time. For example, how do you measure how price-sensitive your product is? You are trying to change the price of your product, perhaps to increase profitability, and people in marketing want to understand how sensitive sales are to price. One particular measure is what is called the elasticity of demand: if my price changes by one percent, by what percentage do my sales change? If my price goes down I expect my demand to go up, but by how much? (There are certain assumptions here; for example, it is assumed that the same number applies whether you increase the price or decrease it.) And what is the elasticity of demand, really? It is essentially a slope. If I put demand, say sales, on one axis and price on the other, I get a negative slope, and the slope of the right version of that relationship is the elasticity. So very often you fit equations like these simply to get at one number that has a certain meaning for you: the slope of a linear regression of log sales on log price is the elasticity of demand for that product. I say log sales and log price because elasticity is defined in terms of percentages, a percentage change in price against a percentage change in sales. If I do not work in percentages there is a problem: my measure depends on my units. Is it a thousand units per rupee? It depends on what I am selling and in what currency, and that is not a good measure. So I measure it in percentages, and measuring in percentages means working on the log scale. There are many models like this where the equation is built simply to describe a parameter, something that tells you a little bit about the market, like an elasticity of demand; you are not using it to predict anything, you are simply using it as a descriptor. You might then say: this is an inelastic product, meaning that if you change its price there will not be too much change in its demand. The classic example is salt; change the price of salt a little, at least domestic salt, and there will not be too much change in demand. Other products are highly elastic: change the price a little and the demand changes a lot. (A short sketch of reading an elasticity off a regression follows.)
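A minimal sketch of reading an elasticity off a log-log regression. The price and sales numbers are invented purely for illustration, and the use of numpy's polyfit is an assumption about tooling.

```python
# A minimal sketch with invented price and sales data, purely for illustration.
import numpy as np

price = np.array([10.0, 12.0, 15.0, 18.0, 22.0])
sales = np.array([950.0, 800.0, 640.0, 530.0, 430.0])

# Regress log(sales) on log(price); the slope is the (constant) elasticity.
slope, intercept = np.polyfit(np.log(price), np.log(sales), deg=1)
print(slope)   # about -1 here: a 1% price increase goes with roughly a 1% drop in sales
```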
Marketing people are very sensitive to this idea: is my demand elastic or is it inelastic? If I want my prices to go up, I want demand to be inelastic, because I do not want my demand to fall; if I am pulling my prices down, I want demand to be elastic, because I want people to respond to the lower price by buying more. Marketing analytics is very concerned with things like this, so sometimes an equation of this kind is built just to describe one number.

Right. Now let us go back to the code, and since the by-hand version is easiest with two variables, let me change the model to just miles and usage; I will remove fitness. (A parenthesis may need fixing; nothing else about this data set has changed, and this line here is a comment.) What do I get for the coefficients? About 36 for the slope and an intercept of about minus 22, so the equation based on this is roughly miles = −22 + 36.2 × usage.

All right, now let us try to do this manually. To do it manually I need each of the pieces; for example, I need the covariance between miles and usage. How do I get that? Not from a new sample; the data is right here, so I can calculate things on it. I can take the mean, for example. I can take the standard deviation (the syntax is std). And the variance, which is the square of the standard deviation. Now I want the covariance of the two. Remember that I had a correlation function earlier, which gave the correlation matrix; in exactly the same way there is a covariance function, and mydata.cov gives the covariance matrix.

Now, what is the value of b according to my formula? The covariance of which variables? My two variables are miles and usage, and that entry of the matrix is about 42.67. Divided by what? The variance of usage. Why usage and not miles? Because usage is my x and miles is my y: the formula is the covariance of x and y divided by the variance of x. And the variance of usage is also sitting in the matrix; it is the diagonal element for usage, about 1.17 or so. So my slope is roughly 42.67 divided by 1.17-something,
which works out to about 36.3, and that is my slope b, matching the regression output. How do I get the intercept? The mean of y minus the slope times the mean of x. mydata.mean gives the means of all the columns; the one I want is the mean of miles, about 103. So the intercept is roughly 103 minus 36.3 times 3.45 or so (3.45 being the mean of usage), which comes out at about minus 22, the same intercept the regression gave. So, if you want to, you can do this from first principles using that formula. I am not asking you to; you can get it just by running the linear regression. But you can also check the units, and that is worth doing. What is the unit of b? The unit of the covariance is the unit of X times the unit of Y (remember the definition of covariance: it is built from a product of an x-deviation and a y-deviation), and the unit of the variance is the unit of X squared. In the ratio, X cancels once and you are left with units of Y per unit of X, which is exactly what b should be: miles per unit of usage. So 36 means 36 miles per usage unit, and the intercept is in miles. The units all make sense.

Because b is in some units, we will run into difficulty when we use it in predictive models. Suppose I want to figure out whether this number is equal to zero or not, because if it is zero, statistically speaking, then miles does not depend on usage. Since this is not a dimensionless number, I can make it anything I want simply by changing the units, and that makes the statistics a little hard: I cannot simply look at the number and say whether it is high or low. I can make your height a million by using a small enough unit; a raw measurement, taken by itself, gives you no idea of the meaning of its magnitude, and the same argument works for any of these parameters. So when we do hypothesis testing we will need to normalize all these numbers by something, and that something is typically a standard deviation, a standard error. We will do that later.

So let us end this here. The purpose was just to show you what that regression line is. There are similar formulas as the number of dimensions increases, but they get harder to show by hand: with one predictor you can do it manually, with two predictors the formula is already a lot more complicated (which is why I switched to a single predictor here), and nobody uses the explicit formula for, say, ten variables. (The sketch below repeats this two-variable manual calculation in code.)
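Here is the same manual calculation as a short sketch: covariance over variance for the slope, and the mean-based formula for the intercept, with the usual caveat that the file and column names are assumptions.

```python
# A minimal sketch, assuming the same hypothetical DataFrame as before.
import pandas as pd

mydata = pd.read_csv("CardioGoodFitness.csv")   # assumed file name

cov_matrix = mydata[["Miles", "Usage"]].cov()                  # covariance matrix
b = cov_matrix.loc["Miles", "Usage"] / mydata["Usage"].var()   # cov(x, y) / var(x)
a = mydata["Miles"].mean() - b * mydata["Usage"].mean()        # y_bar - b * x_bar

print(b)   # roughly 36 on the lecture's data
print(a)   # roughly -22
```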
Now what I want to do is talk a little bit about probability; the slide deck should be with you. The reason you have to cope with the idea of probability is to be able to cope with uncertainty. What is the uncertainty we are talking about? When you observe something, you are not entirely sure what the value is, not from a measurement perspective, but because you do not know what the corresponding population number is; you do not know the truth of the number. Another sample would give you another number. There is uncertainty, and that uncertainty is usually captured by a probability.

Here is an interesting question: what is the probability that a man lives for a thousand years, the empirical probability? What does empirical probability mean? You ask: has anyone lived for a thousand years? If the answer is no, you say the probability is zero; if anyone has, you ask how many. So one interpretation of probability is simply: see how often it happened. There is a criticism of this point of view from one of our teachers, Professor Basu, who many years ago would say: if you want to find the probability that a little girl is going to fall into the river, how many little girls do you want to walk next to the river to find out? In other words, not all probabilities can be thought of as "let me just see how often it happened"; you need a little more than that.

Some words are useful to know. Probability refers to the chance or likelihood of a particular event taking place. An event is an outcome of an experiment. An experiment is a process performed to understand and observe probable outcomes, and the set of all outcomes of an experiment is called the sample space. All of that is correct and easy to understand, with one problem: who is performing this experiment? When you use probability you are in one of two modes. In one mode, you are performing the experiment: you are running a marketing campaign, or designing a portfolio, or manufacturing a product, or recruiting people, or testing a piece of code. In the other mode, somebody else is doing the experiment and you are simply observing: the customer buys or does not buy, the product fails in the field or does not, the portfolio makes money or does not, the person you hired stays on or quits. It is not your experiment; you are simply observing its outcome. So sometimes you get to do the experiment and sometimes you do not. We used to call these experimental studies and observational studies. An experimental study is one in which you begin by designing the experiment, and you have a handle on how much data you will collect; an observational study is one where you just watch and see what data comes in. In your careers you will mostly be working with observational studies, because of the nature of data today: there is simply a lot more being generated without anyone asking for it. In certain very peculiar situations you do run experimental studies. Nuclear explosions, for example: why do countries want to test nuclear devices? Primarily to collect data, to figure out whether the thing works and how, because otherwise it is all computer simulation and you have no idea.

I remember running into trouble with my engineering friends on this, working on the design of a fairly large aircraft engine. There was a question about the thrust, or the efficiency, of the engine, and I rather stupidly made the observation: why don't we test it? They looked at me this way and that, as if wondering how they were going to
explain this to the idiot. Then, patiently and very kindly, one of them, a courtly gentleman older than me, took on the responsibility of telling me: "Where will it go?" His point was that if this engine fires up, it is going to want to move; I cannot easily do a full-blown test of a jet engine, because if I do start it, I have to give it enough room to go somewhere. Where do you want it to go? So you will not be in a position to experiment very often. When we say "experiment", it is sometimes your experiment and sometimes it is not. In rare situations you will be in an experimental setting, like A/B testing on websites; that is a common job. Marketing people are often asked to design websites and to say which is the better website, so you do an A/B test: you design a website of type A and one of type B (maybe one is the old website), you let them loose, and you find out how people react to the different versions.

Now, here is something a little tricky, and I want you to think about it, although we will not spend a lot of time on it. In a manufacturing unit, three parts of an assembly are selected, and we observe whether they are defective or not defective. Determine the sample space, and the event of getting at least two defective parts. What is the question asking? Here is a situation: there are three parts, and for each of the three I am interested in whether it is good or bad. The question says: describe for me all the possibilities, which is what the sample space is. Do not talk about probabilities yet; just the possibilities; we will talk about the probabilities later. So, what could happen? One way of doing it is: all three are defective, two of them are defective, one of them is defective, or none of them is defective. If you do it this way, the sample space has four objects in it, and that is one way of describing it. (Someone suggests "one minus" something; we have not got to probabilities yet, but yes, once we do, it will come to that.) And the event of getting at least two defective parts means two defective or three defective.

This is one way of describing the sample space, and you are not wrong, but it is not the way the sample space is typically described, and there is a problem. Suppose I describe it this way: my sample space is zero defectives, one defective, two defectives, three defectives, and the event "at least two defectives" sits inside it. I will eventually have to get around to calculating probabilities, say the probability of that event. How will I do that calculation? The way probability calculations are done is that you split the event up: you find the probability as a sum over individual outcomes, individual events; you break it into components and add them up. So I will need, for instance, the probability of two defectives and the probability of three defectives. Let us say I want the chance of exactly two defectives.
will I find that? How will I find the probability of there being two defects in this situation? You say "parts 1 and 2" — but notice, you have not allowed yourself to think in terms of parts 1, 2 and 3. There is no part 1, 2 and 3 in this sample space; there is only zero defective, one defective, two defective or three defective. Your sample space has lost all identity as to which one is defective. So do you want to revise your opinion of what the sample space is? What do you want to define it as now? Correct: you can define your sample space not in terms of the count of defectives but in terms of whether each individual item is defective or not. In other words, you write something like G D G, meaning the first is good, the second is defective and the third is good. If you do it this way, how many elements are there in the sample space? Eight, because each of the three parts can be good or defective. Those are your eight possibilities. Now, using these outcomes, you can add things up. If I am looking at, say, two defectives, which ones are relevant? Three of the eight outcomes have exactly two defectives in them; one has no defectives, three have one defective, three have two defectives, and one has all three defective. So this is another way of writing the sample space, and it makes the calculation a little easier — and your objective is to make the calculation a little easier. To get the calculation out of the way, suppose the chance of a single part being defective is 20%? No, that seems too high — you won't survive as a manufacturer. Say 10%: one in ten is defective. Now, what is the problem? The event of getting at least two defective parts. Let's start with the probability of exactly two defectives; it is a good example to work out, and we will understand many things as we do it. The chance of a single defect is 10%, and I am asking for the chance that, of the three parts drawn, two are defective. This needs a little bit of patient work. In how many ways can two defectives happen? We just saw it: the probability of two defectives is the probability of GDD or DGD or DDG — there are only three ways in which I can get exactly two defectives. You're okay with this? Now I am going to do something really interesting: I am going to write this as P(GDD) + P(DGD) + P(DDG).
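(A minimal Python sketch of this enumeration, assuming we simply label a good part "G" and a defective one "D":)

```python
from itertools import product

# Enumerate the 8-element sample space for three parts, each Good (G) or Defective (D).
sample_space = ["".join(outcome) for outcome in product("GD", repeat=3)]
print(sample_space)   # ['GGG', 'GGD', 'GDG', 'GDD', 'DGG', 'DGD', 'DDG', 'DDD']

# Group outcomes by the number of defectives: the counts come out 1, 3, 3, 1.
for k in range(4):
    matching = [s for s in sample_space if s.count("D") == k]
    print(k, "defectives:", matching)
```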
Let me explain what allows me to do this. What allows me to do it is the fact that if one of these outcomes happens, the other two cannot: they are mutually disjoint; both cannot happen together. If I draw a picture, they are separate regions, so if I want the chance of being in this one or that one or the third one, I can simply add up the three chances. Why can I do that? Because they are disjoint — there is nothing in common; if this one happens, that one does not. Now look at each piece. P(GDD) is the event that the first part is good, the second is defective and the third is defective. I am going to write it as P(G) × P(D) × P(D). I want to multiply, and the technical term I want to hear is "independent". Independent means that whether the first part is good or bad tells me nothing about whether the second is good or bad; they are independently good or bad. This is an assumption, but I think the problem allows me to make it: I make one part, good or bad; I make another part, good or bad; and these are independent of each other. If I am trying to sell a product to him and to him, whether one buys is independent of whether the other buys — that is an assumption. Maybe it is true if they are from two different neighbourhoods; maybe, if they are neighbours and I am going from one house to the other, it is not — if one buys, the other may be more inclined to buy. Independence is an assumption, and in this case I am making it. When events are independent, I can multiply the probabilities. For example, say each of them buys my product 10% of the time, independently: for every ten people I try to sell to, one buys. What is the chance they both buy? 10% of 10%, which is 1%. Multiplication is allowed when things are independent. Let me write one more step: P(GDD) is P(G and D and D) — the first is good and the second is bad and the third is bad — and because of independence I can write that "and" as a product, P(G) × P(D) × P(D). These laws will be written down cleanly later: if things are independent I can multiply when there is an "and"; if things are disjoint I can add when there is an "or". Common-sense rules, but they require a little bit of logic in the calculation. Taking this back to the top: the probability of two defectives equals P(G)P(D)P(D) + P(D)P(G)P(D) + P(D)P(D)P(G). What is P(G)? 0.9. So it is 0.9 × 0.1 × 0.1, three times over — very clever thinking, you are ahead of me — that is, 3 × 0.9 × 0.1 × 0.1. Even more generally, let me be even smarter than you are and write it as 3-choose-1 × 0.1² × 0.9¹.
This is a slightly sophisticated way of writing the same thing. Why did I write 3-choose-1? Why was it three? Because that is how many ways there could have been two defectives out of three. Correctly speaking I should write it as 3-choose-2, which is the same as 3-choose-1 — either I choose the one good part or I choose the two bad parts, whichever way you like. So a slightly better way is to say: 3-choose-2, the two defectives chosen out of three; times 0.1 squared, the chance of a defective, squared because there are two of them; times 0.9, the chance of the one good part. How many bads? Two. How many goods? One. In how many ways could I have chosen two bads out of three? Three. This is an example of a distribution called the binomial distribution, which we will see again, and this is a calculation you don't need to do by hand — your Python will do it for you, like all good things. The look and feel for many of these things is: see it once, understand it, and then let someone else do the calculation for you. So what is the answer? 3 × 0.1 × 0.1 × 0.9 — somebody tell me what that is. 0.027. About 2.7%, a little over 2%. That is the chance of seeing exactly two defects when the chance of a single defect is 10%. Now notice that this calculation is not really about defects at all; it is just a counting argument. I could have asked instead: I am trying to sell a product to three people, my chance of success with each is, say, 10% — what is the chance that I will sell to exactly two of them today? I am a salesperson; I sell children's books; I have gone to the schools, set up my stall, and collected the addresses of three parents who were kind enough to say, please come to my home, I am willing to listen to you. So on my phone I have three addresses to visit today. I know my chances of selling are not good — optimistically 10%, meaning if I try ten people only one will probably buy. Now I can ask: what will happen at the end of today? What is the chance that nobody buys, that one person buys, that two buy, that all three buy? The chance that exactly two of them buy is that same 2.7% — roughly a 2 to 3% chance that I end up with two buyers. (Here "success" for me is defined as buying, with probability 10%.) The calculation does not care whether the event is a defective part, a sale, a loss of value in a portfolio, the attrition of an employee, a hit on a website, or a click-through; all it needs is the probability of an event and the question "how many times will it happen?" And that probability can be a very small number.
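(A minimal sketch of this binomial calculation in Python, using the 3-parts / 10%-defect numbers from above; it assumes Python 3.8+ for `math.comb`:)

```python
from math import comb

p = 0.1   # chance a single part is defective
n = 3     # number of parts inspected

# P(exactly 2 defectives) = C(3,2) * p^2 * (1-p)^1
p_exactly_two = comb(n, 2) * p**2 * (1 - p) ** 1
print(p_exactly_two)          # 0.027

# "At least two defectives" adds the all-three-defective case as well.
p_at_least_two = p_exactly_two + comb(n, 3) * p**3
print(p_at_least_two)         # 0.028
```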
For those of you who are in digital marketing: what is a typical CTR, a typical click-through rate? Anyone from that industry? Website clicks, email clicks — what a click-through rate typically means is this: of the people for whom the ad appears on the page — an "impression," as they say — what percentage actually click on it? This is very important for digital marketing. All these websites come with ads; someone is paying for those ads, and they want to know the click-through rate: when the ad is seen, what percentage of people click on it? It is typically a very small number. Have you ever clicked on an ad? No — most normal people don't, but people still advertise. Say the click-through rate is 3%, meaning roughly three clicks for every hundred impressions. Now I can ask questions like: how many impressions should I have? That depends on how many clicks I expect. If I want, say, 100 people clicking on my ad, that gives me a rough idea of how many impressions I should be reaching. I can also ask: what is the chance that fewer than 100 people click in a month? With this machinery I can answer that, because all I need is an estimate of how many impressions there are in a month — that becomes my n — and then I can calculate. (Yes, a correction on the earlier problem: I wrote it as the probability of exactly two defectives; if I were solving "at least two," you are absolutely right, I should also have added the all-three-defective term.) What is an impression, someone asked? On a website, if the ad is present on the page during a session, that is an impression; if someone actually clicks on the ad in that session, that is a click. The click-through rate is: given that I showed you the impression, did you click? I can count the number of impressions — the number of times that page was served with that ad on it — and I can measure the click-through rate. And now I can ask, what is the chance that fewer than some number of people click? The calculation has the same shape as before: the number of impressions is my n, the click-through rate (say 0.03) plays the role of the defect rate, and 1 minus the click-through rate is the "no click" probability — 0.03 raised to the power 100 for the 100 people who clicked, (1 − 0.03) raised to the number of impressions minus 100 for those who did not, times the number of ways of choosing 100 clickers out of, say, a million impressions. I wouldn't do that by hand; I'd get someone — or something — to do it for me. Now I am going to slow down a little and take you through this conceptually.
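(A hedged sketch of the "fewer than 100 clicks" question. The impression count and CTR below are invented purely for illustration, and treating every impression as an independent trial is an assumption; `scipy` is assumed to be installed.)

```python
from scipy.stats import binom

ctr = 0.03           # click-through rate (illustrative)
impressions = 5000   # impressions expected this month (made-up number)

# Number of clicks modelled as Binomial(n = impressions, p = ctr),
# assuming each impression is an independent "trial".
p_fewer_than_100 = binom.cdf(99, impressions, ctr)   # P(clicks <= 99)
print(p_fewer_than_100)

# Expected number of clicks, for reference.
print(impressions * ctr)   # 150.0
```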
First of all, just to get these terms understood: what is a probability? A probability is a number between 0 and 1. It is often calculated as a ratio: the number of favourable outcomes divided by the total number of outcomes. That is not the only way of calculating a probability, and it very rarely works in practice, but it is often a conceptually easy way to understand it. It is a number between 0 and 1; zero means impossible, one means certain. A probability is a pure number — it has no units. (Then there is the philosophical question of whether the glass is half full or half empty, and the different types of probability, different ways of doing it.) Now, here is what I was describing as mutually exclusive events: two things that have nothing in common; they exclude each other. An example: if you are drawing one card from a deck, you can draw a king, or a queen, or neither of them, but you cannot draw both a king and a queen in a single draw. Just as a part is either defective or it is not defective. If you are a physicist you would think of Schrödinger's cat — physicists have a lot of fun with this. You know the story: Schrödinger imagined a cat in a closed box — very unfortunate for the cat — with a vial of poison that is a little unsteady, so it could fall, break open, fill the box with fumes and kill the cat. You know there is a vial of poison in the box and there is a cat. Is the cat dead or alive? You do not know until you open the box. When you open it and look, that is called the collapse of the wave function in quantum physics: the event has already happened, but until you observe it, you do not know whether the cat is alive or dead. If you observe an electron, the electron is here; if you are not observing it, you don't know where it is — it could be here or there or buzzing around the room. That is an important idea in physics, and a lot of probability theory has come from physical considerations. If things are mutually exclusive, you can add the probabilities, as we said — the king-or-queen case. What about two independent events? Two events are independent if, when one of them happens, it in no way influences the occurrence of the other. If he buys, it tells me nothing about the other fellow; if one part is defective, it says nothing about another part being defective. So let me ask you a question: go back to my previous picture of mutually disjoint events — are those two events independent? No. Why not? Because if I know the king has been drawn, I know something about whether the queen has been drawn — I know, in fact, that it has not. Those two are most certainly not independent, so please don't confuse these two concepts. You ask, am I talking about the next draw? No, the same draw: the events "king" and "queen" on a single card. For example, if I talk about one particular unit being good or defective, the picture is: it can either be G or it can be D; it cannot be both.
That is for one unit. But if I am talking about two different units, then one event can be G1 — the first is good — and another can be D2 — the second is defective. These two are no longer disjoint, because both can happen together: it is quite possible that the first is good and the second is defective. But they are independent: knowing the first is good tells me nothing about whether the second is defective. So if your picture of two events intersects, you know you cannot simply add the probabilities — or rather, you can add them, but you have to somehow take out the common part. Disjoint: two separate things, you add them up; it is an "or", this or that, not both. Independent: you can multiply, but you need to assume independence. We will break all these assumptions soon — this is the simplest possible way to do calculations, and I still have to get to a little bit of a nightmare called Bayes' theorem. Rules for computing probabilities: the cup-and-cap language here is set theory — some people find comfort in that language, others find it complicating. The cup is the union; the union of A and B is, in general, the collection of the two. If there is a common part, then P(A or B) = P(A) + P(B) − P(A and B): the chance that at least one of them happens is the chance the first happens, plus the chance the second happens, minus the chance that they both happen. If things are disjoint, that last term is zero, because both cannot happen; in general it stays. The "A and B" part is called the intersection: both happen simultaneously. Here is an example. What is the probability that a selected card is a king or a queen? This assumes you know what a card deck is: 52 cards, four suits of 13. How many kings? Four. How many queens? Four. So P(king) = 4/52 = 1/13, P(queen) = 1/13, and P(king or queen) = 1/13 + 1/13 = 2/13. The other way to do it: in how many ways can you get a king or a queen? Eight ways, so 8/52, which is the same number. What about the second one: what is the probability that the selected card is a king or a diamond? Again there are two ways of doing it. P(king or diamond) = P(king) + P(diamond) − P(king and diamond) = 4/52 + 13/52 − 1/52 = 16/52, because there is only one card that is both, the king of diamonds. The other way: in how many ways can you get a king or a diamond? Sixteen — the whole suit of thirteen diamonds plus the three remaining kings; or, if you prefer, thirteen diamonds plus four kings, minus the one card I have double-counted. Either way: you are drawing one card at random from the deck and asking, is it a king or a diamond?
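(A small sketch that just counts the deck, to confirm the two ways of getting the same answers:)

```python
# Build a 52-card deck and check the addition rule by brute-force counting.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = [(rank, suit) for suit in suits for rank in ranks]

king_or_queen = [c for c in deck if c[0] in ("K", "Q")]
print(len(king_or_queen) / len(deck))       # 8/52 ≈ 0.1538

kings = [c for c in deck if c[0] == "K"]
diamonds = [c for c in deck if c[1] == "diamonds"]
king_or_diamond = [c for c in deck if c[0] == "K" or c[1] == "diamonds"]

# P(K or diamond) = P(K) + P(diamond) - P(K and diamond) = 4/52 + 13/52 - 1/52 = 16/52
print(len(king_or_diamond) / 52, (len(kings) + len(diamonds) - 1) / 52)
```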
Now, let's say I am trying to sell him something. One event that I am interested in is: is he going to buy my product or not? Another interesting question is: is he an IT professional or not? Is there a relationship between these two things? Not really — but I may be interested in their joint probability, not for its own sake but because I want to calculate another event that is interesting to me: if I know that he is an IT professional, can I sell him something? In other words, suppose it is not independent; suppose whether he buys my product depends on whether he is an IT professional. Say I am trying to sell him a computer peripheral, and I assume that an IT professional may be more interested in this particular peripheral — he may still be interested otherwise, but more so if he is in IT. In that case I try to use one supposedly unrelated event as information about another; I am saying it is not actually unrelated, and these "ands" become interesting. How will my calculation go? If I want the probability that he will buy my product given that he is an IT professional, then my answer is: the probability that he both will buy the product and is an IT professional, divided by the probability that he is an IT professional. First find the chance that he is an IT professional; within that, find the chance that he also buys my product. P(A given B) = P(A and B) / P(B). This trick is used all the time in analytics, and we will use it shortly: I have received an email — is it spam or not? That means: tell me the words and I will tell you whether it is spam; so I need to relate the words to spam. Two apparently unrelated concepts, but if I know one of them, maybe I get some information about the other. Similarly with a card: one event may be about the colour and the other about the suit, but knowing one may tell me a little about the other. We will see examples later. Someone asked: what if there were an additional question, "a king, or a diamond, or both"? It cannot literally be two cards, since I am drawing one card; "both" here is just the king of diamonds, which is 1/52. What he is really asking is whether the "or" is an exclusive or — an XOR, in computer-science terms: when I say "or", am I excluding the case where both hold? It is a valid criticism: in everyday English the default is sometimes exclusive and sometimes not, and if you mean the exclusive version it needs to be mentioned explicitly. His mind defaults to the exclusive; yours to the inclusive. Set theory is not confused about this: A union B is just one set, and if there is a common part it is in it, and in it only once. So what I did was translate the question into set theory, and his point is that maybe I should have been a little more careful, because there is a difference between the set "A or B" and the set "A or B but not both".
The multiplication rule: when things are independent, I am allowed to multiply. An example: there are two subjects; the chance that you will do well in one of them is 70%, and the chance that you will do well in the other is 50%. The chance that you do well in both — A and B, the corresponding grades — is the multiplication of the two, which is 35%. Here comes the interesting part: what happens to the multiplication for events which are not independent? There are a couple of ways the formula is written. One way: P(A and B) = P(A) × P(B given A). The other way, the same thing rearranged: P(B given A) = P(A and B) / P(A). I want to know the chance that B will happen when I have already been told that A has happened; so first I find the chance that A happens, and within that I take the fraction where both A and B happen. The top line says: A and B happening means first A happens, then, given that A has happened, B happens. Now, if A and B are independent, what do I know? Then P(A and B) = P(A) × P(B), which means P(B given A) = P(B). Stare at that for a while: it says that if I tell you A has happened, I have not changed the chance of B — and that is almost by definition what independence is. Knowing that the first unit was defective told me nothing about the second one; knowing that the first customer bought my product told me nothing about whether the second one will. These statements are understood in different ways — sometimes one form is the easier one to see, sometimes the other — but the conditional form is the more general one. This example needs a little work. From a pack of cards, two cards are drawn in succession, one after the other, and after every draw the selected card is not replaced — like a normal deal. What is the probability that in both draws you get spades? Here is a structuring of the problem: A is "you get a spade in the first draw", B is "you get a spade in the second draw". P(A) = 13/52. I want P(A and B), and the way I do it is P(A) times P(B given A): I have drawn a spade, so what is the chance that I draw a spade again, given that one spade is already gone? There are now 51 cards left in the deck and 12 spades remaining, so 12/51. The answer is 13/52 × 12/51. And what would the answer have been if I had replaced the first card? It would have been 13/52 × 13/52.
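(A minimal sketch of that calculation, plus a quick simulation as a sanity check; the exact fractions are the ones just derived:)

```python
from fractions import Fraction
import random

# Exact calculation: P(spade, then spade) without replacement.
p_without = Fraction(13, 52) * Fraction(12, 51)
p_with = Fraction(13, 52) * Fraction(13, 52)   # with replacement, by independence
print(p_without, float(p_without))   # 1/17 ≈ 0.0588
print(p_with, float(p_with))         # 1/16 = 0.0625

# Simulation of the without-replacement case.
deck = ["S"] * 13 + ["O"] * 39        # 13 spades, 39 other cards
trials = 100_000
hits = sum(random.sample(deck, 2) == ["S", "S"] for _ in range(trials))
print(hits / trials)                  # should hover around 0.0588
```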
Because of independence: I put the card back, and when I put it back the second draw looks exactly like the first one — the information that I had a spade to begin with is lost, because that spade is back in the deck. That is a situation of independent experiments. The first case, without replacement, is one where the result of the second draw depends on the result of the first. Someone asked: but aren't we assuming we have already picked a spade on the first draw? Yes — because that is what is being asked for: the probability that both draws are spades. Here is a similar-ish question: what is the chance that I will get two adjacent seats on my flight if I don't pre-book? Why is it a similar kind of calculation? For you to pick two adjacent seats, there must be two adjacent empty seats. Can you calculate that probability? Yes, but think about what happens when somebody books. Say the probability of any single seat being booked is — making up a number — 50%. Now I tell you that one particular seat, 15A, has been booked. Given that, what is the chance that 15B is booked? Will it be 50%, more than 50%, or less? It will be more than 50% — at least if you are modelling reasonably well — because a whole bunch of people book seats in pairs. So if I know that 15A has been booked, the chance that 15B has been booked is more than 50%, which means my chance of coming in late and finding two adjacent empty seats goes down: the probability of two adjacent seats being booked is not the product of the individual probabilities, it is more than that, and so the probability of finding two empty adjacent seats is less, because I am looking for empty seats. Here is an example of doing this conditional calculation; marginal probability is a term I will explain as I do the example. A survey of 200 families was conducted; information regarding family income per year and whether the family buys a car is given in the following table. The 200 data points have been arranged in a cross-tabulation, like the cross-tab we did yesterday: one axis is "did they buy a car or not", the other is income, below 10 lakhs or 10 lakhs and above. Why might I be interested in this data? To figure out who buys my cars — whether cars can be sold, whether that has anything to do with income, and if it does, whether high income or low income is better. I don't know yet; so I have arranged my data in this particular two-way table.
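(A small pandas sketch of that table and the marginal, joint and conditional numbers worked through below. The cell counts are a reconstruction consistent with the figures quoted in the discussion — 80 buyers, 80 high-income families, 42 in both, 200 in all — so treat them as assumed:)

```python
import pandas as pd

# Reconstructed cross-tab (assumed counts, consistent with the quoted totals).
table = pd.DataFrame(
    {"buys_car": [42, 38], "no_car": [38, 82]},
    index=["income_10L_or_more", "income_below_10L"],
)
total = table.values.sum()                                   # 200

p_car = table["buys_car"].sum() / total                      # marginal: 80/200 = 0.40
p_car_and_rich = table.loc["income_10L_or_more", "buys_car"] / total   # joint: 42/200
p_car_given_rich = (
    table.loc["income_10L_or_more", "buys_car"] / table.loc["income_10L_or_more"].sum()
)                                                            # conditional: 42/80 = 0.525
print(p_car, p_car_and_rich, p_car_given_rich)
```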
Now I ask a few questions. What is the probability that a randomly selected family is a buyer of a car? You don't even need the full table for that: 80 out of 200. This, P(car), is called a marginal probability — "marginal" because, in the picture, it sits at the margin of the table, which is where the term originally came from. There are many things going on, but you are asking a question about only one margin — in this case the car margin — and you are not interested in the income. What is the probability that a randomly selected family is both a buyer of a car and in the 10-lakhs-and-above income group? 42 out of 200 — a joint probability. Next: a family chosen at random is found to belong to the 10-lakhs-and-above group; what is the probability that this family is a buyer of a car? That is P(car | income ≥ 10 lakhs) = 42/80. Interesting — why 42 over 80? You said 80 is the relevant sample size, and that logic is exactly right: it is the same as P(car and ≥ 10 lakhs) divided by P(≥ 10 lakhs), because the first is 42/200 and the second is 80/200, and the 200s cancel, giving 42/80 again. The denominator says "out of how many am I selecting", and the numerator says "how many are both". This is called a conditional probability. By the way, what is this number? It is above 50%. And what was the unconditional chance of buying a car? About 40%. That means if I do not know your income, I guess your chance of buying a car is 40%; if I know your income is above 10 lakhs, that chance goes up to over 50%. Therefore it is worth my while to find out whether your income is more than 10 lakhs, because — at least by this sample — it influences, in a positive direction, whether you will buy my product. So, in words: that is a marginal probability, and this is a conditional. You might have a little trouble with the words, but conceptually this is not very hard, and that is the calculation we just did. Bayes, when he originally wrote this up, was not understood; only after he died did somebody find it in his papers, take a long time over it, and explain it to others. Let me explain what it tries to do. To the question about the board: that one is a joint probability, that one a marginal, and that one a conditional. A conditional is a joint divided by a marginal; a joint is a marginal multiplied by a conditional. Bayes' theorem's idea is the following: it switches which event is being conditioned on — it switches between P(A given B) and P(B given A). Now, when would you need to do this? Here is an example: you want to find out whether the email you are receiving is spam. Do you
use Gmail Gmail often identifies things as spam and moves them somewhere how does it do that partly it it looks at the Mals and headers and it uses a very very complicated algorithm but let's suppose you are building an application of this sort and you want to do it based just on the content of the email so you want a following kind of program you want a program that says that if I know the words of the email I can tell you whether it is Spam or not which means I want the following thing I want the probability of spam given words if I tell you the words can you tell me whether this is Spam or not this is what I want to do correct but how will I solve the problem I'll solve the problem by finding the opposite conditional what is the opposite conditional the opposite conditional is what is the probability of words given spam now why do am I interested in this because this one is easier for me to do in the following sense what I can do is let's say in my research lab I can collect lots and lots of documents and I can identify them as spam or not spam in other words I can manually go in and I can tag them so let's suppose that I've looked at a thousand of these and I've targeted let's say say say 800 of them as spam and 200 of them as normal spam or maybe I go after things that are spam and find 5,000 of them and go after things that I know are not spam and find 5,000 of them now I can solve the opposite problem which means that if I know that it is Spam I know the distribution of words and if I know that it is not spam I know the distribution of words I can do this inside my analytics environment so now I know that if it is Spam this is what the distribution of words looks like if it is not spam this is what the distribution of words look like using that I will now twist the problem and say now if you give me the words I will tell you whether it is Spam or not now how do I do that I do that doing this now this is a very easy formula to understand why because this formula essentially says this that why is why is this equality true this equality is true because let me rewrite it slightly let me say what is the probability of let's say spam and words what is the chance of spam and words in other words there is an and there now I'm going to write this Like A and B but here's the interesting thing when I wrote I can write a and b in two ways I can write it as B multiplied by a given B but I can also write it as e multiplied by B given I have a choice as to which is first and which is second so therefore I can write this in two ways I can write this as spam given words multiplied by words but I can also write it as words given spam multiplied by spam do you understand the trick but what does that mean that means these two things are equal no if these two things are equal that expression now follows now I know that probability of spam given words is equal to the probability of words given spam multiplied by probability of spam divided by probability of words so to execute on this what do I need words given spam which I told you what to do probability of spam which is an estimate of the proportion of emails that are spam or not spam and probability of words that has no conditioning in it this is what's usually called a lexicon or a dictionary so if you give me a dictionary of the language I can give you this denominator if you give me shall we say an IT estimate or a sociological estimate as to the proportion of words or proportion of emails that end up being spam I can give you the probability of 
spam. And if you give me emails tagged as spam, I can find their dictionary distribution; if you give me emails tagged as not spam, I can find that one too. So I know the right-hand side, therefore I know the left-hand side, and now if you give me the words I can tell you the probability that the email is spam. This is thought of in one of two ways. One is the way I just described: flipping the two conditional probabilities. The other way it is sometimes described is that P(spam | words) is an update of plain P(spam): the P(spam) part is, in Bayesian language, called a prior, and P(spam | words) is called a posterior. If I know the words, I have a better idea of whether it is spam. If I know his income, or that he is an IT professional, I have a better idea of whether he will buy the product; if I know the income is above 10 lakhs, I have a better idea of whether he will buy a car; if I know the words, I have a better idea of whether it is spam — and to do that, I flip the conditional. Because of applications like this, Bayes' theorem has become very central to machine learning. Think of the autonomous car: what is its decision problem? Something is crossing the road — should I stop? In other words, given "cow", should I stop? To solve "stop given cow", I flip it by Bayes' theorem to "cow given stop": these are the situations in which a car stopped, and these are the situations in which it did not; in the stopped situations, look at what the car saw, and in the not-stopped situations, look at what it saw — just like spam and not spam — and now flip it and say, given what I am seeing, should I stop or not. It is a neat little piece of logic. Will this be a foundation for supervised learning? It is one way, one style, of doing supervised learning. There are supervised learning algorithms that are explicitly this — Bayesian belief networks, for example — and there are algorithms that are this without being explicitly Bayesian, such as linear discriminant analysis, where you find the posterior distribution of being in a class given the data. There are at least two such algorithms you will study later — discriminant analysis is one, and I think Bayesian belief networks, though I don't know exactly where they sit in the curriculum. In general you will find this a very useful trick; I will come back and show you the theory behind it if you are interested, but this is really all that needs to be remembered for its application. Someone asks: for the autonomous car, why not the simple thing of saying, "if you see something, stop"? For a computer following that logic, it has to be told what to do both when it sees something and when it does not: if I see something on the road, stop; if I don't see anything, keep going. Now what will that do to the car? It will be stopping all the time — it will never get past a signal, because there is always something in front of it. (As an aside, before going back to that rule: the sketch below shows what this Bayes "flip" looks like in code for the spam example.)
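(A toy sketch of the flip. The 30% spam prior and the word frequencies are invented for illustration, not numbers from the lecture:)

```python
# Toy illustration of the Bayes flip for spam, with made-up numbers.
# P(spam | word) = P(word | spam) * P(spam) / P(word)

p_spam = 0.30                     # prior: assumed fraction of mail that is spam
p_word_given_spam = 0.20          # "lottery" appears in 20% of spam (invented)
p_word_given_not_spam = 0.01      # ...and in 1% of normal mail (invented)

# P(word) by total probability over the two disjoint cases.
p_word = p_word_given_spam * p_spam + p_word_given_not_spam * (1 - p_spam)

p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))   # the 0.30 prior updates to roughly 0.90
```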
so this is a translation of a rule the difficulty will be the following and you can try doing it the difficulty will be that what precisely will the car see and it will follow that logic explicitly so if it sees a car that is coming quite far ahead it will stop you could say I'm going to draw threshold if it is further away from this in front the car in front then don't stop because you're expected to see a car in front and so if you're seeing a car in front please don't stop because something is in front but you now have to encode that and so that way of doing things is entirely feasible so for example there's a there's a whole branch of learning called case-based reasoning case-based reasoning and case-based reasoning essentially lies on that give me all the cases and give me the reasonings for all those cases but case-based reasoning sometimes becomes difficult if it becomes very very difficult to enumerate all the possible cases for example example in the spam problem I have to solve this problem for every conceivable word that the email might see because email is going to decide based on the words and if you do if you do do a full case-based approach if the email sees a word that it has not seen before the email will say what do you want me to do so typically when ban methods are used when it sees that word it'll do precisely nothing in other words it'll say if if certain words are there I will update my decision if those words are not there I won't it's irrelevant to it there's no evidence that it has so the other is a probabilistic way of thinking that base theorem or any of these related and this is probabilistic learning that when you do some when you when a when an autonomous system or any machine Learning System decides then what does it decide on so you'll often find in data sets the following situation I I should have had an example I pull it up all the X's are the same but the Y's are different all the X's are the Same by the wise are different two people have exactly the same characteristics but one has bought the product and one has not bought the product two people applying for a loan have given you the same information they come from the same Village they have the same income they have the same you know family circumstances they grow the same crops one farmer has replayed the loan the other farmer has not H car being tested out there's someone crossing the road identical scene one test driver decide to stop the other test driver decided not to stop same X different Y what should the computer do now think of this from a computer's perspective what is the computer's problem the computer's problem is if you give me an X I will give you a y now what do you want the computer to do in this particular situation because in your real data the same X is leading to different Ys what's an ideal solution here what would you do how would you think through this problem one possibility is to give it a probability that's one approach to the problem what that means is this that in your data set let's say half your people who have seen this X have given a y of zero and half your people who have seen this data set have given it a y of one the computer literally tosses a coin and decides which one to predict that's called a randomized response and sometimes it's done but that could be a disaster as well I'm sorry that could become a disaster as well it could but what would what give me another alternative safest alternative we could go for right which is safe how does a computer know that 
what condition is giving the same X its input is identical but action what are the consequence of this action stop anything happen see that consequence has already been worked out by in nature in nature if that consequence was there that would have already have been baked in so if there is a consequence to it and if there was a good consequence and the test driver would have stopped in all cases the cas driver would have stopped no the action of stopping of stopping going yes take that and discretion which is more fatal going is always fatal stopping is fatal that that that decision would have been made by the test driver as well would it not have been the raw data would also have shown that bias or are you teaching a computer to have a sense of value that the real human did not have two doctors look at the identical medical report one doctor says cancer the other doctor says no cancer you are building an AI system for medicine what should it say go for another test go for another test okay always you should see that you should see a very nice video uh of Watson you know what Watson is you should see the Watson videos if you haven't seen it and you want to be a AIML professional or an ml professional then you should see the Watson videos wonderful videos and you can see you can see the the decisions at at the bottom you can see the um you can see how Watson decides you know what whatson this is the Jeopardy videos so whatson playing Jeopardy and so some Jeopardy is a quiz question in which basically the answer is given and you have to sort of say the question or something of that sort so when you see the video you'll see at the bottom you'll see a bar and that bar is basically a set of probability statements as to How likely is this the answer etc etc and based on those probabilities Watson gives an answer and sometimes Watson does not give an answer because it is unsure of even its best answer so you should so when you watch it watch the watch the bottom of the screen the data that Watson is answering based on this particular we have doing but in general this problem is a hard problem in machine learning because in the real world you will have this issue if this was not the case if it was the case that that identical values of X give identical values of Y the machine learning problem would be a mathematical function fitting problem it would be a problem of simply saying if this is the x match map it to the Y just find the rule that Maps it to the Y it's not and the reason is not is because identical inputs do not lead to identical outputs and resolution of that has many many um procedures and possibilities for doing that one of them is a probabilistic way of doing things to answer the following question I will not tell you whether Y is zero or one I will tell you what is the probability that Y is one I will not tell you whether you have cancer or not I will tell you what is the probability that you have cancer I will not tell you what the probability of hitting something will be if I continue not answer it's not a definite answer I'm asking for a zero or a one and I'm not giving you a zero or a one I'm giving you a probability so at every time the car when it is driving is calculating a number given the scene what is the probability that I will hit something continuously based on what it is seeing now you decide based on that probability whether you should stop or not based on you know your risks Etc the the learning system does not do that the learning system does not say whether you 
should be diagnosed with cancer it simply says what is the chance that you have cancer now you decide based on your logic as whether that's enough for me to State whether you have cancer or not the learning system will not say what is the probability that you have defaulted on that that it will not say whether you will default on your loan or not it will say what is the probability that you will default on the loan now you decide how much risk you will bear that's U one solution to the problem it doesn't even try to predict the right answer it simply gives you a distribution on the possible answers now you decide as I said if you see the Jeopardy videos You'll see this in action you'll see the the data on which it does the category is 19 Century novelists what Watson wants to do then is preserve the lead not take a big risk especially with Final Jeopardy because just like for humans Final Jeopardy is hard for Watson now we come to Watson who is brand stoker and the wager hello see the full video 973 total of [Applause] 77,4 I would have thought that technology like this was years away but it's here now I have the bruised phenomenal why someone on a terror Watson look at that what it's doing is it's given probabilities on the answers H these don't add up to one these don't add up to one but what is the chance that list is the right answer what is the chance shopan is etc etc this number if it is below this threshold whatson will say pass it won't answer and it's there in the video a few number of times it doesn't know but it says that if I am more sure than a certain threshold and if I'm uniquely sure it will also not answer if multiple of these cross here which means both of them are probably right and I don't know which is right they both sound correct to me again I might stop so for each question what will check the what is the probability of the he'll do that every question based on hearing it so if probability is done by Python language or any of machine language thing then what is that we are here for meaning what is our role in deciding that deep philosophical questions why are we why are we existing at all H why why are we here at all so so yeah yeah yeah so um so one one reason you're there is to provide test data to the system or what's called Ground truth in words you need to give it spam and you need to tell it once that this is Spam just like he's saying I need to tell it to stop I need to say that this is a dangerous thing so so human needs to initiate that but yes people are asking that question a lot that is that human initiation necessary now the trouble with that is that the the the value system that is necessary to decide that this is a good thing or a bad thing is something that computers do not have and it's extremely difficult to encode that it's a lot easier to encode in a computer in some way this is good or this is one decision this is one decision and also if you want to encode a cost to it and if I do this this is what cost reinforcement learning does this if you take a wrong decision there's a penalty function that hurts the computer in terms of an objective and the computer knows that if I want to reduce shall I say that pain factor I should avoid doing this like babies learn that's called reinforcement learning I don't know whether you'll do much reinforcement learning in this course or not but you will do that so you you so you so you so you build algorithms of that kind there will come a time where that will not be necessary for us it is not necessary but 
even we even humans have to come with our genetically coded information we also cannot begin from scratch we already come coded with this there's a school of thought that says that that's all that there is that this information is passing along in other words um a hen is an egg's way of making another egg so an egg wants to make another egg right now how does an egg make another egg through a hen it makes a hen and that hen makes the egg another egg right so which means that there is a basic information content the genus trying to say I need to survive so there's a sequence of acds and G's that has a survival Instinct and the only way it can do that is to get another organism to create a copy of it viruses do that brilliantly right the big war going on on planet Earth for a few billion years and still continuing it's a deadly War it's called no winners and is going to continue is a war between bacteria and viruses nobody wins right these two are at each other for donkey ears because they have two very different ways of dealing with information right a virus is retrovirus type thing a virus is basically just DNA with the protein around it the way it reproduces is like certain Birds we learn in mythology that information gets into another organism typically a bacteria so a virus forces a bacteria to make another virus and obviously the bacteria doesn't like it and so the bacteria over billions of years have figured out how to prevent doing this and viruses have consequently adapted and have repeatedly kept kept doing this and so information transference has a long long history in the real world in the in the in the Computing world the challenge of saying that how do I input the information how do I get the machine to learn is something that we are rapidly evolving in the reason this this current generation is so excited about it and I I'm not that old but even in my career and I've been doing this for I don't know about 25 years or so roughly speaking I've seen three or four waves of it you know goes up it goes down it goes up it goes down and different the current version of it essentially is based on certain deep learning algorithms that have come and have made it a lot easier to feedback information so you know recurrent neural networks con all these neural networks now have the ability to feed context and feed information a lot more efficiently which means this idea that a computer can pick up context and use it to get better algorithms is there uh and that scares a few people mightily because what it means is that as a car keeps driving very well it's knowing that it's driving very well and it'll keep doing certain things so so the school of thought that says that therefore maybe the car should have a few accidents just like maybe there should be a few nuclear explosions let's suppose that you go and get an HIV test done HIV tests are routinely done let say you have surgery or anything like that Etc HIV tests are done so let's suppose that for whatever be the reason an HIV test gets done and the test turns out to be positive I hope it never happens to you but let's suppose the test turns out to be positive the question is how scared should you be very that's a reasonable answer but let's work it out so to do that trying to calculate the probability of HIV given positive test this is what I'm interested in calculating because my life may depend on it there are many ways to do this here's a suggested root now what I'm going to do is I'm going to write this version of the formula down H 
without deriving it in full; you will see what it means as we go. I want P(HIV | positive), and I write it as P(HIV and positive) divided by P(positive) — a conditional is a joint divided by a marginal. Now I write the numerator as P(positive | HIV) × P(HIV). Why twist it this way? Because these are numbers that are much more available to me. What is P(positive | HIV)? It means: if I have HIV, what is the chance that the test comes back positive? That is called the sensitivity of the test, and a test maker has to report it. P(HIV) is the proportion of people who have HIV — the incidence rate; it has nothing to do with me personally, it is like my dictionary, just the fraction of people who have the disease. So I know one number from epidemiology and one from my test manufacturer. For the denominator, P(positive), I do something interesting: there are two disjoint ways in which someone can end up positive — HIV and positive, or not-HIV and positive. Either you have the disease and test positive, or you do not have the disease and test positive. So P(positive) = P(positive | HIV) × P(HIV) + P(positive | not HIV) × P(not HIV). That is just the formula written out in this example; let's apply it and see what happens. What numbers do I need? P(HIV), the incidence rate: someone suggests 0.1% — that is actually very low, the rate is higher — let's say 1%. So 1% of people have HIV and 99% do not, which means P(not HIV) is 99%. I also need P(positive | HIV), a measure of how good the test is: if you have HIV, what is the chance it will report that you have HIV? Equivalently, out of 100 people who have HIV, for how many will it find it? This is the sensitivity number; a very good test may be at 99% or 99.9%, a not-very-good or cheap test may be at 90%. Let's assume this test is 95% — pick your own number. The other number is sometimes called specificity: going the other way, P(negative | not HIV), which means that if you do not have HIV, what is the chance it says you do not? Again, say 95%. In other words, I have a fairly simple test that is 95% accurate whatever your disease state is. Now let me re-ask the question: I have given you a test that is 95% accurate, and I am telling you that your test is positive. What is the chance that you are HIV positive? "95%" is a reasonable guess — let's work it out. If P(negative | not HIV) is 95%, then P(positive | not HIV) is 5%. Now I have everything I need: on top, P(positive | HIV) = 0.95 times P(HIV) = 0.01; downstairs, 0.95 × 0.01 plus 0.05 × 0.99.
please work this out on a calculator. What does "collectively exhaustive" mean? It means that together they cover everything — in our particular case, you either have HIV or you do not have HIV; there are no other possibilities. Why mutually exclusive? Because either you have HIV or you do not. Exhaustive means there is nothing else left out. And why are we not calculating that last 5% separately? Because it is just one minus the other: if the probability of a negative given not-HIV is 95%, then the probability of a positive given not-HIV is 5%. So what is this number? I have high variance in my answers — anyone else? 0.16. There's about a 16% chance you have HIV if you test positive. Why is it that a fairly accurate test, a 95% accurate test — my wife and I have a biotech company, we're trying to release a product on molecular diagnostics for infectious diseases, and if we got 95% we'd be thrilled, our investors would be thrilled, we'd be in business; this is not easy to attain, particularly cheaply, and we're trying to keep the cost of our test fairly low for things like UTIs — so where is the problem? Sample size? False positives? Yes, there is a problem of false positives here. So another way of seeing exactly this same calculation, or pretty much the same calculation, is the following. I'm going to rub out Bayes' theorem here — which is exactly this; I leave it to you to link this back to Bayes — because sometimes it's easier to understand it through an example, as a picture. Let's assume I begin with a population of, say, 100,000 people who are being tested. Of these 100,000 people, some have the disease and some do not; the total is my sample space, so to speak. Now, how many of them have HIV? 1%, so a thousand of them are here — these are HIV. And how many are not HIV? 99,000, correct. Now, of these 1,000, how many test positive? 950. And how many test negative? 50. Of the 99,000, how many test positive and how many test negative? These people should test negative, so what is 5% of 99,000? 4,950. So 5% go wrong, which means 4,950 are here — this is 5% of 99,000 — and the remaining, about 94,050, test negative; that number won't matter much anyway. Are you okay with the situation so far? Now let's look at all the people who have tested positive. Where are they? These 950 have tested positive and these 4,950 have tested positive. So how many people have tested positive in all? 950 plus 4,950. And of them, how many have the disease? 950. Calculate this — it is exactly the same calculation you did before; arithmetically it is identical. The 4,950 is the culprit here. What does that mean? It means there were a lot of people who got a false positive. And why were there so many false positives? Because there were a lot of people who did not have the disease, and for that large a number, even a small false-positive rate will swamp the true positives of the people who had the disease. Which means most of the people who are testing positive are actually healthy people who have had the misfortune of the test
going wrong on them but because there were so many of them it affected the probability so if it is an epidemic for example which is not very rare then I think test is no but what is it for you so so what is the moral of the story now so therefore what will happen let's say therefore let's say you go and let's I'm pretty sure this hasn't happened but if somebody gets a positive HIV test what will the doctor say check one more get a retest done why because let's suppose this is my test let's suppose this is my test and let's suppose now I've changed the test to saying that I will say you have HIV only if you test positive twice in a row you test it twice and both times you will end up you show up positive now what happens to these numbers what is now the positive given HIV and what is now negative given HIV first of all what happens to what happens to this what happens to the What is the chance of a false positive now so the chance of a false positive which was previously 5% now become yes now becomes you must it must go wrong twice so 05 into 05 and then one minus that 5% of 5% 5% of 5% is what it's a quarter of a percent or something like that or even less maybe that becomes now a very large number so this number becomes much smaller the chance of a false positive becomes much lower and because the chance of a false positive becomes much lower this number becomes a lot lower and now the number begins to approximate what you think it would but for this to work I must be able to multiply the two probabilities that both tests went wrong that multiplication comes from Independence which means the second test that you should do should be from a different laboratory which would have its own biases it'll have its own problems but they'll be independent of the first guy and you can multiply this out and this problem will go away if it doesn't multiply out this if the same result happens in other words if the same thing shows up then this5 will not go down so now this difficulty with based rule this also again for example this shows up in many things even in even in business so if I if I'm trying to detect let's say fraud I'm trying to detect fraud and I and have a fraud detection I'll go and I now say if I see this signal what is the chance that it is fraud by base theorem that will be low the reason that will be low is because most transactions are not fraudulent transactions and so even if there is a small possibility of detecting a non-fr fraud transaction as a fraud transaction I have messed up my algorithm does it mean if we run it twice we give the accur you have to do the test independently running the same program twice will not help you huh so in the biological example you need to run it again for in a different test in a in a in a machine learning situation what does that mean it means you have to give it fresh data different data from the same situation shall we say which is a little harder but that's fine so this is base theorem sir that last World and spam problem how does it kind of map to this how does it map to this okay well it looks it looks completely different does it not ah okay we'll do it this way what is the proportion of spam and not spam let's say this is Spam and not spam H what is the prop so I need to know this this is the proportion of things that are spam and not spam independent of what is in the text what's the proportion of emails at our spam what do you think 5% huh 30% of spam okay you guys know your inbox it also points to a healthy social life right so now 
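For anyone who wants to check the arithmetic on both versions — the single test and the retest — here is a minimal Python sketch using the numbers assumed on the board (1% prevalence, 95% sensitivity, 95% specificity); the independence of the two labs is the assumption that lets the error probabilities be multiplied.

prior = 0.01                 # P(HIV): the incidence rate assumed in class
sens = 0.95                  # P(positive | HIV)
fp = 0.05                    # P(positive | not HIV) = 1 - specificity

# single test: Bayes' rule
post_one = sens * prior / (sens * prior + fp * (1 - prior))
print(post_one)              # about 0.16

# two independent positive tests: both probabilities get multiplied
sens_two = sens ** 2         # P(both positive | HIV), assuming independence
fp_two = fp ** 2             # P(both positive | not HIV) = 0.0025
post_two = sens_two * prior / (sens_two * prior + fp_two * (1 - prior))
print(post_two)              # about 0.78 -- the false positives no longer swamp the true ones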
what now let's suppose that we fix the problem and I'm going to solve the problem not for not for words but for one word so what's a Spam like word for example by congratulation congratulations right okay okay P cool congratulation H so now so now I want probability of congratulations given spam what is probability of congratulation given spam if congratulation is there then if it is spam what is the chance of the word congratulation will be there 100% that's little to huh 75% let's say let's say this let's say this is 75% right then what else do I need because what is the problem I'm trying to solve huh so I'm trying to solve the following problem I'm trying to find probability of spam given congratulation this is what I want to find I want to say that if I see the word congratulation what is the chance that this email is Spam that is the problem I want to solve now to solve that I'm solving the opposite problem I'm saying what is spam what is not spam what is congratulation given spam and I need and I need one more probability congratulations congratulations not spam what is this 25% not necessarily one minus this this is a separate calculation but it could be 25% if you want to let's make it 35 h huh which means if it is a genuine email if it's not spam there's 35% chance of the word congratulations will be there now I don't need to make this up as I said in a laboratory I can look at all Spam things and I can count how many times congratulations shows up in it so now let's suppose let's suppose this is here let's suppose I know this now can you do the calculation you can do it using base rule you can do it using the drag diagram if you want to just try what is the answer so what we are saying that congratulations not 35% that is know these four numbers are known to you well actually you know these are the same number so three numbers are known to you if it is Spam then the chance of congratulation is 75% if it is not spam the chance of congratulation is 35% now I want to find what is the probability of spam given that there is congratulation now how do I how do I of four three are known right all four are known this is my shall we say the information that's available to me some of you can try using the formula some of you can try using the picture so if I do it using the formula what will it look like spam given congratulations is equal to probability of congrats I'm going toize this spam multiplied by spam divided by probability of congrats given spam multiplied by probability of spam plus probability of congrats given not spam multip IED by probability of not spam this and this is what congrats given spam is 75 into probability of spam is3 divided by 75 into 3 plus congrats given not spam 35 multiplied by not spam is 7 point no is it 47 or you might want to draw a picture like this like we had drawn before begin with another typical number let's say 100,000 you'll do it as spam not spam on the spam side this is 100,000 emails on the spam side how many will there be 30,000 on this side 70,000 on this side how many will have congratulations this is on the stamp side 75% of them will have congratulations so 75% off 30,000 that's what 22,000 500 or something like that and the remaining will not have congratulations how many here will have congratulations um for not spam 35% of 70,000 what is 35% of 70,000 huh 24,500 and so what is my answer 22,500 divided by 22,500 + 24,500 which is presumably my 47% you can do this as well so without opening the email without opening the email and 
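The same Bayes arithmetic for the spam example, with the numbers just assumed in class (30% of mail is spam, 75% of spam contains "congratulations", 35% of genuine mail does):

p_spam = 0.30
p_word_given_spam = 0.75     # P("congratulations" | spam)
p_word_given_ham = 0.35      # P("congratulations" | not spam)

posterior = p_word_given_spam * p_spam / (
    p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam))
print(posterior)             # about 0.48, the "roughly 47%" worked out on the board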
seeing the email the chance that it is Spam is 30% but if the word congratulation is there in the email the chance that it is Spam has gone up to 47% now you would not do this just for congratulations you do this for a whole bunch of words which means that instead of congratulations congratulation and something and something Etc which means that instead of congratulations here it'll be congratulations and something and something and something here which means for these probabilities you will need to say congratulation and something else let's say another word what's another word offer so you would now say what is the probability of spam given congratulation and offer now you would need congratulation and offer but if you assume Independence there this can be congratulations given spam multiplied by offer given spam so word by word the probability can be calculated and it can be put in this approach you'll see studying text mining one of your courses it's called the bag of words approach the words that put into a bag irrespective of their order and things like that yeah for one yes yes each yes so so each of these the a will then be a new event and that new event would be different words and so that those different words will be thought of as the product of each word so the chance that that the words congratulations and offer are there in the email is the chance that congratulation is there in the email multiplied by the chance that offer is there in the email that's an assumption an assumption that is built into the bag of words model if you don't like it it what you have to do is you have to give me the joint probability of offer and words and those models are also there they're called Byram models spam and not no spam and not spam are where spam and not spam and not spam are in the bi right but in the formula for each case yes we are adding two times these two spam and not spam yes yes so you're relating it relating it to two here there were there are key possibilities number of possibility will be combination of no the number of possibilities in this case no and there could be other possibilities here here the things I'm deciding between are just two spam or not spam in this formula the number of things that I'm deciding between are many for example in your Gmail how many categories are there there's social there's promotions and primary so instead of it being spam I can Define it as primary social and promotions so now I need to find what is the probability of primary given congratulation promotion given congratulation and social given congratulation there are three of these now that can that now you can apply here there's B1 B2 and B3 so you we've already seen an example of a distribution I'll simply tell you what it is the binomial distribution what is the binomial distribution the binomial distribution is a distribution of Simply counting the number of things the number of defective products H um the number of customers that receive service etc etc exactly like the applications that we were talking about this is the statement we have already seen the probability of getting X successes out of n trials is p of X is equal to n choose x p ^ x 1 - p ^ x where the individual p is the probability of getting success in one trial you remember my formula of Point 1 to the^ 2 so it's that formula what does this formula say this formula says that if p is the probability of success of a single trial then what is the probability of getting X successes out of n trials n trials p is the success 
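Going back to the spam example for a second, the multi-word version described above would look something like this sketch; the numbers for "offer" are made up purely for illustration, and multiplying the per-word probabilities within each class is exactly the bag-of-words independence assumption.

p_spam = 0.30

# per-word likelihoods (the "offer" numbers are hypothetical)
likelihood_spam = {"congratulations": 0.75, "offer": 0.60}
likelihood_ham  = {"congratulations": 0.35, "offer": 0.20}

words = ["congratulations", "offer"]

p_words_given_spam = 1.0
p_words_given_ham = 1.0
for w in words:
    p_words_given_spam *= likelihood_spam[w]   # naive Bayes: multiply per-word probabilities
    p_words_given_ham  *= likelihood_ham[w]

posterior = p_words_given_spam * p_spam / (
    p_words_given_spam * p_spam + p_words_given_ham * (1 - p_spam))
print(posterior)   # probability the email is spam given both words appear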
probability of each trial what is the probability of X successes n choose x p ^ x 1 - p ^ n minus X how do I think this through what is a trial a trial is the total number of attempts that I'm making the total number of products that I'm making I'm making three products the probability of each product being defective is 0.1 what is the chance that I will get two defects 3 ches 2.1 to the^ 2.9 to the^ 1 P successes P into P into p n minus X failures what is not a success is a failure whose probability is 1 minus p and there are X ways of choosing that original n sir in this case it's a like these trials are like with replacement these trials are not just with replacement yes they're with replacement it's not like it's it's a it's a population so to speak in other words an actual experiment is not being done um it's imagined that someone is doing this experiment repeatedly so yes if you want to think of it as replacement as replacement it's a model for example here A Bank issues card statements to customers under the scheme of MasterCard based on past data the bank has found that 60% of all accounts pay on time following the bill if a sample of seven accounts is selected at random from the current database construct the bin probability distribution of accounts paying on time what is the question being asked the question being asked is this that I am looking at seven accounts and I'm trying to understand how many of those accounts are paying up how many of those accounts are paying up now what values can it take what what are the possible values that that that that my X can take 0 1 2 3 4 5 6 and seven six means none pay on time I'm sorry zero means non pay on time one means one pays on time seven means all pay on time the chance that every one of them individually pay on time is 60% and I'm going to make the assumption that these people aren't talking to each other so they're behaving independently the 60% chance applies to everyone separately which means that if one person has paid that has had no impact on whether another person has paid or not correct let's do one of these calculations let's say what is the probability that um let's say um how many people two people pay on time so 2 pay on time what is the answer to this you can use this formula directly but two people pay on time means 6 into 2 6 into 6 not into 2 to the^ 24 to the^ 5 these are the five people who have not paid on time these are the two people who have paid on time so this 6 into 6 into this 6 into 6 into point4 into point4 into point4 into point4 into point4 the seven people now that is one Arrangement how many such arrangements are possible seven choose two arrangements are possible those two could be the first two they could be the next two they could be the first and the last there are seven choose two of those for each of them is a pattern paid paid not paid not paid paid and every time you see a paid 6 every time you see a not do not paid point4 the 6 you're going to see twice and the point four you're going to see five times therefore this formula so can you expand choose one 7 7 choose two is a Formula which simply says how many ways can I pick two things out of seven the formula for it is 7 factorial divided 2 factorial into 5 factorial H which is 7 in this case 7 into 6 divided by 2 which is I think 21 21 the 21 ways to pick two team two out of seven because I asked for two I can do it and the problem asked for all combinations I've just solved it for for one particular answer I need to do it for 0 1 2 3 4 all 
of them if I add them what answer will I get I'll get the answer one because something must happen isn't the number of Trials eight no the number of Trials is seven the number of outcomes is eight if I toss one coin I can see two things so there are seven outcomes there are seven people so 0 1 2 3 4 5 6 7 that's eight the eight possible outcomes all right so now there is a file here um it's called I think um binomial distribution example You' You' import a few things for plotting and for the stat functions then I'm going to set up the problem how am I going to set up the problem in this particular case just by specifying an n and specifying a p what is the n in this case n is the total number of Trials why is it seven for me because there are seven customers correct p is 6 where do I get this 6 here right this 60% what am I doing here what I'm doing here is I'm creating the sample space I'm creating the set of numbers for which I want to calculate the probability so this one here the range function 0 to 8 so when I do this it creates an array of eight numbers 0 to seven do that zero really has a value we don't need of course we do there is a there is a reasonable probability that nobody pays on time same place wherever you got the other one from can you repeat bu for the basis of the formula how it was how it was formed this is X people have paid so this is p so think of it as P into P into p x times and think of 1 - P into 1 - p n - x * because X people have paid and what allows me to multiply the probabilities because because if there's 60% chance you pay there's also 60% chance you pay and so when I when I figure out the chance that both of you pay is going to be 66 into 6 and if he doesn't pay and I want to multiply those 6 into 6 into point4 now how many sixes are there how many successes I want how many point fours are there how many non successes are there and how many such possibilities are there how many ways can I get two successes that is what I'm calling 7 choose two which is 21 why is it 21 you are going to pick two people out of seven how many ways can you pick them the first person you can pick is 7 Ways the first person who pays on time the second you can pick in six ways 7 into six but if I pick U First and U second that's the same as picking U First and U second so I've double counted so buy two so 7 into 6 by 2 which is my 21 so uh one quick question when in Practical world I look out for binom distri this application this kind of application or another kind of application for example I can change this to saying uh in sales I am I am selling my or I I am approaching seven leads the chance of a conversion for a lead is 60% what is my sales distribution okay so I'll use that information for example to to figure out um let's say that um how much budget should I have for the sales team for example I could say um you know what I'm going to approach seven leads and I'm going to get sales however how are those sales going to be made those sales are going to the sales are going to be made on the phone but to confirm the sale I need to be able to send a sales person to the person's house and get their signature this person is going to take a certain amount of time to travel through the famous city of Bangalore and get stuck in the traffic jam and get there so I'll be able to get at most three signatures in a day and if I lose it I lose it or four signatures or so let's suppose that therefore I employ one person is that good enough so now I'm asking the question what is the 
probability that I'll end up making more than three sales in a day because if I end up making more than three sales in a day I'll not be able to close all the sales so this becomes a Salesforce question it becomes a question of saying that based on my ability to sale I should have a sales team if my sales team is too short too small there's a probability that they will not be able to close out all my sales and I leave money on the table if my sales team is too big I'll be spaying for that sales team but they will not have enough to do so yes the binomial distribution is used left right and Center big in contact Cent In in contact centers yes it's used same same argument in contact Cent for example one reason it's used is how many escalations do you expect so in many of these so how do I execute on this so I've given the I've I've I've I've created the array now here's the command that you need to know this command calculates that formula that n choose K that formula that formula is calculated right by the way you can manually do this if you want to Once which is your 21 into 6 ^ 2 into .5 to ^ four does anyone want to manually do it once no one has any just to check otherwise you'll just trust the output that's fine but if I do this binomial stats. binomial pmf pmf stands for probability Mass function in case you want to know what Earth that means huh probability Mass function so this thing is called a prob ility Mass function probability clear Mass means it's almost as if you're thinking of a a solid material and the probability as being physical Mass how much mass is in each number how much mass is in each number so what it this number the pmf simply is this number it's a calculation of this number so now if I ask for binomial if you do it without the equal to it'll just give it directly all right so it done it just takes a little bit of time so binomial is an array so what is this number here for zero so what is this in the business context this is the chance that nobody pays on time the number of people who pay on time is zero so it's about1 16% number of if what is the chance that 1% pays on time 1.7 1.7% two people pay on time about 7.7% three people pay on time about 19% four people pay on time about 29% five people pay on time time 26% 6 people 133% 7 people about 2.7% okay curiosity question how many people would you expect to pay on time seven no remember there's 60% chance that everyone will pay yes four or five in fact the right answer is 7 into. 
6 which is about 0 4.2% so you would expect to see about four or a little more than four people pay on time and the chance of four people paying on time is what 0 1 29% and the chance that five people pay on time is about 26% if you want to plot this there's a there's a slightly sort of you know jazzed up version of a plot here so the first line says plot it you know it says binomial then there's a title there's a labels and then finally the plot command itself I think that's a plotting artifact I mean it tells you what to plot you can remove it and see what happens here's an interesting thing someone's asked what happens when I add up all the probabilities which is what I get here I don't need it it's a check sum so so one Poss one possibility of a business outcome is what is the probability that say more than six people do not pay their bills on time now in a collection team in a bank certainly is interested in that because you have to go after that there's also a question of what is the entitlement on my on my on a specific month so a bank is going to make money or a telephone company whoever is going to make money on the amount of Bill that's actually paid now the fact that a bill has been given to a person doesn't necessarily mean they'll pay it like here so how much money does the bank actually expect to make it has to have an estimate of its Revenue per month how does it get that by doing a calculation of this kind here's a little formula if it wants to help you the average of a binomial distribution is given by n into P we just discussed that total number of Trials into the probability 7 into 6 which means for example that if I think that my success probability of a sale is 10% and I approach 10 people the number of people I expect number of sales I expect to make is 10 into .1 which is 1 does that mean I will make one sell no the distribution goes from 0 1 2 3 up to 10 but the average is at one similarly the average of this distribution is where it's at 4.2 but where is the picture where is the average where is 4 4.2 somewhere here right somewhere here is 4.2 this is the center gravity of the of the distribution the standard there's a standard deviation formula if you want to know NP into 1 minus P the standard deviation will make a little more sense when we talk about the normal distribution I hope I'll get there now there's another distribution which is used a little less in practice you guys are all very practical types how is it how is it used any examples why would we use mean or standard deviation the question the kind of question he asked so I want to make an estimate for example as to how many people will pay pay my bills because based on that I will decide so I can do it two ways I can for example say what is the number of people I expect to pay my bills 4.2 what is the number of number of sales I expect to make what is the number of Errors I'd expect to have in my code what is the number of defective products what is the number of expected customer recalls that I have whichever industry you're in there are events that happen in that industry and you're trying to find out and estimate for it one estimate for it is an expectation like we discussed yesterday but remember this one is not coming from data 7 into 6 is not a calculation Based on data I didn't give you any data on people paying their bills on time I give you a theoretical distribution this is an assumption that I made H it's not an average computed on data so therefore when I make the distribution assumption and 
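Putting the commands described above in one place, here is a sketch of the binomial calculation for the seven accounts, together with the "more than three sales in a day" question from the sales-team version; the variable names are mine, not necessarily the ones in the class file.

import numpy as np
from math import comb
from scipy import stats

n, p = 7, 0.6
k = np.arange(0, 8)                        # the eight possible outcomes, 0 through 7

binomial = stats.binom.pmf(k, n, p)        # the probability mass function
print(binomial)     # ~[0.0016, 0.017, 0.077, 0.194, 0.290, 0.261, 0.131, 0.028]

print(comb(7, 2) * 0.6**2 * 0.4**5)        # manual check of 21 x 0.36 x 0.01024, ~0.077
print(n * p)                               # expected number paying on time, 4.2
print((n * p * (1 - p)) ** 0.5)            # standard deviation, about 1.3

# sales-team version: chance of more than 3 sales out of 7 leads at 60% conversion
print(1 - stats.binom.cdf(3, n, p))        # about 0.71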
I say based on that distribution what is the expected number I should see will I see that all the time no that's why there's a distribution so there was that array that you gave like probability of one person two in reality when you do real solution this this array is something which we have to chge with the historical data in the lab right no so this would be used and and it is often used where what will come from the what will come from the data one thing that can come from the data is the P yeah that's yeah okay the P just the P not the distribution itself okay but probability of one person paying two person paying yes and so that will not so for example so I want to find that next month next month for a new customer on next month how many people will pay their bills on time that's user case now here's the way I do it I ask myself last month how many people paid their bills on time but it come but it may come from the data so the P comes from the data but the calculation for saying how many people will pay their bills on time comes from the next month it is done for the next month it makes no sense to do it for this month because I already have this month exactly but let's take a situation that the probability that we added right probability of 1% probability of 2% the exact array that the python yes entire the data for the array will it come from the past data it already has because the p Has Come From the Past data yeah that's the question I that normally in a real situation that probability has to be computed in a lab based on past data right yes um Let me clarify that yes it came too quickly so there are complexities one is you might be supposing that it changes with time you might be you might be in a situation that does this that you know what I have two I have a collections problem means not enough people are paying so I might have a problem that looks like this that my my the number of people who pay their bills on time is 60% and I'm saying it's too low correct now I want to increase that how to increase that I my manager comes and says make it so such that the number of people say let's say more than five people not paying on time this number must be less than let's say .1% that's the goal now to do that I now need to change my P so I'll set my P so that the answer to this question becomes less than 0.1% that gives me a Target P now I must reset my collection process so that that P is attained to achieve that P so I can do I can create applications in various ways give me the p and I will tell you what happens or give me a situation that I want to achieve and give me a Target P such that it gets there the constant and the variables keeps yes the constant in the variable keeps changing what do I want to fix keeps changing so that dep this is a model this is a mathematical model how you use it is up to you this is one particular use case but there'll be many use cases for this you see one in logistic regression for example the PO distribution is a very similar distribution except that for the poro distribution that has a mass function that looks like this now this Mass function counts but does not count relative to a maximum the binomial goes from 0 to n 0 1 2 up to n the posa there is no n there is no total number of things for example I might ask the question how many fraud cases do I expect to see there's no sort of Maximum to that I could frame it as saying that tell me the total number of cases there are and that is my n and then I'll figure out based on a p how many fraud 
cases there are but there are situations where this maximum is something that doesn't quite make sense how many fraud cases are there how many cracks are there micr fractures are there on this bottle it's a count right how many eggs will the chicken lay it's a count it's not in some way a proportion like thing so if it's if you're in a pure count like situation you are in the situation of the so-called por distribution whose Mass function has this slightly different form called e ^ minus Lambda Lambda to the^ X where Lambda is the average if on an average six customers arrive every 2 minutes at a bank during busy working hours what is the probability that exactly four customers arrive in a given minute what is the probability that more than three customers will arrive in a given minute this is slightly different from a binomial why the reason is in the previous case they were asking for how many customers did not pay but there was a total Universe of customers seven customers there was a sample sample space here there isn't I'm not telling you how many could have come there's a series and that series could go up to anything so to speak this is the typical situation of a personal distribution where it's not a question of saying independent trials and how many were successes it is that I'm simply counting how many there are and I have no ideas to how many there could have been potentially how many fraud cases I do not know how many micro fractures I do not know how many customers could have arrived I do not know there's no maximum to it so there's similar calculation here for the same thing if you open the personal distribution example file now for the person distribution that formula for the binomial there were two numbers you needed to put in the n and the P for the Poo there is only one number there is only one number and that number is usually called the rate the rate at which my customers are arriving the rate at which I get fraud the rate or the density of of my cracks it's a rate number you can think of this rate number as a product of n and p as as the total number of opportunities multiplied by the product if you want to think of it as that so for the pora I need to be able to specify the rate and now I do exactly the same thing I again calculate the poso probability stats. 
poso do pmf now for computational purposes I am setting the range from 0 to 20 I can set to be any High number that 20 is not coming from my data that 20 is coming for a computational reason because I want to do the calculation for a finite number of points and as you'll see after 20 the numbers are very very small so the 20 is not there from the problem the 20 is there for my visualization but how do you come to that I can make it any make it any other number if you make it too low you'll be leaving some probability to the more than 20 you make it too high you'll you will be calculating a lot of zeros so what is my problem let's go here my problem is what is the probability that exactly four customers arrive in a given minute six customers arrive every 2 minutes at a bank what is the probability that exactly four customers arrive in a given minute what have I put my rate as six and here is my distribution this is what 2.4 into 10 ^ minus 3 so this is what 0.2 let's see what happens so what is the probability of 0 02 what is it for one1 for two for three 8 for four no for what so no what is it for what is it for say this zero 1 2 2 3 4 what is it for 4.13 133% what is it for five 16% was it what is it for six 16% what is the average number of customers I expect to see six 16% what is it what is that seven 133% for 8 10% now it start going down and it'll go down and by the time I've reached 20 it is already 0000 1 so if you had gone beyond 20 I would have seen even smaller numbers but I could have stopped for example let's say at 15 if I stopped at 15 where would this have stopped 1 2 3 4 5 it would have stopped here which is fine approximation 20 20 is an approximation 20 is 20 is a guess here is a distribution plot the same thing this is the plot of the distribution function whose average is at six by the way what is the answer to the question what is the probability that exactly four customers arrive in a given minute be slightly careful be slightly careful six customers arrive every 2 minutes the question asks for exactly four customers arriving in 1 minute which means if I'm yes if I'm putting six as the rate then I have to convert this question to saying what is the probability that exactly how many customers arrive every 2 minutes each customers arrive every 2 minutes or what I can do is I can change my rate to three this one is a distribution where you do most of the calculations with this is the normal distribution the distribution that corresponds to age to the mean spad all the continuous variables that we were looking at numbers numbers so if you're dealing with numbers then you deal with a distribution that has a shape like that this is called the normal distribution now the normal distribution the reason I wanted to get to is this because because of this picture now this picture puts the standard deviation in context so yesterday we talked about the standard deviation and a question often asked is what does a standard deviation mean what is standard about the standard deviation this picture tells you what is standard about the standard deviation so this picture means that if I have a normal distribution then the chance of being within one standard deviation is 68% as a numerical quantity this distribution is a distribution that has a mean and it has a standard deviation now the standard deviation has to be defined in such a way and the way the standard deviation is defined implies that the chance of being within one standard deviation is 68% the chance of being within two 
standard deviations is 95%, and the chance of being within three standard deviations is 99.7%. So now if I tell you something like this — that for a group of people the mean height is, say, 5 feet 8 inches with a standard deviation, sometimes denoted by sigma, of say 2 inches — I've told you some interesting things, if you allow me a normal distribution. I've now told you that 68%, or roughly two-thirds, of the people are between 5'6" and 5'10": the mean is 5'8", one standard deviation is 2 inches each way, so this is 5'10" and this is 5'6", and that's about 68%. Sometimes it's easy to remember it as two-thirds — close enough; two out of three are between these two heights. 95% are between what and what? Between 5'4" and 6 feet. 95% are between those two heights, and one in twenty is outside this range. So if I tell you the mean and the standard deviation, I've actually told you a reasonable amount about how the data is spread. Sometimes the mean and the standard deviation are reverse-engineered, so to speak. If you're a professional, say — I often do this — people often ask, where is the data? Nobody has any data. So, as you're trying to figure out what to work with, you might ask a question: when do you typically arrive? And someone says, oh, 9 o'clock thereabouts. What's your earliest arrival time? 8:30. What is your latest? 10:00. Looking at this, you can decide what to assume: that the whole range of the distribution is, say, from 8:30 to 10:00. And now this pattern tells you that if I go for three sigma, covering 99.7%, the whole range is about six standard deviations. So to find the mean you just take the middle of it, and to find the standard deviation you take the whole range and divide by six. I can get an idea of what the average is and what the standard deviation is without getting any data from you at all, just by getting a sense of the extremes. It's a crude way of doing things, but it allows you to cheat with essentially very minimal information. So remember these pictures — they're helpful; they give you an idea of what the distribution is like. Now, by the way, these numbers are easy enough to calculate, so we'll do some calculations. The normal distribution has a bell-shaped, symmetrical density; the tails can extend, and it depends on two parameters, mu and sigma. See the power of it: by giving you two numbers, I've given you characteristics like this, and I can do calculations. And this is the density function — that equation, if you want to look at it; nobody does anything with it directly, but you can do calculations on it. So here's a calculation — I'm not sure whether this is one we had worked on, but it's one we actually do in some detail, so let's do it. The mean weight of a morning breakfast cereal pack is 0.295 kg — 295 grams — with a standard deviation of 0.025 kg, or 25 grams, and the random variable, the weight of the pack, follows a normal distribution. What is the probability that the pack weighs less than 280 grams? Now why would someone be interested in this? One possibility, perhaps, is that the target for the pack is something like 300 grams, and you're trying to understand whether you are within tolerances, more or less, or something of that sort. So: what is the probability that the pack weighs less than 280 grams?
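Before solving it, to close out the bank-teller example from a few minutes ago, here is a sketch of the scipy calculation described there, with the rate converted to per-minute (six customers per two minutes is a rate of 3 per minute); the 0-to-20 range is only a computational cutoff, not part of the problem.

import numpy as np
from scipy import stats

rate = 3                                  # per-minute arrival rate
k = np.arange(0, 21)
poisson_pmf = stats.poisson.pmf(k, rate)

print(poisson_pmf[4])                     # P(exactly 4 customers in a minute), about 0.168
print(1 - stats.poisson.cdf(3, rate))     # P(more than 3 customers in a minute), about 0.35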
What do I need to do? What does my picture look like? My average is 295, standard deviation of 25, on the gram scale, and I want to find the chance of being to the left of 280 — I need this area. Calculating this area is actually quite easy, so let me calculate it. I'm going to do it this way: stats.norm.cdf — CDF stands for cumulative distribution function, and I'll tell you what's cumulative about it. What is the number I'm interested in? The probability of being less than 280. Then I'm going to do something here: comma, location equal to — location means the middle of the distribution; what is the mean? 295. Comma, scale equal to — what is the standard deviation? 25. Are the numbers correct? About 27%. Is that right — instead of 280, should it be 27? No, I'm calculating the answer to this question: what is the probability that the pack weighs less than 280 grams? That is the question. The way I set it up was to ask: what is the chance of being less than 280 when the mean is 295 and the standard deviation is 25? Because of certain technical aspects of other functions, the mean here is referred to as "location" and the standard deviation as "scale". If those terms confuse you, just ignore them; the first argument is the number of interest. Fine, go ahead with this. Do you understand how the code works? All right, let's do the second problem: what is the probability that the pack weighs more than 350 grams? What do you think the answer should look like? One minus — yes, one minus what? 1 minus stats.norm.cdf of 350, comma, same location and scale. About 1.39%: that's the chance of being more than 350. Clear? So what does CDF do? Cumulative distribution function — it calculates the area to the left, the probability of being less than. Therefore, if I want the probability of being more than, I take one minus it. Why? Because the whole probability is one. What about the third one: what is the probability that the pack weighs between 260 grams and 340 grams? How do we do this? I need to be between 260 and 340, so it's "less than 340" minus "less than 260". Let's get lazy — what is this number, with 340 and 260? About 88%. So 88% of my packets are going to lie between 260 grams and 340 grams. It's an assumption — an assumption we are making. Remember, there isn't any data at all here. What numbers am I using? A mean and a standard deviation. So what is the advantage I have? I don't need the data; all I need is this mean and standard deviation. What is the price I pay? An assumption on the distribution. Could it be another distribution? Yes — instead of norm I could have another distribution sitting there; there's a whole range of other possibilities, the binomial is one, and there are others. You would decide based on whichever distribution makes the most sense for your application. In certain cases you know what the nature of those distributions looks like — for example, if you're looking at lifetimes of things, it's an exponential distribution or a gamma distribution or something of that sort. But there's a certain advantage to the normal distribution because of something called the central limit theorem; a little bit of it will be mentioned in the next residency. The central limit theorem essentially says
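The three pack-weight questions above as scipy calls, mirroring the commands just described (mean 295 g as loc, standard deviation 25 g as scale):

from scipy import stats

mu, sigma = 295, 25    # grams

print(stats.norm.cdf(280, loc=mu, scale=sigma))       # P(weight < 280 g), about 0.27
print(1 - stats.norm.cdf(350, loc=mu, scale=sigma))   # P(weight > 350 g), about 0.014
print(stats.norm.cdf(340, loc=mu, scale=sigma)
      - stats.norm.cdf(260, loc=mu, scale=sigma))     # P(260 g < weight < 340 g), about 0.88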
that if I take the averages of things or the totals of things I end up with the normal distribution the normal distribution is a result of averaging so if my observation is a total of little things then probably the normality assumption is a good assumption for that large data doesn't necessarily mean normal but if the observation is the total or the accumulation of lots of things so for example height is often normal why because our height is a is in some way a random combination of many things maybe the height of each of our cells and things of that sort so the normal distribution is often used as an assumption based on the central limit theorem the other part of it is that even if the data doesn't look like a normal distribution the the sort of generation for it the sample from a normal distribution doesn't necessarily look like a sample from a normal distribution so even like we saw yesterday the B shaped curve so it's hard to look at the data and say that it is not normal so the normal distribution is person to the that is often made um in the absence of you any other information on the data um it is obviously wrong in cases where the data has a very strong skew in one sense to another but remember in many cases you're not even talking about the data the question that you're asking is not a data question the question that you're asking is a probability question it's a situational question you're asking for effectively the following thing why would some why why is this an analysis of this kind done what data is it going after if anything you're talking about the data being normal or not normal what data is it even referring to why do I care about the first question what is the probability that a pack weighs less than 280 Gam one context for it could be that if a person buys a pack what is the chance that they're getting a a light pack in other words something that is less than 280 G the way you have to design your packaging also right true but my question is this where in all of that is a data where is the data in this how do you even think of the data is it a data problem at all I'm asking the question that is my product in Spec in other words what data are you referring to what is this a data science issue at all or is it not we asking the question is it normal is it not is is it a data question how you reach the customer yes and what kind of packaging sizes so what what data is that so data of what data of my skus how many data observations which data observations for whom for which customer when what data huh so kog quality check for what I could argue for example that this is about saying that if he goes in and buys that breakfast cereal will he get something that is below 2 180 g for the value of the price we pay yes but where is the data in the supermarkets for examp there is no it's a business question what dat dat does it apply to what I'm trying to say is that it's not a data problem at all you can solve it using you can say I'm going to gather a lot of data to solve the problem the data we already have on the basis of which we have no I'm telling you that this could come from the past mean this could so you could say that I'm going to I'm going to gather the data to get this number and get this number that's a good answer that in order to solve my business problem I need a mean and a standard deviation so that I can get a handle of what is the chance that it'll be underweight now that mean and standard deviation has to come from somewhere and I can say I will use data to get 
that mean and the standard deviation that's a good answer that you'll now say why do I need data in order to calculate mean and standard deviation why do I mean mean and standard deviation because that's the that's the least data I need in order to be able to answer this question which is the question I'm interested in answering will he buy the product will my network go down will I be under product there's a business question I'm interested in answering or there's a tech question that I'm interested in answering and often that is made independently of the data so for example the car has to stop autonomous vehicles right the the data that the car is going to react to is the scene that the car sees in front of it but that's not the data on which the algorithm is going to be based so the the so the data that the car sees is what it is reacting to similarly this is reacting to only one number 280 G I'm now solving the 280 G Problem by saying is it this so I'm giving you a packet and I'm asking the question is this underweight does this have less water than it should I'm interested only in that I'm not interested in any data so in hypothesis testing what we'll do when we come back is to be able to close out that question and say therefore from data how do we get to numbers like this which now means that I have to put the two pieces of this residency together I have to put together the idea of calculating means and standard deviations from data to the idea that it is a parameter being estimated to solve a problem so you would say that that data this 295 comes from data that immediately raises an issue but if it comes from data it comes from a sample and if it comes from a sample it's not accurate and if it comes if it's not accurate then how well does it solve my problem and life keeps going in circles like that so this is the probability side to it which explains why I need to have means and standard deviations in order to do a calculation and the descriptive part says I have the means and the standard deviations to do the calculations so does that mean that when when I had the sample size yes uh sample set that if it had normal distribution then I'm more reli no no if it had a normal distribution then maybe I'd be able to get good numbers around this and the plus minuses would be symmetric this calculation doesn't rely on the normality behind the 4 295 estimate this calculation lies on the normal ity of the future data which doesn't exist at all but what I'm asking is Will these numbers be more reliable the MU and the sigma if I had normal distribution no not necessarily not necessarily if I have normal distributions I'll be able to use certain very specific formulas that we'll see if it is not normal those formulas may break down a little bit so those formulas help me calculate huh so normality helps me calculate it helps me calculate how good these numbers are it also helps me calculate using normality what the answers to questions such as these are but the normality that I use right now 10 minutes ago had nothing to do with data and that to some extent is the power of probability that you're being able to answer a question like saying do I expect that the weight is going to be less than 280 G without having data in place for it the simpler answer would be give me the data and count how many are less than 280 G that's the simplest answer right what is the chance that the pack less than 250 g empirical go collect 100 packets and find out how many of them have weight less than 280 gam that's 
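Those two routes — count it directly in a sample versus assume a distribution and plug in an estimated mean and standard deviation — sit side by side in code; the pack weights below are made-up numbers, purely for illustration.

import numpy as np
from scipy import stats

# a hypothetical sample of pack weights in grams (illustrative only)
weights = np.array([298, 301, 276, 310, 285, 299, 322, 268, 305, 290])

# route 1: count directly
print((weights < 280).mean())                    # empirical fraction below 280 g

# route 2: estimate mu and sigma from the sample, then use the normal assumption
mu, sigma = weights.mean(), weights.std(ddof=1)
print(stats.norm.cdf(280, loc=mu, scale=sigma))  # model-based answer to the same question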
the answer to that question so why are we doing all of this because you don't have that because you don't have that data why don't you have that data because that's not the question I'm asking I'm asking the question is it less than 200 80 g I'm looking at a computer program in front and I'm asking what is the chance that there are more than five bugs in this code I'm looking at all the computers in my office and I'm asking um what is the chance that all of the employees today there's going to be more than two hacks or malicious attempts on my server there is no data yet there will be but by that time the hacks happened but I still need those numbers and I get those numbers using these distributions to operationalize those distributions I need certain numbers and I can guess them I can beg them I can borrow them I can steal them I can estimate them from data I can ask for a friend I can read a book see a standard I can look at market research yes I can do any number of things in order to get at those numbers I can look at an industry standard those two pieces will put together Shi is getting really nervous H noral this this picture is definitional for the normal distribution this picture this is definitional for the normal distribution so if you look at Six Sigma 6 Sigma will cover 99.7% 3 out of a th000 will lie outside the plusus 3 Sigma range not everything but roughly 3,000 this is 3 Sigma 6 Sigma 3 Sigma usually says 3.4 defects per million opportunities which is actually not 6 3 Sigma is not 6 Sigma it's 4.5 Sigma so 4.5 Sigma is about 3.4 into 10^ minus 6 that's 4.5 Sigma so if you look at Six Sigma literature there's a confusion there basically what it says is that if you have in order to get 3.4 defects per million to the customer you have to be within Six Sigma which is about one in a billion this is at this is at three standard deviations plusus three standard deviations if I go to plus- 4.5 standard deviations I'll be at around 3.4 into the^ minus 6 to reach that for the customer I need to go to Six Sigma here which is about one in a billion I must be more accurate in my factory floor for my customer so if I reach Six Sigma my customer will reach 4.5 Sigma and four customer 4.5 Sigma is the 3.4 10 the^ minus 6 so if you look at 3.4 the^ minus 6 it doesn't correspond to 6 Sigma little confusing but that's the way Six Sigma literature is written normal distribution is the normal distribution is just this as a formula Plus or plus plus or plus orus 2 Sigma is 95% actually actually plus or minus 1.96 Sigma is 95% and 3 Sigma is about 99.7% Infinity by definition goes to Infinity you want to cover everything plus minus infinite standard deviation
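The sigma-level figures quoted here are easy to check directly from the normal tail; the 3.4-per-million number is the one-sided tail at 4.5 sigma, and the one-sided six-sigma tail is on the order of one in a billion.

from scipy import stats

print(2 * (1 - stats.norm.cdf(3)))     # outside +/- 3 sigma: ~0.0027, about 3 in 1,000
print(1 - stats.norm.cdf(4.5))         # one-sided 4.5 sigma tail: ~3.4e-06
print(1 - stats.norm.cdf(6))           # one-sided 6 sigma tail: ~1e-09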
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_18_Unsupervised_learning_mixture_of_Gaussians_moment_methods.txt
OK. So I guess let's get started. Today, in this lecture, we are going to discuss a few small things that remained from previous lectures, and then we're going to move on to unsupervised learning. The first thing: recall that last time we talked about the implicit regularization effect of the noise, and we mentioned that in certain cases you can prove that noisy GD implicitly prefers solutions with a smaller value of this quantity R(theta), which is defined to be something like the trace of the Hessian. In the first part of this lecture, I'm going to spend probably 10 to 15 minutes briefly discussing why this is a reasonable thing to try to minimize, or to regularize — why the trace of the Hessian is a meaningful quantity. This part won't be exactly rigorous, because you have to do some approximations and so forth; I'm just going to do a somewhat heuristic derivation to justify why something like the trace of the Hessian would be useful to regularize. So what is the Hessian? Maybe I should write l hat — this is the Hessian of the empirical loss. For simplicity, let's only consider one data point. Let f_theta(x) denote the model output, and let l(f, y) be the loss function. Then l hat(theta), in this case, is just l(f_theta(x), y). Now we can compute what the Hessian is — let's call the loss l hat, just to be consistent in terms of notation. The Hessian is the gradient of the gradient. What's the gradient? If you use the chain rule, what you get is (partial l / partial f) times (partial f / partial theta). The first factor is a scalar — l is a scalar function of f, and f is a scalar — so the gradient is this scalar times the gradient of f_theta(x) with respect to theta. Now you are taking the gradient of a product of two quantities, one a scalar and the other a gradient, and you apply the chain rule again. From the first factor you get the second-order derivative of l with respect to f, times the gradient of f_theta(x) with respect to theta, times the gradient of f_theta(x) transposed — this is something you can verify offline if you write out all the coordinates and do the calculation. From the other factor you get (partial l / partial f) times the second-order derivative of the model, the Hessian of f_theta(x) with respect to theta, which is a matrix of dimension p by p if p is the number of parameters. In each term you have a scalar multiplying a matrix, so the whole thing is a p-by-p matrix. This is a general formula, which is rigorously true. Now suppose the loss function is l(f, y) = 1/2 (y - f)^2. Then what does this formula become — what is the second-order derivative of this loss function with respect to f?
The loss is a quadratic function of f, with leading term f squared over 2, so its second-order derivative with respect to f is 1. So the first term becomes 1 times the gradient of f_theta(x) times the gradient of f_theta(x) transposed. And the first-order derivative with respect to f is (f - y), so the second term becomes (f - y) times the Hessian of f_theta(x). This decomposition is often called the Gauss-Newton decomposition; I don't know exactly why it has this name, but it must have something to do with those two famous people. What you can see is that the first term is PSD: it is the outer product of a vector with itself, a rank-one PSD matrix, scaled by a non-negative number. The second term is not necessarily PSD, so the Hessian as a whole may not be PSD, which is expected because the function is non-convex; but one of the two terms is PSD. In general, whenever the loss is convex in f, and by loss function I literally mean the outer loss, either the squared loss or the cross-entropy loss, which are both convex, the first term is PSD. And empirically, people have found that the second term is small in most cases. There could be multiple reasons for this. One reason: at a global minimum, f_theta(x) equals y, because a global minimum fits the data exactly, so the factor (f - y) is literally zero and the second term vanishes. Of course, it's not always true that you fit the data at every point, but somehow people find the second term tends to be smaller than the first. So if you don't care about very nuanced properties of the Hessian, the first term is a reasonable approximation. In certain cases you do care about the nuances; for example, if you care whether the function is convex, even a single negative eigenvalue makes it non-convex, and then the second term becomes important. But for our purposes the second term is not that important. So suppose we ignore the second term; this is a big assumption, justified either because it is empirically small or because we are at a global minimum. Then the trace of the Hessian is approximately the second derivative of the loss with respect to f, which is 1 for the squared loss, times the trace of the outer product of the model gradient with itself, which equals the squared two-norm of the gradient of f_theta(x) with respect to theta. So by minimizing the trace of the Hessian, you are essentially minimizing the two-norm of the gradient of the model with respect to the parameters, that is, its Lipschitzness in theta. A quick numerical check of this decomposition follows.
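As a sanity check of the Gauss-Newton decomposition, here is a minimal numpy sketch. The toy model, data, and numbers are made up for illustration, and all derivatives are taken by finite differences rather than autograd.

```python
# Check H = (df)(df)^T + (f - y) * d^2 f for the squared loss
# l(theta) = 0.5 * (f_theta(x) - y)^2 on a tiny hand-written model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)
y = 0.7
theta = rng.normal(size=3)

def f(theta):
    # toy nonlinear model: f_theta(x) = tanh(theta . x)
    return np.tanh(theta @ x)

def loss(theta):
    return 0.5 * (f(theta) - y) ** 2

def grad(fun, t, eps=1e-5):
    g = np.zeros_like(t)
    for i in range(len(t)):
        e = np.zeros_like(t); e[i] = eps
        g[i] = (fun(t + e) - fun(t - e)) / (2 * eps)
    return g

def hess(fun, t, eps=1e-4):
    H = np.zeros((len(t), len(t)))
    for i in range(len(t)):
        e = np.zeros_like(t); e[i] = eps
        H[:, i] = (grad(fun, t + e, eps) - grad(fun, t - e, eps)) / (2 * eps)
    return H

df = grad(f, theta)                  # gradient of the model output
d2f = hess(f, theta)                 # Hessian of the model output
H_loss = hess(loss, theta)           # Hessian of the loss (numerically)

gauss_newton = np.outer(df, df)      # PSD term; d2l/df2 = 1 for squared loss
residual_term = (f(theta) - y) * d2f
print(np.allclose(H_loss, gauss_newton + residual_term, atol=1e-3))   # ~True
print("trace(H) =", np.trace(H_loss), " |grad f|^2 =", df @ df)
```

The last line compares the trace of the loss Hessian with the squared gradient norm of the model; they differ only by the (f - y) times trace of the model Hessian term that we argued is often small.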
So minimizing the trace of the Hessian is, heuristically, similar to minimizing the Lipschitzness of the model with respect to theta. Why is minimizing the Lipschitzness of the model with respect to theta useful? First of all, it is indeed useful: if you explicitly minimize this quantity, people have found it helps empirically. And why? If you allow some heuristics, you can argue it is very similar to minimizing the Lipschitzness of the model output with respect to the hidden variables, which is something we discussed a few weeks ago when we talked about all-layer margins. Recall that if your parameters theta consist of a bunch of layer weight matrices in a deep network, then the derivative of the model with respect to the weight matrix of layer i is the derivative of the model with respect to the pre-activation of the layer above, times the hidden activation below, transposed: partial f / partial W_i = (partial f / partial h'_{i+1}) times h_i transposed, where h'_{i+1} = W_i h_i is the pre-activation of layer i+1 and h_i is the post-activation of layer i. I called this the Hebbian rule, but really it is just a simple chain rule: the derivative with respect to a parameter factors through the hidden variables it touches, so it depends on the derivative with respect to h'_{i+1} and on h_i. The point is that the derivative with respect to a parameter is closely related to the derivative with respect to a hidden variable and the norm of that hidden variable. Because this matrix is an outer product of two vectors, its Frobenius norm equals the two-norm of partial f / partial h'_{i+1} times the two-norm of h_i; a small numerical check of this factorization follows. So minimizing the Lipschitzness with respect to the parameters is similar to minimizing the Lipschitzness with respect to the hidden variables, which is exactly what the all-layer margin captures: a model that is more Lipschitz in its hidden variables has a larger all-layer margin, so this regularization is like maximizing the all-layer margin. None of these steps can be made 100% rigorous; some of the intermediate equations are exact, but I don't think the whole chain can be made completely rigorous. That is probably just the nature of neural networks, where things don't match exactly. But the intuition is simply that the trace of the Hessian relates to the Lipschitzness of the model in the parameters, which relates to the Lipschitzness in the hidden variables, which is what the all-layer margin captures. Any questions?
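As a small numerical check of the factorization above, here is a hedged numpy sketch on a made-up one-hidden-layer network. The only point is that the gradient of the output with respect to a weight matrix is an outer product, so its Frobenius norm splits into the product of two vector norms.

```python
# Verify df/dW = (df/dh') h^T and ||df/dW||_F = ||df/dh'||_2 * ||h||_2
# for a toy model f(W) = v^T relu(W h).
import numpy as np

rng = np.random.default_rng(1)
h = rng.normal(size=4)        # hidden activation feeding the layer
W = rng.normal(size=(5, 4))   # layer weights
v = rng.normal(size=5)        # top layer

relu = lambda z: np.maximum(z, 0.0)
def f(W):
    return v @ relu(W @ h)    # scalar model output

hprime = W @ h
df_dhprime = v * (hprime > 0)          # df / d(pre-activation)
dW = np.outer(df_dhprime, h)           # analytic gradient w.r.t. W

# finite-difference check of the analytic gradient
eps = 1e-6
num = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        E = np.zeros_like(W); E[i, j] = eps
        num[i, j] = (f(W + E) - f(W - E)) / (2 * eps)

print(np.allclose(dW, num, atol=1e-4))                                   # True
print(np.linalg.norm(dW),
      np.linalg.norm(df_dhprime) * np.linalg.norm(h))                    # equal
```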
OK, so that wraps up the leftover remarks from last lecture about the implicit regularization of noise. There's one more leftover item, which is my omission: I forgot to provide the proof of one of the theorems we discussed about two weeks ago, about the implicit regularization effect in the classification case. At the end of that lecture we only stated the theorem and gave the basic intuition, but we weren't able to show the proof. The proof is very simple and short, just one page, and I think it's a very nice proof, so I really want to show it to you. Let me first remind you what the theorem was about. Two lectures ago, the setting was linear models for classification trained with gradient flow, that is, gradient descent with infinitesimal learning rate, and we wanted to understand the implicit bias of the algorithm in this case. The theorem was that gradient flow converges in direction to the max-margin solution, in the sense that the normalized margin of the iterate converges to the max margin as t goes to infinity. Here w_t is the gradient flow iterate at time t, gamma is the normalized margin, and gamma bar is the maximum normalized margin. At the end of that lecture I discussed the intuition; let me very briefly summarize it. The main intuition is that the cross-entropy loss, after a heuristic approximation, is an approximation of the max margin. Concretely, a heuristic calculation gives that the log of the loss is approximately minus the norm of w times gamma of w. So minimizing the loss means either making the norm of w bigger or making the margin bigger. It turns out both tendencies are at play: you can show the norm grows to infinity and the margin grows to the largest possible margin. That's what we're going to prove. Any questions so far? One of the key observations at that point was that log-sum-exp behaves like a max when its inputs have a large scale. Today I'm going to give a formal proof of this theorem, which in my opinion is very elegant and simple. We will prove it only for the case where the loss function is the exponential loss, l(t) = exp(-t). Recall from that lecture that the logistic loss, despite its name, is actually very close to the exponential loss, so we'll just work with the exponential loss, which is almost the same as the logistic loss. Its main feature is that as its argument t, which plays the role of the margin, goes to infinity, the loss goes to 0: when the margin is very large, the loss is very small.
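Before the proof, here is the setup and the claim written out compactly in symbols; this is just a restatement of the quantities already introduced, not an additional assumption.

```latex
% Setup: exponential loss \hat{L}(w) = \sum_{i=1}^{n} \exp(-y_i w^\top x_i),
% gradient flow \dot{w}_t = -\nabla \hat{L}(w_t), and w^* the unit-norm
% max-margin direction.
\[
\gamma(w) \;=\; \frac{\min_i \, y_i w^\top x_i}{\|w\|_2},
\qquad
\bar{\gamma} \;=\; \max_{\|w\|_2 = 1} \, \min_i \, y_i w^\top x_i .
\]
\[
\text{Theorem: } \ \gamma(w_t) \;\longrightarrow\; \bar{\gamma}
\quad \text{as } t \to \infty .
\]
% Driving heuristic, valid for large \|w\|_2:
\[
\log \hat{L}(w) \;\approx\; -\,\|w\|_2 \,\gamma(w).
\]
```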
And the idea is that we can consider the smooth margin. The smooth margin is defined to be minus the log of the empirical loss, divided by the two-norm of w. Recall the heuristic equation we established last time, that the log of the loss is approximately minus the norm of w times gamma of w; that is exactly what motivates this definition. The smooth margin is supposed to be approximately equal to the margin gamma of w, but not exactly, because that relation is only approximate. So we work with this smoother version, which is almost the same as the margin but much closer to the loss function l hat. Writing it out exactly, the smooth margin is minus the log of the sum over i of exp of minus y_i w transposed x_i, divided by the norm of w. And you can show that the margin is always at least the smooth margin. The reason is that the sum is at least its largest term, which is exp of minus the minimum over i of y_i w transposed x_i, that is, exp of minus gamma of w times the norm of w; taking minus the log, the smooth margin is at most gamma of w. So the smooth margin is something close to the margin, but smaller. That's why it suffices to show that the smooth margin of w_t converges to gamma bar, by a sandwich argument: we also know that gamma of w is always at most gamma bar, so the margin is sandwiched between the smooth margin and gamma bar. If the smooth margin converges to gamma bar, then gamma of w_t has to converge to gamma bar as well, because there is no way for it to go beyond gamma bar. So that's the plan: we prove that even the smaller quantity, the smooth margin, converges to gamma bar, and then the larger quantity does too. The proof is actually pretty simple. We'll show that gradient flow increases the quantity minus log of L hat of w_t, intuitively because it decreases L hat of w_t. Let's do this concretely. The statement that it increases is almost obvious, because the loss itself is decreasing; but how much it increases requires a short mathematical derivation, and that rate is what we actually need. Concretely, recall that w dot of t equals minus the gradient of L hat at w_t; this is the definition of gradient flow. Then the time derivative of minus the log loss is computed by the chain rule: first the chain rule for the log, which gives a factor of one over L hat of w_t, and then you look at how the loss depends on w and how w changes, which gives the inner product of the gradient of L hat at w_t with w dot of t. And recall that w dot of t is exactly minus that gradient, so up to a sign,
you get the squared two-norm of the gradient of the loss function divided by L hat of w_t, and this is at least 0. So this shows that minus the log loss increases as t grows. But the important thing is how fast it increases, which is exactly this quantity; this is what we're going to use. That the whole thing increases is not surprising, because the loss is decreasing; we also want to know how fast it is increasing. And by the way, it is useful to note that you can also write the numerator as the squared norm of w dot of t, just because the gradient of L hat is equal to minus w dot of t; we're going to compare against that form later. Now, with this, we can track what happens to the log loss after time t: minus log L hat of w_t equals minus log L hat of w_0 plus the integral from 0 to t of the derivative of this quantity, which, using the equation above, is the integral of the squared norm of w dot of s divided by L hat of w_s, ds. So now we basically know how large the negative log loss is. Recall that what we actually care about is the smooth margin, this quantity divided by the norm of w_t, and how it goes to gamma bar as t goes to infinity. We have dealt with the numerator; the next thing is to understand the denominator, the norm of w_t, and compare the two. So look at the norm of w dot of t. By Cauchy-Schwarz, the inner product of two vectors is at most the product of their norms, and the norm of w star is assumed to be 1, so the norm of w dot of t is at least the inner product of w dot of t with w star, where w star is the unit-norm direction of the max-margin solution. Now plug in the definition of w dot of t as minus the gradient of L hat at w_t, and then plug in the actual formula for that gradient. The inner product becomes the sum over i of exp of minus y_i w_t transposed x_i, times y_i x_i transposed w star; the minus sign in front cancels against the minus that comes out of differentiating the exponential loss. And each factor y_i x_i transposed w star is at least gamma bar, just because gamma bar is the margin of w star: every data point has margin at least the minimum margin over the data points, which is exactly what gamma bar is. So the whole expression is at least gamma bar times the sum over i of exp of minus y_i w_t transposed x_i, which is exactly gamma bar times L hat of w_t. Call this equation one: the norm of w dot of t is at least gamma bar times the loss. With this, we can proceed to further lower bound how fast the negative log loss grows.
So with this, you get that minus log L hat of w_t is at least minus log L hat of w_0 plus an extra term, which I'll control in a second. But maybe one more remark before I use it, to interpret what equation one is really doing. In some sense, it says that w dot of t is correlated with w star: the inner product of w dot of t with w star is at least a nonnegative quantity, and how correlated they are depends on gamma bar and the loss. Because w dot of t is correlated with w star, w dot of t itself cannot be too small, at least compared to the loss. So what it's really saying is that if the loss is not yet small, then w has to keep changing, and if w has to keep changing, then minus the log of the loss has to keep increasing. It's a little counterintuitive phrased this way, but that's the content. So next we control the extra term, the integral circled above. Using equation one on one of the two factors of the squared norm of w dot of s, the integrand, squared norm of w dot of s over L hat of w_s, is at least gamma bar times the norm of w dot of s: one factor of the norm cancels against L hat in the denominator, and gamma bar comes out in front. Then, by the triangle inequality for integrals, the integral of the norm of w dot of s is at least the norm of the integral of w dot of s, which is the norm of w_t minus w_0; so, up to a constant depending on w_0, the extra term is at least gamma bar times the norm of w_t. Why do we care about all of this? Because now we can compare how fast the negative log loss grows against how fast the norm of w grows, and the ratio between them is exactly the definition of the smooth margin, which is what we fundamentally care about. Dividing through by the norm of w_t, the smooth margin of w_t is at least gamma bar plus a term of the form constant over the norm of w_t. The constant is fixed, and the norm of w_t goes to infinity as t goes to infinity, so that extra term converges to 0. So taking the limit as t goes to infinity, the smooth margin is at least gamma bar in the limit. Strictly speaking, this only gives the lower bound; for the other direction, recall that gamma bar is the maximum margin, so gamma of w_t, and hence the smooth margin, can never exceed gamma bar. Putting the two together, the limit of the smooth margin of w_t is exactly gamma bar, and by the sandwich argument the normalized margin gamma of w_t converges to gamma bar as well. So we're good. There is a small numerical illustration of this convergence right after this. Any questions? OK, so with this we have basically concluded our section about implicit regularization.
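Here is a small numerical illustration of the theorem, not part of the proof. It runs plain gradient descent on the exponential loss for a small made-up separable dataset, and, since the data is two-dimensional, brute-forces the max margin over unit directions for comparison. The normalized margin increases toward gamma bar, though only at a roughly logarithmic rate in t.

```python
# Gradient descent on the exponential loss for separable data; watch the
# normalized margin min_i y_i w^T x_i / ||w|| creep toward the max margin.
import numpy as np

X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.5, -0.5], [-0.5, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# brute-force the max normalized margin over unit-norm directions (2-D only)
angles = np.linspace(0.0, 2.0 * np.pi, 200000)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
gamma_bar = np.max(np.min(y[None, :] * (dirs @ X.T), axis=1))

w = np.zeros(2)
lr = 0.1
for t in range(1, 200001):
    margins = y * (X @ w)
    grad = -(np.exp(-margins) * y) @ X      # gradient of sum_i exp(-y_i w.x_i)
    w -= lr * grad
    if t % 50000 == 0:
        gamma_w = np.min(y * (X @ w)) / np.linalg.norm(w)
        print(t, gamma_w, gamma_bar)        # gamma_w increases toward gamma_bar
```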
So just to very quickly wrap up: this is the end of the section on implicit regularization, and we have talked about a few sources of it. Initialization: a small initialization prefers a certain kind of solution, typically a small-norm solution, and in one of the cases we also showed an interpolation between small and large initialization, so you can characterize the implicit bias for any initialization scale. We talked about the classification problem, where you get the max-margin solution. And we also talked a lot about noise. In all these cases, you have something in your optimizer that was only designed to make optimization faster in some sense, but somehow, as a side effect, you get an implicit regularization effect. OK, any questions? If there are no questions, let me move on to the final part of this course, which is about unsupervised learning, representation learning, and so on. In this lecture and the next ones, basically the next two and a half lectures, we're going to talk about unsupervised learning. There is not that much theoretical work about unsupervised learning; of course, there is a lot of very impressive empirical work these days, but not that much theory. So I'm going to start with a somewhat classical approach. For this lecture and a good portion of the next one, I'll talk about a classical theoretical approach. There were many approaches before deep learning; probably the best empirical approach was latent variable models fit with EM, expectation-maximization. But for those EM-style algorithms there is very little theoretical analysis, and the analyses that exist cover special cases, and it's not clear they extend to more complex cases. So what I'm going to talk about is a different line of research, which uses the so-called method of moments. These methods don't necessarily work very well empirically, but you can analyze them in a very clean way, and the mathematical techniques are also useful in many other settings, so I think it's worth spending one lecture on this approach. It used to be the case, around 2012 or 2013, that the theoretical community thought this might be the new thing, something you could both analyze and make work empirically. It turned out that the analysis side developed very well, but the empirical side is only doing OK, not good enough to replace the EM algorithms, at least not completely. And then I'm going to talk about some of the more modern work with deep learning, for example self-training and contrastive learning; these are analyses from the last one or two years of some of the newer algorithms in deep learning. I'll spend roughly the last one and a half lectures on that. So that's the plan for the next two and a half lectures, and today I'm going to talk about the classical approach.
And by the way, another general comment: in my opinion, unsupervised learning seems to be the core of many things. It relates, for example, to semi-supervised learning, where you have some unlabeled data together with labeled data, and it also relates to unsupervised domain adaptation. My personal opinion is that in all of these questions, what you really care about is how to leverage unlabeled data, so in some sense they all reduce to unsupervised learning. So now let's get into something more concrete and set up the problem. The setup is latent variable models. The formulation is that you have a distribution p_theta parameterized by theta; how exactly it is parameterized can vary, and I'm going to introduce a few examples, but every parameter setting determines a distribution p_theta. You are then given unlabeled examples, no labels anywhere: x_1 up to x_n, sampled i.i.d. from this distribution p_theta. Your goal is to recover, or learn, theta from the data. That's the formulation. And p_theta can be, and typically is, described by a latent variable model; everything here describes a generative model in some sense. I assume you know roughly what a latent variable model is from CS229, but let me give some examples. For example, the mixture of Gaussians, probably one of the most studied distributions in machine learning. In the most general form, the parameters theta describe a bunch of things: k vectors mu_1 up to mu_k in dimension d, where mu_i is the mean of the i-th component, and a probability vector p_1 up to p_k in the simplex Delta_k in k dimensions, which is the set of vectors in dimension k with non-negative entries summing to 1; so p_1 up to p_k is a probability distribution over k items. Given these parameters, how does the model generate data? Intuitively, you just want to model the case where you have several clusters of data; you don't see the cluster colors in the data, you only see the raw inputs, and the colors just indicate which Gaussian each point came from. Mathematically, you sample x from p_theta by first sampling a cluster id i from the categorical distribution defined by p, so i takes values from 1 to k, and then, given the cluster id, you sample x from a Gaussian with mean mu_i and, let's say, identity covariance. The covariance could also be a parameter you want to learn, but here, for simplicity, I just assume all the Gaussians have the same identity covariance to make everything easier. So this is a latent variable model, where i is the latent variable: something you don't observe in the data. You only observe x, but given the latent variable, you can generate the data. There is a short sampling sketch of exactly this generative process right after this.
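Here is a short sketch of the generative process just described, with made-up parameter values; the only point is the two-stage sampling, first the latent cluster id and then the Gaussian draw.

```python
# Sample from a mixture of Gaussians: i ~ Categorical(p), x ~ N(mu_i, I).
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 3, 2, 500
mus = rng.normal(scale=3.0, size=(k, d))   # unknown means we'd like to recover
p = np.ones(k) / k                         # mixing weights on the simplex

ids = rng.choice(k, size=n, p=p)           # latent variable, never observed
X = mus[ids] + rng.normal(size=(n, d))     # observed data, covariance = I
```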
So basically there are two parts: you first generate the latent variable, and then you generate the data given it. Other examples, which I'll define mostly when I actually use them, include the HMM, the hidden Markov model, which you've probably seen if you've taken an NLP class, and ICA, independent component analysis, which I think is also covered in CS229. And there are many, many other latent variable models, Bayes nets and so forth. So that's the problem we're going to study, and now let's talk about the approach. Maybe before that, any questions? OK. The approach we're going to study is the so-called method of moments, which is actually pretty powerful. It has some drawbacks that make it empirically less appealing, but if those particular aspects don't bother you, the approach itself is quite powerful. About the history: I said this method was proposed by economists to understand economic data, but I think I misspoke; the very original proposal of the method of moments probably dates back to the 19th century, to statisticians, and then some economists later even won the Nobel Prize by generalizing these moment methods into something like what we're discussing now. In any case, the original source is definitely not machine learning, but people use it for machine learning these days in fairly sophisticated ways. Anyway, let's see how it works. I'm going to walk you through the method by showing examples. First example: a mixture of two Gaussians, so k equals 2. Let's also assume p_1 and p_2 are both one half, so the two Gaussians have the same mixing probability. And, without loss of generality, we can assume the average of the two means is 0, so they are symmetric around the origin; this is essentially without loss of generality because which point you choose as the origin doesn't really matter. Then, writing mu_1 as mu, we have mu_2 equal to minus mu. So basically, we only want to learn one parameter vector, mu, and the data comes from this mixture of two Gaussians: one Gaussian has mean mu and identity covariance, the other has mean minus mu and identity covariance. The general recipe for the method of moments is the following. First, (a) you estimate moments of x using the empirical samples; I'm going to define what exactly a moment means. Then, (b) you recover the parameters from the moments of x. And by moments, we really mean things like this: the first moment is the expectation of x, the average of the data. Let's try to do this for this particular example. The first moment is the expectation of x. And what is the expectation of x? There are two cases.
One case is that the latent variable is 1, and the other case is that it is 2, so you can look at the expectation of x under each of the two Gaussians. With half the chance you come from the first Gaussian, which has mean mu, so you get one half times mu; with half the chance you come from the second Gaussian, which has mean minus mu, so you get one half times minus mu. The total is 0. So there is no information about mu in the first moment. Not so good: our plan is to recover mu from the moments, and from the first moment we can't really get anything. So then you go to the second moment. The second moment, call it M2 (and maybe I should call the first one M1 as well), is defined to be the expectation of the outer product of x with itself, the expectation of x x transposed. Why is this called the second moment? It is a matrix whose (i, j) entry is the expectation of x_i times x_j, the expectation of the product of two coordinates of the data; you organize all of these into a matrix and call it M2. And if you compute the second moment, you can actually kind of see mu in it. How do we compute it? Same thing: with half the chance x comes from the first Gaussian, with half the chance from the second. So what is the second moment of x under the first Gaussian? This requires a little bit of calculation, so let's do it. Suppose z comes from a Gaussian with mean mu and identity covariance; let me use a different letter so we don't confuse it with x. What is the expectation of z z transposed? There are several ways to compute it; one way is to literally look at each pair of coordinates and compute expectations, which is perfectly fine. Here I'm going to be a little lazy and write that it equals the expectation of z times the expectation of z transposed, plus the covariance of z, because the covariance of z equals the second moment minus the outer product of the mean with itself. The mean is mu and the covariance is the identity, so we get mu mu transposed plus identity. So the first Gaussian contributes one half times (mu mu transposed plus identity). For the second Gaussian the second moment is actually the same, just because mu and minus mu give the same outer product, so it contributes another one half times (mu mu transposed plus identity). So eventually, M2 equals mu mu transposed plus identity. Now this looks good, because mu can, in some sense, be read out of this moment: if you have M2, you subtract the identity and you can recover mu. So basically, what you do is: first you estimate M2, because you don't necessarily know M2 exactly; you estimate it from the empirical samples, defining the empirical second moment M2 hat as the average of x_i x_i transposed over the samples. Then you recover mu from M2 hat by pretending that M2 hat is the same as M2. So, for example, how exactly do you recover mu?
One way to do it is to subtract the identity from M2 hat and then take a square root of the resulting rank-one matrix. Let me spell this out. The key question, as a warm-up, is whether you can recover mu from the exact M2 in the first place; in some sense that's a prerequisite for recovering it from M2 hat. And we have argued that you can, by subtracting the identity and taking a square root. There is another way to do it, the spectral method, which I'm introducing here because it will be useful in future cases. How do we recover mu from mu mu transposed plus identity? You take the top eigenvector of M2, which is exactly mu over the norm of mu; let's call that mu bar. And the corresponding top eigenvalue is the squared two-norm of mu plus 1. This is something you can verify relatively easily: the top eigenvector of mu mu transposed is mu bar, and adding the identity to any matrix doesn't change the eigenvectors, it only increments every eigenvalue by one. So from M2 you can recover mu, either with the simple subtract-and-square-root trick or with this eigendecomposition. This actually corresponds to the infinite-data case, because with infinite data you can literally compute M2: the empirical average equals the population quantity exactly. So the question becomes: what if you don't have infinite data, so you don't have M2, only M2 hat? Basically, you recover from M2 hat using the same algorithm, the same eigendecomposition, applied to M2 hat, and you need the algorithm to be robust to errors, in the sense that if the two matrices M2 and M2 hat are similar, applying the algorithm gives you similar answers. If that's the case, you get roughly the same answer as if you had computed on M2, so you get an approximate estimate of mu. It turns out this robustness is often fine, at least in a qualitative sense; most of the algorithms we're going to discuss are robust to some errors. So the most important thing is really the design step, and we're going to focus mostly on the infinite-data case, because most of these algorithms are robust to errors. The error analysis part is important if you actually want to publish the paper, but for the core idea you don't really have to do it, because most of the algorithms are reasonably robust. There is a short numerical sketch of this two-Gaussian recovery right after this. Any questions so far?
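Here is a minimal numpy sketch of the two-Gaussian recovery just described, with a made-up mu: form M2 hat from samples and read mu off the top eigenpair. The sign of mu is inherently not identifiable, since mu and minus mu give the same mixture.

```python
# Recover mu from M2_hat for x ~ 1/2 N(mu, I) + 1/2 N(-mu, I).
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200000
mu = rng.normal(size=d)
signs = rng.choice([-1.0, 1.0], size=n)              # latent component, +/- mu
X = signs[:, None] * mu + rng.normal(size=(n, d))    # observed samples

M2_hat = X.T @ X / n                                 # empirical second moment
eigvals, eigvecs = np.linalg.eigh(M2_hat)            # ascending eigenvalues
v = eigvecs[:, -1]                                   # top eigenvector ~ mu/||mu||
scale = np.sqrt(max(eigvals[-1] - 1.0, 0.0))         # top eigenvalue ~ ||mu||^2 + 1
mu_hat = scale * v

# small error up to the unavoidable sign flip
print(min(np.linalg.norm(mu_hat - mu), np.linalg.norm(mu_hat + mu)))
```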
So that basically completes our discussion of the mixture of two Gaussians, and now let's deal with mixtures with more components. You'll see that you cannot get away with only the first and second moments: you have to go to the third moment, which makes things a little more complicated. The general recipe is to compute M1, the expectation of x; M2, the expectation of x x transposed; and M3. What is M3, the third moment? M3 is the expectation of x tensor x tensor x. If you're not familiar with this notation: x tensor x tensor x is a third-order tensor of dimension d by d by d. Call it T; then the (i, j, k) entry of T is x_i times x_j times x_k. In this sense, x tensor x is just a rewriting of x x transposed, and x tensor x tensor x is defined analogously. More generally, you can form a tensor b tensor c: if T prime equals a tensor b tensor c, then the (i, j, k) entry of T prime is a_i times b_j times c_k. So the (i, j, k) entry of M3 is the expectation of x_i times x_j times x_k; every entry of this third-order tensor is the expectation of the product of three coordinates of the data. You can do the same for M4, M5, and so on; there is a short sketch of estimating M3 from samples below. Then, step (b), you design an algorithm, call it script A, that takes in the moments and outputs theta; you want to recover the parameter theta from the moments. And if you can do this, the last step, (c), is to show that A is robust to errors and then apply A to the empirical moments. Applying A to the empirical moments is the actual final algorithm; all the previous steps are the process of designing it. What order of moments do you have to use? That depends on how many moments are needed to recover the parameter theta. If the first and second moments suffice, then sure, two moments are fine; if you need three, then you need M3; otherwise you may even need M4. In fact, in some cases we do need M4; I think even in one of the cases we're going to discuss. Any questions? OK. I have about 15 minutes left, so let's talk about the mixture of k Gaussians, and I'm going to show you that you actually need at least the third moment when the number of components is not just two. This is very typical; in most cases you need at least the third moment, and it's actually not easy to find a case where the second moment suffices. I had to think a while to come up with this two-component mixture example; in almost all other cases you need the third moment. So let's make the setup simpler: assume a mixture of Gaussians with a uniform mixture, so all the components have probability 1/k. Basically, you sample i uniformly from 1 to k, and then you generate x from a Gaussian with mean mu_i and identity covariance. That's the generative model for our data; alternatively, you can say x is sampled from the average of these k Gaussian distributions. And in all the follow-ups, we are only going to do steps (a) and (b), for all examples in the sequel, including the examples in the next lecture.
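Here is a short sketch of step (a) for the third moment: estimating M3 empirically from samples as a d-by-d-by-d tensor. This is only the estimator, not the recovery algorithm, which is the subject of the next lecture.

```python
# Empirical third moment M3 = E[x (x) x (x) x] from samples X of shape (n, d).
import numpy as np

def third_moment(X):
    n = X.shape[0]
    # entry (i, j, k) is the empirical average of x_i * x_j * x_k
    return np.einsum('ni,nj,nk->ijk', X, X, X) / n

# usage: M3_hat = third_moment(X)   # a (d, d, d) array
```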
The robustness step, (c), you can do, but it requires too much mathematical machinery, which is not really needed for this course; (a) and (b) are really the gist, the core of what makes this work. So now let's compute the moments and see which ones are enough for us to recover the means. Again, let's compute the first moment. There are k possible cases, and each cluster shows up with probability 1/k; conditioned on cluster i, the mean is mu_i. So M1 is 1/k times the sum of the mu_i. Clearly, from the first moment you only know the average of the means; you probably wouldn't be able to recover each individual mean. That sounds reasonable. Now let's look at the second moment. Again we use the law of total expectation: you condition on the latent variable i and use the second moment of that Gaussian, which we have shown is mu_i mu_i transposed plus identity. Averaging over i from 1 to k, M2 is 1/k times the sum over i of mu_i mu_i transposed, plus identity; basically, the average of the outer products plus identity. So the question becomes: suppose you only want to use the first and second moments; can we recover the means from M1 and M2, or, more specifically, from the average of the mu_i and the average of mu_i mu_i transposed? The claim is that this is not possible, at least when k is at least 3. The argument is the following: these two quantities are just not enough information, because you are missing rotation information; there is a rotation ambiguity. Let me make that precise. To make the discussion easier, define U to be the collection of means, the d-by-k matrix whose columns are mu_1 up to mu_k; this is the matrix we want to recover. I'm claiming there exist two different sets of means that have exactly the same two quantities, the same M1 and M2, even though the means are different. I'm going to construct such a situation. Take a rotation matrix R of dimension k by k and rotate on the right: consider U versus U times R. If you rotate on the right-hand side, you get a different set of means, but I claim that U and UR have the same statistics, the same two quantities. First, the average of the outer products: 1/k times the sum of mu_i mu_i transposed is 1/k times U U transposed in our simplified notation, and this equals 1/k times (UR)(UR) transposed, just because R R transposed equals the identity; that's the definition of a rotation. So U and UR are not distinguishable from this quantity, the average of the outer products mu_i mu_i transposed. Now let's look at the first moment. To make the first moment also indistinguishable, I additionally require the rotation R to satisfy R times the all-ones vector equals the all-ones vector: a rotation that doesn't move the all-ones direction. That's easy; you have so many rotations to choose from.
It's like a globe: there is one direction you don't change, but you can still rotate within the remaining k minus 1 directions, the orthogonal complement of the all-ones vector. So you have plenty of degrees of freedom to choose many different R satisfying this, as long as k is at least 3. Now suppose R satisfies this. Then 1/k times the sum of the mu_i, which is 1/k times U times the all-ones vector, equals 1/k times U times R times the all-ones vector, exactly because I designed R this way. So from this average-of-columns statistic, the first moment, you also cannot distinguish U from UR. So U and UR are not distinguishable: they exactly match the first and second moments. That's why we need to go to M3 to uniquely identify the columns of U. A small numerical check of this rotation argument is included below, after the questions. I think we are about five minutes early, but the next part would take much more than five minutes, so I'll just stop here to see whether there are any questions. Next lecture, we will continue and solve this problem using M3. Any questions? Is there a [INAUDIBLE]? Yeah. So the question is, how do you infer the number of Gaussians? First of all, you are right that in the formulation right now I am assuming I know exactly the number of Gaussians; I'm even assuming I know the probabilities for each Gaussian, p_1 up to p_k, which are all exactly 1/k. So the question is how you infer the number of Gaussians, and maybe also how you infer p_1 through p_k. There are ways, depending on what assumptions you make, but it is definitely possible. For example, one way that works in certain cases is to infer the number of Gaussians by looking at the rank of this matrix: if you believe all the mu_i are not degenerate, that they are in general position, then the rank of this matrix will be k, especially when k is less than d, so you can infer the number of Gaussians k by looking at the rank. I'm not saying that's actually a great method, because empirically you run into other issues: maybe your conditions are not exactly matched and so forth. There are many other ways, and empirically the most typical way to estimate the number of Gaussians is using non-parametric methods, which I guess is not something we will cover here. For the theoretical setup, we are mostly interested in the clean setting where you know everything, and it's still a nontrivial problem to recover the mu_i even with knowledge of the number of Gaussians. [INAUDIBLE] Right. So as long as that happens, it wouldn't work; that's why it's probably not a great idea. But actually, typically, if you really have high-dimensional data, the mu_i are linearly independent; still, one of them could lie approximately in the span of the others, and then it becomes tricky whether you are robust to errors and so on. So I think, loosely speaking, it's reasonable, but if you really look at the details, it's not that great; that's why you sometimes need other methods. I guess if there are no other questions, I'll see you next Monday.
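As a follow-up to the identifiability argument from the end of the lecture, here is a small numerical check with made-up values: it constructs an orthogonal R with R times the all-ones vector equal to the all-ones vector and verifies that U and UR have identical first- and second-moment statistics.

```python
# U and U @ R are indistinguishable from M1 and M2 when R is orthogonal
# and fixes the all-ones vector.
import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 5
U = rng.normal(size=(d, k))                      # columns are the means mu_1..mu_k
ones = np.ones(k)

# orthonormal basis whose first column is the normalized all-ones vector
Q, _ = np.linalg.qr(np.column_stack([ones, rng.normal(size=(k, k - 1))]))
Q[:, 0] = ones / np.sqrt(k)                      # qr may flip the sign; re-fix it
# orthogonal block acting only on the complement of the all-ones direction
B, _ = np.linalg.qr(rng.normal(size=(k - 1, k - 1)))
R = Q @ np.block([[np.ones((1, 1)), np.zeros((1, k - 1))],
                  [np.zeros((k - 1, 1)), B]]) @ Q.T

print(np.allclose(R @ R.T, np.eye(k)))           # R is orthogonal
print(np.allclose(R @ ones, ones))               # R fixes the all-ones vector
U2 = U @ R
print(np.allclose(U @ ones / k, U2 @ ones / k))  # same average of the means (M1)
print(np.allclose(U @ U.T / k, U2 @ U2.T / k))   # same average outer product (M2)
```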
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_Bayesian_MetaLearning_l_2022_I_Lecture_12.txt
On Monday, we talked a lot about variational inference and how to optimize for complex distributions over data. Today, we're going to put some of that into practice in the context of meta-learning algorithms. Specifically, we'll again motivate why we might want Bayesian meta-learning algorithms in the first place, then talk about different classes of Bayesian meta-learning algorithms, including black-box and optimization-based approaches, and lastly talk about how to actually evaluate Bayesian meta-learning algorithms and how that differs from the typical few-shot learning evaluation we've seen previously in class. The goals by the end of the lecture are to understand the interpretation of meta-learning as Bayesian inference and to understand techniques for representing uncertainty over parameters versus over predictions. One quick disclaimer: like a lot of the class content, this is an active area of research, so in some cases there will potentially be more questions than answers about the algorithms we look at; but that's also been true for a lot of the content in this course. Cool, so let's start with a recap of some of the things we've seen so far, both in terms of meta-learning algorithms and in terms of casting them in the context of Bayesian graphical models. In the last lecture on meta-learning algorithms, we talked about the properties we might want from a meta-learning algorithm, specifically properties of its inner loop, the learning procedure it gives you. There were two properties we really focused on. The first was the expressive power of the inner loop: can it represent a wide range of learning procedures? The second was the consistency of the inner loop, in the sense of statistical consistency: if you give the inner loop enough data, can you expect it to solve the task and give you a consistent estimator of the task-specific parameters? These properties are important, but one property we haven't talked about much is the ability to reason about uncertainty, and that is really the focus of Bayesian meta-learning algorithms. By that, I mean the ability to reason about ambiguity that might come up in the learning process. This is important in active learning settings, in settings where you want calibrated uncertainty estimates, and in reinforcement learning settings. It also gives us approaches that, one might argue, are quite principled from the Bayesian standpoint, insofar as they maximize the likelihood under some graphical model. Cool, so now, this will also be a little bit of recap: even before we started talking about meta-learning algorithms, we talked about how training and testing tasks should share some degree of structure, and how this can be thought of as a statistical dependence on some shared latent information theta.
And we brought up this graphical model right here, where we have task-specific parameters phi_i and shared latent information theta, plus the data we can observe, which includes a support set, denoted x train and y train, and a query set, denoted x test and y test. We also talked about how, if we condition on the shared information, the task parameters become independent of one another, whereas they are not otherwise independent. As a result, conditioning on theta gives you information about phi_i, so the distribution over phi_i has lower entropy; there is less randomness because we have some knowledge of the shared structure across the different tasks. It's been a while since we talked about this, so let's walk through the thought exercises again. The first thought exercise was: if you can identify the meta-parameters theta, such as with a meta-learning algorithm, when should you expect learning phi_i to be faster than learning from scratch? Does anyone have any thoughts? When your task is from the same distribution as the one you used to learn theta? Yeah, so one example is, if the task you're trying to learn is from the same distribution as the tasks you saw during meta-training, then this shared structure should be useful for helping you infer what phi_i should be. Any other thoughts? One of the things we covered last time is that, if the tasks do share some structure, then theta is going to reduce your uncertainty over phi; whereas, if the tasks are completely independent from one another and don't share any structure, then conditioning on theta won't actually reduce the entropy. And then, lastly, we also talked about the case where, conditioned on theta, the entropy is zero. That's the case where theta tells you everything there is to know about the task-specific parameters, and you don't need any support set or any data to infer them. In that case, you can think of this a lot like memorization: your meta-learning algorithm may just memorize all of the tasks and not actually use the support set. Cool, and then jumping into Bayesian meta-learning algorithms: all the algorithms we've seen so far give us task-specific parameters in a fully deterministic way. They give us a degenerate distribution over phi_i, because they give us a single parameter vector rather than a distribution with any spread to it. And there are cases where we do actually want to generate multiple hypotheses. So this is an example that we saw on Monday, where few-shot learning problems can be ambiguous: you might have a support set where it's inherently unclear which attributes you should be paying attention to. If we can learn to generate hypotheses about the underlying function, then this can tell us if we need more labels, or if we should abstain from making a prediction on a new example because it's uncertain.
And so, this is important in safety-critical settings where you want to decide whether to make a decision, in active learning settings, and also in exploration settings. In this specific example, we'll be talking in this lecture about algorithms that can actually handle this setting and generate hypotheses, generating multiple classifiers: one that pays attention to the smiling attribute, one that pays attention to the wearing-a-hat attribute, and one that pays attention to the young-versus-old attribute. Cool, so let's get into algorithms for doing this. The first, V0 algorithm we could think of is this: we've seen all these algorithms that give us a label y test given a support set and an input x test, so what we could do is have f output the parameters of a distribution over y test. In the classification settings we've done, you've already been doing that: you don't literally output a single label, you output the probabilities for each class. So what you've been doing so far, for example in the homework, is outputting probability values for a discrete categorical distribution. But you can also output other distributions over y test. In regression problems, you could output a mean and variance; you could use a mixture density network, like what we talked about on Monday, where you output the means, variances, and weights of a mixture of Gaussians over y test; or, if the output is higher-dimensional, you could make the distribution over y test a sequence of distributions, such as an autoregressive model. And then, once you choose your distribution class over y test, you can just optimize with maximum likelihood, and that maximum likelihood objective corresponds to the outer loss of your meta-learning algorithm. A small sketch of the Gaussian-output version of this appears right after this paragraph. So this is pretty simple, and in fact we've already been doing it, which is nice: you can combine it with a variety of different meta-learning algorithms. But the downside is that this will only allow you to reason about uncertainty over the label; it won't allow you to reason about uncertainty over the underlying function. And being able to reason about uncertainty over the function is important if you want to understand how to reduce uncertainty across a set of data points, because it could be that you're very uncertain about one data point, and a label for that data point would help reduce your uncertainty for a whole host of other data points as well. If you have some notion of your uncertainty over the underlying function, then you understand how your uncertainty at different data points relates. Another downside is that you can really only capture a limited class of distributions over y test. This is actually a question that came up after lecture on Monday: I've talked about how you can output the mean and variance of a Gaussian over y test, or a categorical distribution; why can't we just output some arbitrary, complicated distribution over y test that looks something like this? The challenge that comes up is that, if you want this to be your distribution over your labels, given some input and maybe also your support set, then you need to be able to parameterize that distribution in some way, and parameterizing distributions like this ends up being very difficult.
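Here is a hedged PyTorch sketch of the V0 idea for the regression case: a head that outputs the mean and log-variance of a Gaussian over y test and is trained with the Gaussian negative log-likelihood. The architecture and shapes are made up for illustration; in practice, the input z would be whatever the meta-learner produces from the support set together with x test.

```python
# Output the parameters of a Gaussian over y_test and train with NLL.
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))   # -> (mean, log_var)

    def forward(self, z):
        mean, log_var = self.net(z).chunk(2, dim=-1)
        return mean, log_var

def gaussian_nll(mean, log_var, y):
    # maximum likelihood objective (up to a constant):
    # 0.5 * [ log var + (y - mean)^2 / var ]
    return 0.5 * (log_var + (y - mean) ** 2 / log_var.exp()).mean()

# usage with dummy tensors; z stands in for (support-set summary, x_test) features
head = GaussianHead(in_dim=32)
z = torch.randn(16, 32)
y = torch.randn(16, 1)
mean, log_var = head(z)
loss = gaussian_nll(mean, log_var, y)
loss.backward()
```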
We know how to parameterize Gaussians with a mean and variance. And we have a nice equation for that given those two variables. But once you have more complex distributions, it's very difficult to parameterize them in a way that is differentiable and is a well-formed function. Cool, yeah, and then the last thing that I'll also mention is that, generally, if you train a neural network with maximum likelihood, neural networks tend to give you very poorly calibrated uncertainty estimates. And by that, I mean that, if you have a binary classification problem and it gives you, say, a 0.9 for one class and a 0.1 for another class, oftentimes that doesn't mean that it will be correct on this data point with a likelihood of 0.9. Neural networks will generally tend to be a little bit overconfident. But even if you scale down these probability values to try to make them less confident, you're often not able to get an estimate that's actually consistent with the probability that it will be correct on the given data point. Cool, any questions up until here? [INAUDIBLE] What is the estimate of the calibration rate, because it's very subjective whether I feel it should be more uncertain or less uncertain? Yeah, so the question is, how do we measure whether or not a neural network is well calibrated? And there's a few different ways to do it. I'll skip ahead a little bit to one visualization here. There's a couple of different metrics. One metric is called the expected calibration error. But something that I think is even more detailed is what's called a reliability diagram. And what this is plotting is, on the x-axis, the confidence that the neural network outputted. And so, over here, it would say that the confidence is 0.9. And the y-axis is showing the accuracy for all the data points that have a confidence of that particular value. And, ideally, you would want this to be a diagonal line, where, if you have zero confidence, you have zero accuracy; if you have 50% confidence, you have 50% accuracy; and so forth. And so, basically, this is going to be looking at, how often do your confidence measures actually match the likelihood of getting it correct in practice? And so, the closer this is to a diagonal line, the better your calibration estimates are.
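As a rough sketch of how these two metrics are typically computed, here is one way to bin predictions by confidence, which gives both the points of a reliability diagram and an expected calibration error. The exact binning scheme and variable names here are illustrative assumptions.

```python
import numpy as np

def reliability_diagram(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare confidence to accuracy.

    confidences: predicted probability of the chosen class, shape (N,)
    correct:     1.0 if the prediction was right, else 0.0, shape (N,)
    Returns (ece, bin_conf, bin_acc); plotting bin_acc against bin_conf
    gives the reliability diagram, and a perfectly calibrated model
    lies on the diagonal.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, bin_conf, bin_acc = 0.0, [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        acc, conf = correct[mask].mean(), confidences[mask].mean()
        ece += mask.mean() * abs(acc - conf)   # expected calibration error
        bin_conf.append(conf)
        bin_acc.append(acc)
    return ece, np.array(bin_conf), np.array(bin_acc)
```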
Yeah. [INAUDIBLE] sense of uncertainty, right? Maybe there's some true aleatoric uncertainty, like it's impossible to know. And that seems like a setting where you can make pretty good estimates of your uncertainty. But then, there's just, your model is wrong, right? It's not an aleatoric thing. It's just, you weren't right. And in that case, predicting a confidence interval or prediction interval seems like asking for a better model. So does that actually work? Yeah, so first, to make sure everyone's on the same page, there are actually two kinds of uncertainty. One is uncertainty arising from noise in the data itself, like there's some inherent noise underlying the data-generating p of y given x. And that's often referred to as aleatoric uncertainty. Or I often like to use something like data uncertainty, which I think is a little bit more clear. And then, there's a second form of uncertainty that's just whether or not your model is making the correct prediction. And that's often referred to as epistemic uncertainty or model uncertainty. And this has to do with, basically, does your model know what it doesn't know? And, generally, getting good estimates of the uncertainty in the data is easier, because you can basically just look at the data and look at the frequency at which the label corresponds to a particular value. Whereas it's much harder to get a sense for what your model doesn't know, because if you could do that, then you could also often get a better model: if you're very confident that you're wrong, then you should just predict something else, rather than confidently saying, I know I'm wrong, and here's the zero-confidence estimate. And so, generally, this kind of uncertainty is much more difficult to get. But, especially with things like maximum likelihood, there are some ways for getting this kind of uncertainty. And one thing that we'll talk about in a little bit is using ensembles. And in general, this is probably one of the most effective ways to get notions of epistemic uncertainty, but the estimates are still not very good. And it is a little bit of a paradox because, if you could get really good uncertainty estimates, then you would also just be able to improve your model. Cool, so-- oh, I had a thought exercise. Yeah, so we've talked about how we can just have our meta-learner output a distribution over y and then train things with maximum likelihood. Now, my question for you is, instead of having it output a distribution over y, can we have it output a distribution over phi, given our training data, and then just train that with maximum likelihood? Does anyone have any thoughts, yeah? [INAUDIBLE] Yeah, so in order to do maximum likelihood on this distribution, you need the ground truth for phi. And we have access to ground truth labels, but we don't have access to ground truth phi's. And so, for that reason, we can't just do maximum likelihood on this distribution in order to get an estimate of the distribution. And so, that's why we need algorithms that are more interesting than maximum likelihood algorithms. Cool, and so, to be able to create these kinds of algorithms, we can rely on the probabilistic deep learning toolbox. And we went into depth on one form of tool in the toolbox, which is latent variable models and variational inference, on Monday. And we'll primarily focus on using this tool for creating Bayesian meta-learning algorithms. But there are also other tools that we could consider using. And one tool that we will briefly cover is using what's called Bayesian ensembles. And the way that ensembles work is, instead of trying to explicitly represent some distribution, like outputting a mean and a variance, they basically try to represent multiple particles that are samples from that distribution. And the way that you do this is very, very simple. Oftentimes, you just train multiple separate models on your data set. And by training multiple models on your data set with maximum likelihood, that will give you multiple copies of your given model. And it turns out that the models that you get will often be slightly different from one another. And that will give you, basically, samples from your distribution. There's also ways to make this work even better. And we'll also talk about that a little bit.
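Before getting to Bayesian neural networks, here is a rough numpy sketch of the ensemble idea just described. The shapes and the particular disagreement measure are illustrative assumptions, not the only possible choices.

```python
import numpy as np

def ensemble_predict(member_probs):
    """member_probs: shape (M, N, C), class probabilities from M
    independently trained models on N inputs.

    Returns the averaged prediction and a per-input disagreement score.
    Disagreement is zero only when every member predicts the same
    distribution, so it serves as a rough proxy for epistemic uncertainty.
    """
    eps = 1e-12
    mean_probs = member_probs.mean(axis=0)                         # (N, C)
    kl_to_mean = (member_probs
                  * (np.log(member_probs + eps) - np.log(mean_probs + eps))
                  ).sum(axis=-1)                                   # (M, N)
    return mean_probs, kl_to_mean.mean(axis=0)                     # (N, C), (N,)
```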
Beyond these first two things, there's also something called Bayesian neural networks. And the way that these work is, basically, just directly forming a distribution over neural network parameters, usually a Gaussian distribution. And so, instead of having a single neural network with weights theta, you will have a neural network with a mean for your weights and a variance for your weights. And that allows you to form a Gaussian distribution over the weights of your neural network. And, of course, if you represented a full covariance matrix over the weights of your neural network, then that would be quadratic in the number of parameters that you have. And so, that would not be a particularly appealing choice. And so, oftentimes, we just pick a single scalar variance value for each of the weights. And so, you'll have one vector that's the same dimension as your weights, which is mu, and another vector that's also the same dimension as your weights, which is sigma. And that allows you to get a fairly simple Gaussian distribution over weight space. And in this lecture, we'll be starting to see things that look kind of like Bayesian neural networks, where we are going to be forming Gaussian distributions over neural network weights. Cool, and then, I mentioned this a little bit on Monday, but there's also a couple of other distribution classes, such as normalizing flows, energy-based models, and GANs. We're not going to really talk about these today. And, for the most part, people haven't used these in the context of Bayesian meta-learning. But it means that these other ways of representing distributions could also be useful for developing new Bayesian learning algorithms. Cool, so, now, before we actually start getting into Bayesian meta-learning algorithms, I have one more recap slide to try to recap what we covered on Monday. And on Monday, we were talking about trying to represent models, represent distributions, using latent variables. And, really, the key idea is we're going to have a simple distribution over our latent variable z. We're going to then transform that into our example space x. And so, our observed variable is x, our latent variable is z, and we formulated a lower bound on the log likelihood. And there were a couple of different ways to represent this lower bound. The first was, basically, an expectation under q of log p, so, basically, trying to find a latent variable value that has maximum probability under p, but then also having an entropy term so that you're covering the distribution accurately and representing that distribution well. And, then, the second interpretation that we looked at was having one term that's basically trying to reconstruct examples from encoded latents, along with a KL divergence between the inferred latent and the prior. And these two are equivalent. And optimizing or maximizing this objective on the right-hand side is also going to maximize the log likelihood in turn. p corresponds to the model. And p of x given z is represented with a neural network. p of z is represented with a standard normal distribution, although, in practice, you can also learn the prior as well. And, then, q is the inference network or the variational distribution, which is our approximation of the posterior of z given x. And if we want to sample from this model at test time, we often throw away our inference network and only use the model p.
And in that sense, it's often primarily used as a tool for doing inference and for training the model. And we often use theta to denote the model parameters and phi to denote the variational parameters. And, then, the last thing that we talked about was that, to actually optimize for this objective, we need to be able to optimize with respect to the kind of sampling distribution right here. And the way that we do that is with the reparameterization trick, which basically allows us to reparameterize samples from a Gaussian distribution as the mean plus the standard deviation times epsilon, where epsilon is an independent random variable. And this allows us to actually optimize for the parameters of that distribution by decomposing it into these two terms. Cool, and so, one of the big questions is whether we can use ideas from here for meta-learning. And, as we've sort of hinted at, we can. In particular, in the context of meta-learning, we are going to have our observed variable. In this case, the observed variable will now correspond to the data for a particular task. And so, just like when you go from learning to meta-learning, you're treating these data sets as data points, the analogy continues here. And, then, the latent variable will correspond to the task-specific parameters phi, i. And once we've defined these two things, then we can basically just reuse everything from variational inference that we learned about before, where we can formulate a lower bound on the likelihood as an expectation under q of phi of log p of Di given phi, i, minus the KL divergence between q of phi and our prior over phi. And so, this is just basically rewriting our evidence lower bound but replacing our observed variable with Di and our latent variable with phi, i. Yeah? Why are we not writing q as phi given d [INAUDIBLE]? Yeah, so the question was, why are we not writing q as phi given d? So I wanted to write it first as this because q can really be conditioned on anything. As we talked about on Monday, it just needs to give you some estimate over phi. And so, we have a choice in terms of what we might want to condition q on. And so, we can condition it on Di. But we may also want to condition it on something else. And so, does anyone have thoughts on what we might condition q on? Yeah. [INAUDIBLE] q on the task representation? So the response is we maybe want to consider conditioning q on the task or the task representation? So what is the task representation? I feel like it makes sense to condition it on the training set. But I'm not sure. Yeah, so one thing you can do is condition it on the training data set. And this basically mimics what we saw before, except that now we're actually specifically thinking about Di train here. And the cool thing about this particular choice is that you can think of this as a neural network that takes as input the training data set and outputs a distribution over phi, i. And this starts to look a lot like what we saw in black box meta-learning, where we're training a neural network to take as input a training data set and output a set of parameters that solves that training data set. And so, in many ways, you can think about this kind of posterior inference process as kind of the inner loop of the meta-learning algorithm. Now, one other thing that I'll mention here is that we want to output a distribution over phi, i.
And so, what we'll actually do is, we can model this as a Gaussian distribution. And so, this neural network will actually output a mu and a sigma for phi, i. And this will be a lot like a Bayesian neural network, where this will represent a Gaussian distribution over the weights for task i. And this is going to be twice as large as the typical kind of output if we were just outputting a single parameter vector because, basically, the size of mu of phi is going to be equal to the size of our original parameter space. And, similarly, the size of sigma of phi will also be the same as the size of the parameter vector. Cool, and then, I guess, just to catch up a little bit on the slide, so this is what we wrote down before. We thought about what you should condition on. And we can have q condition on our training data. And then we can view q as the inner loop process. And then, here, when we actually are optimizing for the likelihood of our data for task i, what we can do is we can specifically pick held-out data for this, such that this is equal to log p of y test i, given phi, i and xi test. And this will exactly correspond to what we do in the outer loop of the meta-learning algorithm, where we evaluate how good our neural network phi, i is at making predictions for new data points. And these will be sampled from the query set. And so, this is written right here, where, basically, the training data set is used for the inference process, and the query data set is used to evaluate the likelihood of the data under those parameters. Cool, and then the last question is, where do the meta parameters come into play here? And the natural place for them to come into play, at least insofar as this corresponds to black box meta-learning, is right here, where, basically, the meta parameters-- you can view them as parameterizing this network q. So this neural network has parameters theta. You can also use them in other ways as well. So you can, for example, instead of having this be just like a standard Gaussian distribution over parameters, you could actually learn a prior over your parameters. And so, this would correspond, then, to p of phi given theta. And this could be a good choice because regularizing neural network parameters towards a zero-mean unit Gaussian distribution may not correspond to something that's a very useful set of weights for the network. And so, if we introduce the meta parameters in the inference network and the prior, then the corresponding equation looks something like this. I should also mention that you could also incorporate the meta parameters into the function that's making predictions as well. In many cases, we haven't done this. But if, for example, you're using an RNN and there's kind of some weight sharing between the meta parameters and the task-specific parameters, then that would come into play here as well. And so, for completeness, the final objective looks something like this, where we now have some kind of sum over your tasks i. You're maximizing this with respect to your meta parameters. And you're basically optimizing for how well your task parameters solve the task and, also, optimizing for your inferred distribution over the task-specific parameters matching some prior distribution. Yeah. [INAUDIBLE],, can it choose that the prior over phi as this p of theta because, in something like MAML, we initialize the parameters with theta?
And then we see-- when we see examples, we [INAUDIBLE].. And then you are updating a prior version, like the belief [INAUDIBLE]. Or we could think about theta as being a prior for phi. So the question was, can we think of-- can we just have p of phi just be theta? [INAUDIBLE] Yeah, so there's a few different ways that you could parameterize this. You could basically just learn a-- basically have-- learn a mu theta and a sigma theta that you're, basically, trying to regularize towards. I wasn't quite sure how this relates to MAML. In this case, we're purely looking at-- you can think of this as just a pure black box meta-learning approach. And we'll talk about optimization-based meta-learning approaches in a few slides. But yeah. [INAUDIBLE] make you have a prior belief of your [INAUDIBLE]. And then you see data points. So I was thinking that initializing the parameter with theta [INAUDIBLE] initial view of [INAUDIBLE] data points with the labels. And then you tune based on the data points. Or [INAUDIBLE] really makes sense to think of theta as a prior over phi or in a Bayesian sense? Yeah, so there-- in something like MAML, it kind of definitely intuitively makes sense to actually think of theta as a prior, basically, the initial parameters as a prior. And we'll cover that in a couple of slides. There's actually a way to formalize the intuition too, which is pretty cool. Cool, so once we formulate this objective, we can then kind of optimize it, just like you would optimize a variational auto-encoder. But instead of having your inference network be over some representation space, it's actually over weights. And this allows you to represent non-Gaussian distributions over y test because you now have a-- you're ultimately getting a distribution over phi, i and then sampling from that distribution. And so, yeah, it has a number of benefits. This is one benefit. And it gives you a distribution over functions, rather than only a distribution over your labels because, at the end of this, you get some kind of estimates for y test. But you also are able to use your inference network to get a distribution over task-specific parameters. Now, on this note, one thing that I should mention is that, unlike in variational auto-encoders where you might throw away q, in this case, you won't-- you'll actually be using q at test time. If you want to, basically, infer a distribution of your task-specific parameters, that's exactly what q is doing in this case. And so, the inner loop process is-- unlike in variational auto-encoders at test time, you'll first use q to get phi. And then you'll use p to get a distribution over your labels. Now, one of the downsides of this approach is that you can only represent Gaussian distributions over phi. And the reason for this is that the things like the reparameterization trick and KL divergence are primarily applicable to Gaussian distributions. And if you want something that is more expressive than a Gaussian distribution over weights, then, it is difficult to apply this sort of framework to that setting. That said, if you have a large enough neural network, especially a deep enough neural network, then, the neural network can kind of transform Gaussian-- samples of Gaussian weights into something that ends up looking fairly complex. Cool, so that was kind of version one of a Bayesian meta-learning algorithm. And it was a black box algorithm. Now let's talk a little bit about optimization-based approaches. 
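Before getting into the optimization-based approaches, here is a rough numpy sketch of the black-box variational objective just described for a single task: the inference network maps the support set to a Gaussian over phi, a reparameterized sample of phi is scored on the query set, and a closed-form Gaussian KL regularizes toward the prior. The callables and dictionary keys here are hypothetical stand-ins for real networks, and in practice you would backprop through all of this with an autodiff framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu_q, log_sigma_q, mu_p, log_sigma_p):
    """KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ), elementwise, then summed."""
    var_q, var_p = np.exp(2 * log_sigma_q), np.exp(2 * log_sigma_p)
    return np.sum(log_sigma_p - log_sigma_q
                  + (var_q + (mu_q - mu_p) ** 2) / (2 * var_p) - 0.5)

def negative_elbo_one_task(encode_support, log_lik, prior, task):
    """One-task outer objective for the black-box variational approach.

    encode_support: (x_support, y_support) -> (mu_phi, log_sigma_phi)  [the network q]
    log_lik:        (phi, x_query, y_query) -> scalar log p(y | x, phi)
    prior:          (mu_theta, log_sigma_theta) over phi               [learned or fixed]
    """
    mu_phi, log_sigma_phi = encode_support(task["x_support"], task["y_support"])
    eps = rng.standard_normal(mu_phi.shape)
    phi = mu_phi + np.exp(log_sigma_phi) * eps                 # reparameterized sample
    recon = log_lik(phi, task["x_query"], task["y_query"])     # query-set likelihood
    kl = gaussian_kl(mu_phi, log_sigma_phi, *prior)
    return -(recon - kl)   # average over a batch of tasks and minimize
```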
And there's-- in this case, there's really kind of one clear way to apply variational inference to black box methods. But for optimization-based approaches, we're actually going to study three different approaches to Bayesian meta-learning algorithms. And before we actually talk about those approaches, let's talk a little bit about simply just interpreting optimization-based meta-learning as kind of under a variational model. And, in particular, one intuition that came up before is that, if you're running gradient descent starting from some initial set of parameter vectors-- or some initial set of parameters doing something like this, you can kind of intuitively think of the initial parameters as a form of prior about the function that you're trying to solve. And if you randomly initialize, that means you have no prior knowledge. Whereas, if you kind of initialize as something that you think is pretty close to where you might want to go, and you only run a few steps of gradient descent, then, that's going to strongly affect the set of parameters that you end up with. And it turns out that you can actually formalize that a little bit. And so, here's a graphical model that's basically the same as the one that we saw before. It doesn't split up the train set and the test set. And it doesn't separately represent x and y. But, otherwise, it corresponds to the same thing that we saw before. And if we're interested in maximizing the log probability of the data given our meta parameters, you can expand this out following the graphical model to incorporate the latent variable phi, where the log likelihood is equal to the log product of the integral of p of D given phi and p of phi given theta. And, then, the last step is one way that you can try to estimate that integral. So if you have this pretty nasty integral over phi, that means you're integrating over all possible values of your task-specific parameters. So we have p of Di given phi, i and p of phi, i given theta, D phi, i. One way that you could try to approximate this integral is, basically, try to find the phi, i that has maximum probability. That's the thing that's going to have the most weight or, basically, have-- be the largest term in this integral. And you can very crudely estimate the integral as just taking that map estimate, the thing that has the highest probability, and saying that this is roughly equal to the value under the particular maximum a posteriori estimate-- whoops, no d phi. And the reason why this is interesting to think about is that there's a paper that shows, in a very simplified setting, that gradient descent with early stopping corresponds to doing map inference under a Gaussian prior with a mean at the initial parameters and a variance that depends on a number of different factors, including the number of gradient steps that you run. And so, what this means is that, insofar as that last line the map estimate is approximating this integral, you can think of an algorithm that runs gradient descent to get the map estimate as something that's approximating the log likelihood. And so, if, for example, you run an algorithm like MAML to get-- or run an algorithm like gradient descent to get the map estimate and then optimize for the likelihood of the data under that map estimate, which is what MAML does, that corresponds to the last equation here. And so, the thing that's cool about this is it provides kind of a Bayesian interpretation of what the MAML algorithm is doing. 
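As a numerical illustration of the early-stopping intuition behind this MAP interpretation (not a proof, and only in the simplified linear-regression setting where the cited result roughly applies), the sketch below shows that a few gradient steps from theta 0 stay close to theta 0, while many steps approach the unregularized least-squares solution. The data and step sizes are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(50)
theta0 = rng.standard_normal(5)          # "prior mean" = initialization

def gd(theta, n_steps, lr=0.01):
    """Plain gradient descent on the mean squared error."""
    for _ in range(n_steps):
        theta = theta - lr * 2 * X.T @ (X @ theta - y) / len(y)
    return theta

theta_ls = np.linalg.lstsq(X, y, rcond=None)[0]   # unregularized optimum
for k in [1, 10, 1000]:
    th = gd(theta0.copy(), k)
    print(k, np.linalg.norm(th - theta0), np.linalg.norm(th - theta_ls))
# Early stopping (small k) keeps theta near theta0, i.e., it behaves like a
# MAP estimate under a Gaussian prior centered at theta0 whose strength
# depends on the number of steps taken.
```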
Although, the thing that's somewhat unsatisfying about this MAP interpretation is that it doesn't allow us to actually sample from the distribution over task parameters. It allows us to interpret MAML as sort of a Bayesian approach, but it almost gets rid of a lot of the Bayesian part of Bayesian approaches, because it doesn't allow us to actually think about this distribution over task-specific parameters. And so, the next three algorithms that we'll talk about are algorithms that will allow us to actually sample from this distribution, rather than using a MAP estimate. Cool, so the first algorithm that we can think about will start from the algorithm that we derived in the black box case. And the only thing that will differ is our choice of inference network. And so, before, the inference network was just a neural network that took as input the training data set and output a mean and variance over our task-specific parameters. And remember that the inference network q, this variational distribution, can really be whatever you want it to be. And so, one thing that you could do is, instead of having it be an inference network like this, you could actually embed gradient descent inside of q. And so, what it could look like is you could start with some mean and variance. I'll call this mean and variance mu theta and sigma theta. And then what q could correspond to is running gradient descent with respect to mu theta and gradient descent with respect to sigma theta in order to get mu phi, i and sigma phi, i. And you could have this kind of gradient descent process correspond to your inference network q. And, in particular, these gradients will be with respect to mu and sigma, with respect to the loss for the training data D train, i. So q can be an arbitrary function. And instead of having q be a neural network, it could include a gradient operator inside of it. And so, what you can do is have q correspond to a few SGD steps on the mean and variance of some neural network weights with respect to D train, i. And so, in this case, this will give you a mu and sigma over phi, i. And once you define this inference network like this, you can, again, reuse the same kind of training procedure that we saw before. Yeah? [INAUDIBLE] actually being a distribution or represented in distribution because, previously, [INAUDIBLE]. Now it's just a process that gives you the distribution? Yeah, so instead of being a neural network that outputs a distribution, it will be a process that still yields a distribution. This process does still have some parameters. Specifically, it has the initial mu and sigma. And I called these mu theta and sigma theta, in that they signify the initial set of parameters that we saw in an algorithm like MAML. Any other questions? Cool, so the only thing that changed is, we're using the same objective as before, but we're redefining the inference network to have a different form. And this form is analogous to the kind of thing that we saw in optimization-based meta-learning algorithms. We're basically just stuffing gradient descent inside of our inference network and actually having gradient descent correspond to what happens inside. The other thing that's different from a standard MAML algorithm is that our meta parameters now are going to correspond to both mu and sigma here, so we won't just have a single theta vector.
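Here is a rough numpy sketch of this gradient-descent-as-inference-network idea for a single task. The gradient callable is a hypothetical stand-in for backprop through sampled weights on the support set, and the step count and learning rate are arbitrary illustrative choices.

```python
import numpy as np

def inner_loop_q(mu_theta, log_sigma_theta, grad_loss, n_steps=5, lr=0.01):
    """Inference 'network' that is itself gradient descent.

    grad_loss(mu, log_sigma) returns gradients of the expected support-set
    loss with respect to (mu, log_sigma). The starting point
    (mu_theta, log_sigma_theta) plays the role of the meta parameters,
    and the result parameterizes q(phi | D_train) for this task.
    """
    mu, log_sigma = mu_theta.copy(), log_sigma_theta.copy()
    for _ in range(n_steps):
        g_mu, g_log_sigma = grad_loss(mu, log_sigma)
        mu = mu - lr * g_mu
        log_sigma = log_sigma - lr * g_log_sigma
    return mu, log_sigma    # mu_phi, log_sigma_phi for this task

def sample_phi(mu_phi, log_sigma_phi, rng):
    # Reparameterized sample, so the outer loop can backprop into mu and sigma
    # and all the way back into mu_theta and sigma_theta.
    return mu_phi + np.exp(log_sigma_phi) * rng.standard_normal(mu_phi.shape)
```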
Instead, we'll have two of these theta vectors, one that corresponds to the mean and one that corresponds to the variance. Now, the cool thing about this is that, at test time, once we want to do inference to infer our task-specific parameters, we're just going to be running gradient descent. So instead of doing inference by passing the data through a neural network to get our distribution over the task-specific parameters, we're going to be running gradient descent. This means that we should be a little bit more robust to tasks that are a little bit out of distribution at test time. The downside of this is that we are, similar to before, going to end up with a Gaussian distribution over our task-specific parameters. And this means that, if we wanted to represent a more complex distribution over our task-specific parameters, we would be out of luck in that case. And it's, again, important for it to be Gaussian for two reasons: one, that it allows us to evaluate the KL divergence in closed form; and second, it allows us to use the reparameterization trick to backprop into this. So when we sample a phi from q, we're going to be sampling a phi from a Gaussian distribution parameterized by mu phi and sigma phi. And we can, again, use the same kind of reparameterization trick and have this be equal to mu phi plus epsilon times sigma phi, where epsilon is sampled from a Gaussian with mean 0 and unit variance. And by making this a Gaussian distribution, that allows us to use this reparameterization trick and backprop into mu phi and sigma phi, i, but also backprop all the way back into mu theta and sigma theta. Yeah? In general, the meta parameters can be more than [INAUDIBLE] et cetera, et cetera. So that still fits into this framework, right? That doesn't-- [INAUDIBLE] I just wanted to confirm that for students. Yeah, so the question was, in general, in optimization-based meta-learning, the meta parameters can correspond to not just the initialization but other things, like the learning rate and so forth. And, yeah, that also fits well into this framework. So here, I'm writing mu theta and sigma theta as the main meta parameters. But you can also optimize other things. And so, for example, you could optimize the learning rate here and have that be a part of the meta parameters as well. Cool, so then there's a question of whether we can model a non-Gaussian posterior. And we'll look at two different approaches for doing this. The first will be to use ensembles. And, in particular, what we can do is, if we want to get a distribution over phi, i, we can basically just train an ensemble of MAMLs: specifically, just train M independent MAML models, train them independently with different mini-batches of tasks and so forth, to get an ensemble of MAMLs. And it's also worth noting that you can use ensembles with black box or non-parametric methods as well. And this will give you a distribution over meta parameters. And then, when you run gradient descent starting from those initializations, you'll also get a distribution over phi, i. One challenge with ensembles is that, if you just train independently, oftentimes training will result in a set of parameters or meta parameters that are very similar to one another.
And one approach for dealing with this is to try to more actively diversify the weights that you get and try to optimize for a more diverse ensemble of MAMLs. And so, the way this can work is, rather than just crudely trying to say, oh, the parameter vectors should be independent, there's something called Stein Variational Gradient Descent that actually actively pushes particles away from each other with a particular choice of kernel. And so, the way that this method is going to work is, when we run gradient descent in the inner loop, we are not just going to run gradient descent on our support set. So, typically, we'll do something like theta minus alpha grad theta, L of theta with respect to D train, and this will correspond to phi, i. And what we're going to do that's a little bit different is we're going to say that we want phi, i to be kind of different from the other members of our ensemble. And so, we're going to optimize for multiple particles. And so, I'll use m to denote the particle number. And we're going to, additionally, have a term that says that we want to push away the value of our parameters and make it different from other values. And so, we can measure the distance between the current parameters that we're optimizing and the values of the other particles, for m prime not equal to m, for that particular task. And so, this is what the inner loop is going to look like. And, then, the outer loop will be basically the same as before. The equations here are just copied from the paper, and so they use slightly different notation. But the outer loop is just going to correspond to optimizing for the likelihood of phi, i on the test set for task i. And you're going to be doing this for all of the different parameters. And so, you're going to sum over both the tasks i, as well as the ensemble members. And so, the only thing, really, that's changing here is, first, that we're going to have multiple particles that we're optimizing for in the inner loop, and we're going to push them apart by adding this additional term in our inner loop objective. Yeah. What kind of kernels are most useful here? Yeah, the question is, what kind of kernels are most useful here? I can't actually remember exactly what they used in the paper here. I think that, typically, something just like looking at the Euclidean distance is reasonable here. I do think that distances in parameter space are not always a good measure of functional similarity because, if there are some parts of the weight vector that are not used by the network, then you can push those parts away a lot and just ignore the other parts. And that will lead to two functions that are very different in weight space but are functionally identical. And so, you want your kernel to try to pay attention to all of the parameters in the parameter vector to try to prevent that case. And if you can measure some notion of functionality, for example, looking at something like the Fisher information matrix, then that may lead to better performance. But those measures often end up being computationally expensive or crude estimates of something that's computationally expensive.
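Here is a rough numpy sketch of this kind of repulsive inner-loop update. The RBF kernel and the weighting are illustrative assumptions (as noted above, I don't recall exactly which kernel the paper uses, and full Stein Variational Gradient Descent has a more specific kernel-weighted form); the sketch just captures the idea of descending on the support-set loss plus a term that pushes the particles apart.

```python
import numpy as np

def similarity_grad(phi_m, others, bandwidth=1.0):
    """Gradient w.r.t. phi_m of sum_{m'} k(phi_m, phi_{m'}) with an RBF kernel.

    Descending on this similarity term (i.e., adding it to the inner-loop
    loss) moves particle m away from the other particles.
    """
    grad = np.zeros_like(phi_m)
    for phi_other in others:
        diff = phi_m - phi_other
        k = np.exp(-np.sum(diff ** 2) / (2 * bandwidth ** 2))
        grad += -k * diff / bandwidth ** 2
    return grad

def repulsive_inner_step(particles, grad_support_loss, lr=0.01, beta=0.1):
    """One inner-loop step for an ensemble of particles on a single task.

    particles: list of parameter vectors, one per ensemble member
    grad_support_loss: phi -> gradient of the support-set loss at phi
    """
    new_particles = []
    for m, phi in enumerate(particles):
        others = [p for j, p in enumerate(particles) if j != m]
        step = grad_support_loss(phi) + beta * similarity_grad(phi, others)
        new_particles.append(phi - lr * step)   # loss descent + repulsion
    return new_particles
```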
Yeah? [INAUDIBLE] because we won't be doing-- we won't have [INAUDIBLE]? So you're saying that what happens at meta test time might be different? Or-- [INAUDIBLE] So at meta test time, you actually also run this same inner loop as well. And so, basically, what you want to happen is you'll have a single set of initial parameters, and you'll want this to lead to multiple task-specific parameters for that task. And this will basically represent samples from p of phi, i given theta and D train, i. And so, just like before, on the whiteboard that's behind it, we were getting this sort of distribution with our inference network. In this case, we're representing this distribution with these different samples or these different ensemble members. Cool, so the benefit of this sort of approach is it's pretty simple to implement. It tends to work pretty well. Ensembles, especially, are one of the most effective methods for model uncertainty or epistemic uncertainty and can also give you non-Gaussian distributions. The phi, i that you end up with could be samples from a distribution that's much more complex than a Gaussian distribution. The downside is that you do need to maintain m different model instances, and this can get pretty expensive at times. But one way to try to mitigate this is to do gradient descent only on the last layer. And, then, you only have to maintain m different copies of the last layer, rather than m different copies of the entire network. Cool, now, from there, we'll cover one more Bayesian meta-learning algorithm, which tries to give us a non-Gaussian distribution over all of the parameters in a way that's a bit cheaper than maintaining m different copies. And some of the intuition behind this last approach is to try to sample parameter vectors with a procedure that looks a little bit like Hamiltonian Monte Carlo. And the way that Hamiltonian Monte Carlo works is you typically first add noise to your parameters and then run gradient descent on your parameters-- typically, you actually iterate that process-- in order to ultimately draw samples from some distribution. And some of the intuition behind this is we're going to try to learn a prior where, if we randomly kick the parameters or randomly add noise to the parameters, that will put us in different modes of the distribution. And so, if we think about the kind of example that we talked about before, where different attributes would correspond to different classifiers-- for example, if we wanted to learn a classifier for smiling or wearing a hat, versus a classifier for smiling and young-- we'd like to learn a prior theta where, if we add noise in one direction, it puts us in one basin, and if we add noise in the other, it puts us in a different mode. And so, ideally, we'd like to be able to learn a theta such that, if we add noise and then run gradient descent, we'll get different samples from this distribution that end up being classifiers that correspond to these different modes of the distribution. So that's the high-level intuition. In terms of how we might actually do this, first, we're actually going to have a distribution over our meta parameters. This is going to be different from the things that we saw before. Before, we just had a single vector of meta parameters that were basically parameterizing our prior and our inference network. And here, we're going to actually also have a distribution over these meta parameters. And then we're going to also have a distribution of phi, i given theta.
And our goal will be to sample from the posterior of phi, i given the support set and the test example. We'd like to also be able to sample from this distribution without the test example. So, in practice, we can observe the test example. And that distribution up there corresponds to inferring phi, i from everything that we can observe. Although, in practice, it'd be nice if we had a single parameter vector that worked well for any test example, not just the test example that is observed at any given point in time. And then from there, we can write down our distribution over phi, i as, again, kind of the product of p of theta, p of phi, i given theta, and then p of y train, given x train and phi, i and, then, of course, in this case, integrating out theta. So this is just the product of the distributions that underlie this graphical model. But, of course, this integral, like many of the other integrals that we've seen in the class, is completely intractable because it has to integrate out over kind of all the parameters in this Gaussian distribution. And so, the last way that we're going to try to deal with an intractable integral, in this case, is trying to crudely estimate this distribution right here of phi given theta and x train, y train. And if we did know this distribution, then, sampling would be much-- inferring phi, i given x train of y train would be much easier because what we could do is we could first sample theta and then sample from this distribution right here. We can just-- and that's what's called ancestral sampling, where you sample one and then sample from the next distribution. And we, basically, transform our graphical model into this one, where you first sample from theta. And then we sample from the distribution that feeds into phi, i. And that would give us a sample from the kind of distribution of p phi, i given x train and y train. Now, of course, we don't actually know this distribution of p of phi, i given theta and x train, y train. But what we could do is we could try to estimate this as a map estimate, just like we saw in the kind of Bayesian interpretation of MAML when we first started talking about optimization-based Bayesian meta-learning algorithms. And the map estimate is a crude estimate of this distribution. It's only giving you the mode of this distribution, rather than a sample from the distribution. But it's also a very convenient approximation to this distribution because, once we have this approximation, then we can just-- then it's very easy to sample from p of phi, i. And, in particular, once we approximate this with map inference and specifically with a few steps of gradient descent, then, at test time, what happens is we will basically first sample from theta. And then, to sample-- that then next we want a sample from p of phi given theta and Di train. And to do this, we'll then take the sample that we took and then very crudely approximate this as running gradient descent starting from the sample. Maybe, I'll denote this as theta prime, just to differentiate it from p of theta. And then we're going to run gradient descent on Di train. And this kind of corresponds exactly to the procedure that we saw on the previous slide, where we first sample and then run gradient descent. And, then, sampling is easy. What about training? You can also actually just train this with amortized variational inference. I'm going to skip the exact training procedure for the sake of time. 
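A rough numpy sketch of this sample-then-adapt procedure at test time is below, with training details aside as just noted. The gradient callable, step counts, and learning rate are hypothetical stand-ins; the point is only the structure of sampling theta from a Gaussian and then running a few gradient steps as a crude MAP estimate of phi given that theta.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task_params(mu_theta, log_sigma_theta, grad_support_loss,
                       n_samples=5, n_steps=5, lr=0.01):
    """Draw approximate samples of phi_i for one task.

    Step 1: add noise to the meta parameters (a sample of theta).
    Step 2: run a few gradient steps on the support set starting from that
    sample, which crudely approximates MAP inference of phi given theta.
    grad_support_loss: phi -> gradient of the support-set loss at phi.
    """
    samples = []
    for _ in range(n_samples):
        theta = mu_theta + np.exp(log_sigma_theta) * rng.standard_normal(mu_theta.shape)
        phi = theta.copy()
        for _ in range(n_steps):
            phi = phi - lr * grad_support_loss(phi)
        samples.append(phi)
    return samples   # different noise draws can land in different modes
```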
If you're interested in learning more, we can talk about it in office hours, or you can take a look at the paper. And so, again, sampling from p of theta and then running gradient descent looks like what we talked about before. In particular, sampling from p of theta corresponds to taking mu theta and adding noise that is scaled by the standard deviation. And so, that will correspond to, basically, adding noise to mu theta. And then step two corresponds to running gradient descent, which will hopefully get us into these two modes of the distribution or, ideally, more than two modes of the distribution, if we have more ambiguity than that. Do you have a question? [INAUDIBLE] So what do you mean by [INAUDIBLE] into the two [INAUDIBLE]? Does it mean [INAUDIBLE], or do you just do the process twice? Right, so the question is, does this mean that we're keeping two copies, or are we doing the process twice? So our goal is really to be able to sample multiple different task-specific parameters. And the way that we'll do that is we will just repeat this process multiple times, depending on the number of samples that we want. Basically, this corresponds to the number of samples that you want. You might start by collecting five samples, for example, by adding noise, and then running gradient descent, and repeating that five times to get five different classifiers. And if you see that you have a lot of variance among your five different classifiers, then you could continue to sample other classifiers from there. The upside of this is that it will give you a non-Gaussian posterior. It's fairly simple at test time. You just add noise and run gradient descent. You also only need to train one model instance. The downside is that it leads to a more complex training or meta-training procedure. Yeah? So [INAUDIBLE] sample [INAUDIBLE] because then [INAUDIBLE], so [INAUDIBLE]. Yeah, so to generate multiple samples from phi, you'll sample multiple thetas and run gradient descent from each of those thetas. [INAUDIBLE] only one [INAUDIBLE]? Oh, sorry, you only need one meta-training model instance. Whereas, when you train an ensemble of MAMLs, you'll need multiple copies of the meta parameters. Cool, so to summarize all the methods that we've talked about, V0 is just to output a distribution over y test, which is simple, but it can't reason about uncertainty over function space. And, then, all the methods that we talked about after V0 were actually giving us distributions over classifiers or distributions over predictors. We talked about a black box approach that used a latent variable model over phi with amortized variational inference. And this allowed us to represent non-Gaussian distributions over y but was restricted to Gaussian distributions over phi. And then we also talked about three different optimization-based meta-learning algorithms. The first was just to stuff gradient descent into our inference network. And this was pretty simple. It was a very simple modification of the black box approach. But it meant that p of phi, i given theta had to be modeled as a Gaussian. We also talked about an ensemble, which is pretty simple and can model non-Gaussian distributions but requires multiple model instances.
And then the more hybrid approach that we talked about, which was combining this MAP estimate and variational inference, can give you a non-Gaussian posterior but involves a more complex training procedure. In the last five minutes, I'd like to talk a little bit about evaluation of these approaches. One thing that you could consider doing is to try to use standard benchmarks, like MiniImageNet or Omniglot. And these are standardized and have real images. And they're a good check that your Bayesian meta-learning approach didn't break the meta-learning algorithm that you had before. But they aren't really the best metric of how good your Bayesian meta-learning algorithm is. First, metrics like accuracy won't evaluate whether your uncertainty estimates are calibrated. And second, the tasks may not actually exhibit that much ambiguity, and so they may not stress test your ability to actually model distributions over task parameters. And then, lastly, it may also be that the uncertainty just isn't useful on those data sets. And it's good to actually evaluate algorithms in settings where they might be practically useful. So what about better problems and metrics? It really depends on what you might care about. So one thing that you could look at, which I think is kind of nice because you can actually just visualize the functions that the method is learning, is to look at some ambiguous problems in one or two dimensions and actually visualize the functions that it gives you. So this is a few-shot regression problem where the purple triangles correspond to the support examples. And the tasks correspond to either sinusoids or linear functions. In some of the tasks, there's very little ambiguity, and in other tasks, there's actually a lot more ambiguity. So the middle function, for example, is something where it could actually correspond to either a linear function or a sinusoid. And you see that the process that samples these functions will sometimes give you linear functions and sometimes give you sinusoidal functions. You could also formulate an ambiguous classification task. This is a setting where all the tasks corresponded to these circular decision boundaries. And the algorithm is actually only given one example in the support set, just one positive example that's indicated by the green plus sign. And that green plus sign may be anywhere within the decision boundary. And you see that these dashed lines are showing the decision boundaries corresponding to the functions phi, i that you're sampling. And you can see visually that it's giving you a fairly diverse sample of these functions. So this is one toy visualization that gives you the ability to interpret what's going on, but it's also very toy. Another thing that you could look at is ambiguous generation tasks. So this is something where the goal is to learn a generative model over different viewpoints of an object. And it's given only one viewpoint in the support set, which is shown on the left. And the goal is to generate lots of other viewpoints, which is shown in the middle of the slide. And here, you can actually just look at the samples in comparison to a C-VAE. This Bayesian meta-learning algorithm called VERSA, which is a lot like the black box approach that we talked about, is much better able to generate samples from the distributions.
And it also gives you a lower mean squared error and a higher SSIM, which is another measure of reconstruction quality where higher is better. A third thing that you could look at is both accuracy as well as mode coverage and likelihood. And so, you can take this CelebA task that we saw earlier, where the tasks are kind of purposely ambiguous: for some tasks, basically, the support set is not enough information to figure out what the positive examples and negative examples should be. And there's a number of metrics that you can look at in combination. One is just accuracy on classifying new examples. But you can also measure-- there's three possible classifiers for it to consider that are, in this case, visualized in B. And you can try to see, is it learning all three of those classifiers? And we see, in this example, that everything with the pink box is showing things that were classified as positive by the classifier. And you can see that it does actually give you three classifiers that, more or less, in this case, cover the three ground truth classifiers underlying that data. And then you could also look at average negative log likelihood. Yeah? [INAUDIBLE] generation task, there's maybe more than one dimension of uncertainty. There are lots of ways that you could be uncertain about whether your image is right. And are there interesting trade-offs there, like, do these models have different-- are they uncertain in different ways, basically? So the question was: there are multiple possible dimensions of ambiguity in tasks like this, and are there different trade-offs that models can make? I guess I do think that different algorithms will have different properties. I certainly think that things like VERSA are strictly better than C-VAE in this case. I don't know of any examples off the top of my head of interesting trade-offs between algorithms. But I can think about it and-- yeah. Just can you summarize with a number if you collapse all of it into one thing? But if-- Yeah, absolutely, and one of the things that I actually wanted to convey in these slides is that, sometimes, the numbers don't actually tell you that much about what's actually going on. And so, actually making these kinds of visualizations, I think, is really helpful for understanding what's beyond the number, basically. And I guess it's also useful to be kind of creative about the numbers that you measure because, when you do look at the data and notice different things, you can try to actually come up with metrics like coverage: what is the number of classifiers that you're representing that do actually capture the kinds of things that you might want to see? And, I guess, in this example, it is actually possible to get classifiers that have very low coverage but pretty high accuracy and, likewise, things that have really good coverage but slightly lower accuracy. And that is one example of a trade-off that you can make, depending on the hyperparameters of the algorithm. Cool, and the second-to-last one is reliability diagrams, which we actually already talked about. And then the last thing you can look at is active learning settings, which is, if you allow the model to actively query a few additional data points, how much does the error drop, and how much does the accuracy increase?
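One simple way such an active-learning evaluation can work is to let the sampled classifiers pick the next point to label. Here is a minimal numpy sketch of one possible acquisition rule; it is only an illustrative choice (max entropy of the averaged prediction), and other rules based on disagreement between the samples would target epistemic uncertainty more directly.

```python
import numpy as np

def pick_query(sampled_probs):
    """Choose which unlabeled point to ask a label for.

    sampled_probs: shape (S, N, C) -- class probabilities from S sampled
    task-specific classifiers on N unlabeled candidates.
    Picks the point whose averaged prediction has the highest entropy,
    i.e., where the sampled classifiers are collectively most uncertain.
    """
    eps = 1e-12
    mean_probs = sampled_probs.mean(axis=0)                      # (N, C)
    entropy = -(mean_probs * np.log(mean_probs + eps)).sum(-1)   # (N,)
    return int(np.argmax(entropy))
```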
In those active-learning curves, you can see that Bayesian meta-learning algorithms, like some of the ones that we saw today, are able to drop the error rate and increase the accuracy faster than an algorithm that chooses data points at random or an algorithm that doesn't have good estimates of uncertainty. Cool, so yeah, that's it for today. We talked about Bayesian meta-learning algorithms and techniques for representing uncertainty over parameters. Next week, we're going to talk about domain adaptation and domain generalization, which is a pretty cool special case of the multitask and meta-learning problem setting. The following week, we'll have our last main technical lecture, on lifelong learning. And then we'll have two guest lectures and a final lecture on open problems and future directions. Before that last week, we're going to have Thanksgiving, so we can eat some turkey. And yeah, as a reminder, homework three is due on Friday.
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_13_Neural_Tangent_Kernel.txt
OK, guys, let's get started. So I think last week I spent some time reading the feedback from the survey. I've been going through all of it. So I guess I'm not going to discuss every point there. All the points are well taken. And thanks for all the very helpful feedback. And for some of those, I'm going to improve. I guess there are also some conflicting requests, which are still very understandable because different people have different preferences. That's completely fine. But I guess I'm just saying that I can't necessarily address all the possible requests, just because sometimes there are some constraints. But of course, sometimes I think even conflicting requests can be addressed if you are creative. I will try to do that as well. I guess there's one thing I want to discuss a little bit, which I think might be useful for you; I'm not trying to find any excuses for the lecture. But I think some people mentioned that it's a little bit hard to follow the notes, well, at least in the lecture. I can completely understand that. I wrote pretty fast, and I'm going to slow down a little bit, at least to make the layout and the format a bit cleaner and easier to read. But I think, in my opinion-- of course, I'm not saying that you have to really follow my way of taking courses. I typically don't take a lot of notes. I think at least this course I tried to design so that you don't have to take all the notes yourself, just because we're going to have scribe notes later, and some of the scribe notes are already there. And when I listen to a theoretical lecture, I try to think more so that I can remember things in my head a little bit, because I feel that, at least for me, taking all the notes takes too much energy. I'm not sure that this is useful for everyone-- I don't think it can be useful for everyone-- but maybe you can try it a little bit, just to see whether it's easier if you take a bit fewer notes and try to remember a little more. Otherwise, I'm going to slow down a little bit, at least in terms of the writing, for sure. And also, probably, I'm going to slow down a little bit in terms of the overall pace as well, given some of the feedback saying that some of the lectures are a little bit too fast. And also, another thing is about the homework questions. Indeed, I think I probably made the mistake that a few subquestions are a bit too difficult. They were bonus questions in the past offerings, and this quarter I thought that, since you have a team of three people, maybe I could put them as regular points. But still, they are probably a little bit too difficult. They require some kind of fix, as you probably noticed, some [INAUDIBLE]. Right. So, yeah. But I guess I checked the last homework, and I think there is nothing like that. Most of the questions probably shouldn't require any super special tricks about common topics. And I guess another thing is that, if you want to take some bonus points, there are other ways, for example, doing some scribe notes or improving existing lectures. If you don't care about an A-plus, I think the bonus points are always worth the same as the regular points, in some sense, if you look at the grading policy. At least, from your perspective, they're worth the same as the regular points. Basically, the grading policy is that we first decide the cutoff before the bonus points, and then the bonus points can only give you a better letter grade. OK.
Anyway, so there are other very important, very nice pieces of feedback, which I'm going to incorporate in the lectures as well. I'm not going to discuss all of those points, just to save some time. OK, so maybe let's get into the technical part if there are no other questions or discussions. So last Wednesday I was sick, and we asked you to watch the video online. Roughly speaking, what we did in the video is talk about nonconvex optimization. The main point there was that, if you have the so-called property that all local minima are global, then you can find a global minimum. Of course, there are technical things, like the so-called strict saddle property, which we discussed in the video, and some other subtleties, but that's the main point. So basically, you only have to show that this property is true, and then you can find a global minimum of the nonconvex function. This kind of approach, from a broader point of view, has been successful. And in some sense, what I'm going to discuss next is another example of this, but with some special subtleties. What we showed last time is a statement that is really true globally: "all local minima are global" is a statement about the entire space. Today, we're only going to look at a special part of the space. So the function we discuss today looks something like this: it has some complex part which you don't know how to characterize, but you identify a small part, a special region, where all local minima are global. And there is actually a good global minimum there, so you just work in that region. That's the connection to the previous lecture. There are other issues with this kind of approach, which we discussed a little bit in one of the outline lectures. The limitation is that you identify this region where everything is nice, the landscape is just so nice, but is this the region you really care about? If you only care about finding a global minimum of the training loss, then yes, this is a fine region, because you find a global minimum of the training loss. But if you care about other properties, like generalization performance, then it might not be the right region to focus on. For today's lecture, we don't care about that. We just go through how this works, and then we talk about limitations. And in future lectures, we'll talk about ways to improve upon this or fix the issues with this approach. OK, so that's a very rough, high-level overview. Also, by the way, if you haven't seen my notes or the announcement on Ed, there are actually two videos that we asked you to watch to make up for the last lecture. One of them is a full lecture, and the other one is about 15 minutes. They are about this nonconvex optimization, all-local-minima-are-global kind of phenomenon. And this does relate to one of the homework questions. The question itself is self-contained, but I think it's useful to know the basic ideas, even the basic proof ideas, in those two videos so that you can better see how to do the homework question. OK.
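Here is a tiny illustration I am adding of the "all local minima are global" phenomenon from the previous lecture (a toy function of my own choosing, not from the lecture or the notes): the loss (||w||^2 - 1)^2 is nonconvex, but every local minimum (the whole unit sphere) is global, so plain gradient descent from a generic starting point drives the loss to 0.

```python
import numpy as np

rng = np.random.default_rng(7)

def loss(w):
    # A simple nonconvex function whose local minima are all global:
    # every point on the unit sphere ||w|| = 1 achieves the minimum value 0.
    return (w @ w - 1.0) ** 2

def grad(w):
    return 4.0 * (w @ w - 1.0) * w

w = rng.normal(size=3)            # arbitrary (generically non-zero) starting point
for _ in range(500):
    w -= 0.01 * grad(w)           # plain gradient descent

print("final loss:", loss(w), " ||w|| =", np.linalg.norm(w))  # loss ~ 0, norm ~ 1
```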
So today, let's talk about the thing we are-- about the special region thing. And this is also often called neural tangent kernel approach. I guess the name doesn't really-- so far, just think of this as a placeholder. I'm going to explain why this is called neural tangent kernel. So the basic idea is that you look at some special place around a neighborhood of your initialization, and you do some Taylor expansion. So Taylor expanding-- and this works for any nonlinear function. So suppose you have you have a nonlinear-- or even linear, but non-linear would be the most interesting case-- a nonlinear model f theta x. And then you do a Taylor expansion around initialization. Say that's 0. And when you Taylor expand the model at the initialization-- so your model is f theta x. You Taylor expand with respect to the parameters, but not input. So the input is fixed, and the parameter is the variable. So say that 0 is the reference point. And then you look at the gradient with vector theta evaluate at theta 0. This is the first order gradient times theta minus 0. So this is the first order Taylor expansion. And then you say you have some higher order terms, which we are going to ignore. And once you do this, you can define maybe this one. Let's call this g theta x. Of course, it also depends on theta 0. But let's say theta is the variable, so that 0 is fixed. So this is a function of theta. So this is a linear function. So if you define this, then g theta x is a linear function in theta. Because where theta shows up-- see, it only shows up here. And it shows up linearly. And basically, you linearize your model. And you can also, I guess, define the other theta, which is the difference between theta and theta 0. I guess, technically, you should call this-- maybe this is a affine function because there is a constant term, affine function. In theta or in the other theta, they are not too different. I guess just want to introduce this notation, delta theta. And so f theta 0, this reference point, this is a constant from this perspective, right? It's a constant that doesn't depend-- constant for fixed x. It doesn't change as you change theta. And in some sense, this is just not that important, so not very important. Because it's a constant. And sometimes for convenience, you choose-- so choose theta 0 such that f theta 0 x is equal to 0 for every x. How do you do it? So you do it-- so if you really want to do this, you need to-- for example, what you can do is you can design network that you split your networking into two parts. So maybe you have-- suppose before you have a network, you have all of these connections. And then for the second layer-- maybe for some layers, split it into two halves, right? So you have something like this and then something like this. And you do the same thing in these two halves, exactly the same thing in these two halves. And then you put plus 1 here and minus 1 here, so that they got canceled. So that you have a still somewhat random initialization, but the initialization has a functionality that the functionality of this initial model is 0. I'm not sure whether my drawing makes any sense. I see some confusion in your face, but this is supposed to be something simple. For example, let's say you have some of linear models, some of-- sorry, two layer networks, sum of ai times sigma of wi transpose x, i from 1 to i. Suppose this is a model. And what you can do is you can say you added to minus ai sigma wi transpose x. So you have 2n neurons. 
And the wi's are the same, and the ai's are paired. So then this becomes 0, right? So you have 2n neurons, one part is the same as the other part in terms of w, and in terms of a they are negations of each other, so you make the output 0. And you still have relatively good randomness: you can still choose the wi's to be random, as long as the two halves are tied in this way. Anyway, this is not a super important point. And even if you don't do this, you can still somewhat get away with it, because this f theta 0 x is a constant. So basically, from now on, we're going to assume f theta 0 x is 0 in most cases. And if you think about it, you can take y prime to be y minus this constant, which we are going to assume is 0; but so far this equation we can still think of as generic. So then you get a linear function: it's going to be grad theta f theta 0 x, transposed, times delta theta, and this becomes a linear function of delta theta. So delta theta you can think of as the parameter, and this gradient you can think of as a feature map. This is the same as the feature map phi of x that we discussed, for example, in CS229 when you have a kernel method. And this feature map is something that doesn't depend on the parameter, right? Theta 0 is fixed already, so the gradient grad theta f theta 0 of x is really just a fixed function of x. Let's call this fixed function phi of x; it's determined by the architecture and theta 0, but it doesn't depend on delta theta. So in some sense, this just becomes a kernel method. I guess, for simplicity, if you assume f theta 0 x is 0, then y and y prime are the same. So basically, you are fitting a linear function to your target, and this becomes a kernel method. You can define the kernel k of x, x prime to be the inner product of the features, phi of x transpose phi of x prime, which is the inner product of these two gradients. And why is this called the neural tangent kernel? The reason is that this is the tangent of the network-- it's the gradient of the network. That's why it's called the neural tangent kernel: the feature is the gradient of the network. Anyway, neural tangent kernel is just the name. OK. So suppose we just use g theta x instead of the original model. Then basically you've got a kernel method, a linear model on top of the features, right? And for the loss function, suppose you believe that theta is close to theta 0, so delta theta is small. Then you can also intuitively say, OK, my original loss function, which is a function of the model output and y, is probably approximately equal to my new loss function, which is a function of g theta of x and y. And this is linear, and the whole thing is convex because the loss l is convex: a convex function composed with a linear function is still convex. But this is only when theta is very close to theta 0. So the remaining question is really: how valid is this approximation? Because everything sounds nice-- after we do this, everything becomes super easy-- but in what cases can this be valid? Go ahead. [INAUDIBLE] the inner product [INAUDIBLE]. Yeah. So the inner product is just the typical inner product, because these two are just vectors, right? [INAUDIBLE] OK, so what's the dimensionality here? The gradient of, say, this thing is P-dimensional if theta is in R P-- see the small sketch below for a concrete picture.
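To make this concrete, here is a minimal numerical sketch I am adding (the sizes, activation, and function names are my own illustrative choices, not from the lecture or notes): a two-layer tanh network with the paired plus/minus-one initialization described above, so the initial output is zero; its gradient feature map phi(x) = grad_theta f at theta 0; the linearization g; and the tangent kernel as an inner product of gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_neurons = 5, 8               # input dim, neurons per half (2*n_neurons total)

# Paired initialization: duplicate the first-layer weights and give the second
# layer opposite signs, so the initial function f_{theta_0}(x) is identically 0.
W_half = rng.normal(size=(n_neurons, d))
W0 = np.vstack([W_half, W_half])                                # shape (2n, d)
a = np.concatenate([np.ones(n_neurons), -np.ones(n_neurons)])   # +1 / -1, not trained

def f(W, x):
    """Two-layer net f_theta(x) = sum_i a_i * tanh(w_i^T x); theta = W."""
    return a @ np.tanh(W @ x)

def phi(x):
    """Feature map phi(x) = grad_W f at W0 (one entry per parameter, P = 2*n_neurons*d).
    d/dw_i of a_i*tanh(w_i^T x) = a_i * (1 - tanh^2(w_i^T x)) * x."""
    pre = W0 @ x
    return (a * (1.0 - np.tanh(pre) ** 2))[:, None] * x[None, :]  # (2n, d)

def ntk(x, xp):
    """Tangent kernel k(x, x') = <phi(x), phi(x')> (Frobenius inner product)."""
    return np.sum(phi(x) * phi(xp))

x = rng.normal(size=d); x /= np.linalg.norm(x)
print("f at init (should be ~0):", f(W0, x))

# Linearization g_theta(x) = f_{theta_0}(x) + <phi(x), W - W0>, vs the true f.
delta = 1e-2 * rng.normal(size=W0.shape)
g = f(W0, x) + np.sum(phi(x) * delta)
print("f(W0+delta):", f(W0 + delta, x), " linearization g:", g)
print("k(x, x):", ntk(x, x))
```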
So it has the same dimensionality as-- OK, I guess it also depends on what f is. So let's suppose f is from some RD to R. D is the dimension of x. But the point is that the output is one-dimensional. And then you take the gradient with respect to theta. You get a P-dimensional vector, where P is the dimension of the theta. So the gradient with respect to theta has the same dimension as theta. That makes sense, right? So this is a P-dimensional vector, and you just take inner product of two vectors to define a feature. Makes sense? Cool. OK? So I guess to proceed, I'll define two notations just for the simplicity. So let's define L hat f theta to be the empirical loss with the model f theta. This is just a formality so that we can write this easier in this. And L hat g theta is the loss with the model xg theta. OK? So the key idea is that-- so in certain cases, this Taylor expansion makes sense, right? So I guess, the Taylor expansion can make sense, can work for certain cases. Here, we're going to hide-- what for what cases it makes sense, can work, it's going to be a big question that we probably will discuss at the very end. But so far, let's say just let's see how does it work. So the way that they work is the following, so in the following sense. So how do you say it works, right? So you say that there exists a neighborhood of theta 0 such that in this neighborhood-- so let's call this neighborhood b theta, theta 0, such that-- several things happens. So one thing is that you have an accurate approximation in terms of function value. So the f theta is somewhat close to g theta of x. And as a result, L hat f theta is close to L hat g theta for every theta in this neighborhood B theta 0. So that's something you want, which makes sense, right? So this is the point of Taylor expansion. You want to approximate original function. And also, you want that it suffices to optimize in B theta 0. Because if, in this B theta 0, there is no good-- maybe let me draw this again. So basically, what we are saying is there is a neighborhood that's got B theta 0. And this neighborhood, first of all, you have-- say suppose your empirical loss is look like this. And maybe there's something else happening somewhere else. We don't know. So first of all, if you do the quadratic approximation, using Taylor expansion on theta 0. Let's say this is theta 0. You do a quadratic expansion. It looks something like this, very close. This is my drawing. So basically, you can think this red one, this red one, is g theta of x. And the black one is f theta of x. So the quadratic expansion is very close to the original expansion-- sorry, the original function. And second, you want that it suffices to optimize here, right? Because if both the red and the black curve, even though they are close, if they are both very high, it doesn't make sense to zoom into this region, right? You should leave this region. But you can say that it suffices to optimize here in terms of following sense. So there exists an approximate global min theta hat in B theta 0. So I'm using the superscript for the 0, which might be a mistake. But let me consistently use that. I think in some other lectures I use superscript for time. So that's why I keep using superscript for time. Anyway, so you want to have a theta hat such that it's global min. And actually, here, you want that L hat g theta hat to be approximately 0. And this indicates that you are global min because 0 is the minimum. There is no way you can go below 0. 
So if you are close to 0, it means you have to be close to a global min. And this also implies that L hat f theta hat is close to 0. But with these two, we still don't really understand how we optimize the black curve, right? So, 3, you also want to know that optimizing this loss L hat f theta is similar to optimizing L hat g theta, and not only that, but also that the optimization does not leave B theta 0. Because if you leave B theta 0, then all bets are off; your Taylor expansion breaks. So you have to say that, when I optimize either L hat f or L hat g, I don't leave this region. Everything is confined to this region. And this is how we make it work. Of course, you can ask whether this really reflects what happens in reality. The answer is no, not always. But so far, we are just trying to make this work in certain cases, so that we can appreciate why we have to improve on it later. So in some sense, 3 is a bit like an extension. 3, to some extent, follows from 1 and 2. Because if you have a global minimum in this region, and the black and red curves are close, then optimization probably should converge to that global minimum, and you should stay in that region. To some extent it follows from 1 and 2, though not exactly; technically, it still requires a formal proof. So what I'm saying is that, if you just want something somewhat informal to think about the dependencies, then probably you only have to make sure 1 and 2 are happening. But if you really want everything, then you need to prove 3 as well. And 1, 2, 3 can all be made true in various settings with overparameterization and/or some particular scaling of the initialization. So you play with the initialization or you play with the width, and also you need small stochasticity, even zero stochasticity. If you play with the overparameterization and the scaling of the initialization, and also insist that there's no stochasticity that makes you leave or go very far-- because the stochasticity would let you leave the local neighborhood; that's why you want small stochasticity-- then you can achieve all of this. And how do you get small stochasticity? In a nutshell, you either need a smaller learning rate or full-batch gradient descent. So in some sense, this is the limitation, right? It's a limitation because you require this, and it's also a limitation because you have to play with the scaling and the width. And what you eventually get is probably not exactly matching what people do in practice. OK, cool. So now, let's see how we do 1 and 2. But still, regardless of all the limitations, this is an interesting approach. It's kind of surprising that such a region even exists. Even if you just think about 1 and 2, ignoring any limitations, it's still interesting that there exists such a region where you are basically close to a convex function-- actually a quadratic function if the loss is quadratic-- and there's still a global minimum there. It suggests that there's a lot of flexibility in the landscape of neural networks, right? When you have a lot of overparameterization and nonconvexity, then somewhere you have to have a convex region. That's basically what it's saying. Globally this landscape is very nonconvex, very complicated, but at some special places, in some neighborhoods, you really have a convex function.
And that convex function has a global minimum, which is 0. So even this is still somewhat surprising. OK. So now, let's try to formalize 1 and 2, and then we talk about 3. How do we do this? Let's introduce some notation. Let phi i be phi of xi, the feature for the i-th example, which is really the gradient grad theta f theta 0 of xi. And define the feature matrix Phi to be the matrix whose rows are phi 1 transpose up to phi n transpose. You put all the features in the rows, so this is n by p, where p is the number of parameters. So now, we can see that the loss function with respect to the linear model is just a linear regression problem, which you are probably familiar with. I'm taking quadratic loss, or mean squared loss. So this is just 1 over n times the sum of yi minus phi of xi transpose delta theta-- recall that you basically have a linear model in delta theta-- squared. If you write it in matrix notation, this would be 1 over n times the 2-norm squared of y vec minus Phi times delta theta, where y vec is the concatenation of all the labels, which is in R n. So this looks exactly like linear regression, where delta theta is your parameter and Phi is your design matrix or feature matrix. And let's assume-- this is just for convenience-- that yi is on the order of 1, so that the 2-norm of y vec is on the order of square root n. So here's the lemma that characterizes-- this is, in some sense, for condition 2. You are trying to see in what neighborhood you have a global minimum. Lemma: suppose p is bigger than n-- you have more parameters than data points-- the rank of this feature matrix equals n, and the minimum singular value is equal to some sigma greater than 0. Then let delta theta hat be the minimum norm solution to Phi delta theta equals y vec. So you want to fit Phi delta theta to y vec, and you want to understand what the nearest global minimum is. This is the nearest global min in some sense, right? Because if you fit it exactly, you are achieving the global min, and you want delta theta hat to be the smallest, so that means you are looking for the nearest one. And if you are looking for the nearest one, then you have a bound on the nearest global minimum, and the bound is something like square root n over sigma. The bound itself, so far, is not that interpretable. But the point is that this means that, if you take the ball B theta 0 with this radius-- that is, all the theta of the form theta 0 plus delta theta such that the 2-norm of delta theta is less than on the order of square root n over sigma-- then this ball B theta 0 will contain a global minimum. OK, so this characterizes how large the ball, how large the region, needs to be so that it contains a global min. And the number here, so far, is not interpretable by itself. I'm going to compare it with some other things. Because by itself, how large is the region? If you just care about 2, then you can take the region to be as large as possible; you have to compare it with something else. And the proof is pretty easy; this is really a simple, almost trivial thing. You can write delta theta hat as the pseudo inverse of Phi times y vec, because the minimum norm solution is the pseudo inverse of Phi times y vec.
And there are some-- I guess this is not extremely obvious, but you can invoke some relatively basic properties of the pseudo inverse. You know that the operator norm of the pseudo inverse is at most 1 over the minimum singular value of Phi-- actually, I think they're exactly equal-- and this is equal to 1 over sigma. And then you have a bound on the 2-norm of delta theta hat, by the operator norm of the pseudo inverse of Phi times the 2-norm of y vec. So this becomes 1 over sigma times square root of n. That's it. I guess I don't even need a big O. I don't know why I have the big O, sorry. For me, it's always safe to have a big O, so it's just part of my brain; you cannot work without big O anyway. Oh, actually, I think I do need a big O, because I'm only assuming that the norm of y vec is on the order of square root n. So that's why I need a big O. But anyway, the constant doesn't matter here. You get the point, I guess. OK, so any questions so far? So now, let's see whether this region is too big or too small. It sounds somewhat big because n is there. But actually, you'll see that the region is not that big, because sigma could be very big in some sense. Or rather, these are relative things: you have to compare it with something else, namely with how good the approximation is in the region. So the next lemma is for condition 1, in some sense. Lemma: suppose the gradient of the network is beta-Lipschitz in theta, in the sense that, for every x and for every theta and theta prime, you have the following. So what I'm writing here, grad theta f theta of x, is a function of theta, because I evaluate the gradient at some arbitrary theta, not at theta 0-- sorry, I wrote theta 0 first, my bad. I want this, as a function of theta, to be Lipschitz in theta. That means that, if you choose two different places theta and theta prime, the difference between the two gradients in L2 norm-- I have to use the L2 norm here because they are vectors-- is bounded by beta times the difference between theta and theta prime in the theta space. If you have this, then we know that f theta x minus g theta x, your approximation error, is less than big O of beta times the 2-norm squared of delta theta. Because the difference between these two basically depends on how far you are away from the reference point: at the reference point they are exactly the same, and if you move a little bit away from the reference point, then you incur some error, and the error is something second order. That's also intuitive. So the important thing is that, for every theta in the B theta 0 that we just defined, we have that f theta x minus g theta x is less than on the order of beta n over sigma squared. And that's just by plugging in the definition of B theta 0: B theta 0 has radius square root n over sigma, and you plug that in here. You get that, in this region, you have some bound on how good your approximation is. So-- is that beta n? Oh, sorry, yes, beta times n over sigma squared-- my bad, it's just a copy-pasting error on my part. OK, so far this bound-- so I saw a question in the chat; let me get to it in a moment.
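As a quick numerical check of the minimum-norm lemma above, here is a small sketch I am adding (the sizes are my own toy choices): build a random fat feature matrix Phi with p much larger than n, compute the minimum-norm solution with the pseudo inverse, and compare its norm to the bound ||y vec||_2 / sigma_min(Phi).

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 200                      # n data points, p >> n parameters

Phi = rng.normal(size=(n, p))       # feature matrix; full row rank almost surely
y = rng.normal(size=n)              # labels with O(1) entries, so ||y|| ~ sqrt(n)

# Minimum-norm solution of Phi @ delta = y via the Moore-Penrose pseudo inverse.
delta_hat = np.linalg.pinv(Phi) @ y
print("residual ||Phi d - y||:", np.linalg.norm(Phi @ delta_hat - y))  # ~0

sigma_min = np.linalg.svd(Phi, compute_uv=False).min()   # smallest singular value
bound = np.linalg.norm(y) / sigma_min                    # lemma's bound ||y|| / sigma
print("||delta_hat|| =", np.linalg.norm(delta_hat), " bound =", bound)
```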
By the way, feel free to unmute, but I can read the question now. So, how do we define phi superscript plus? What is this phi plus? This is the pseudo inverse of Phi-- this is the most common definition of the pseudo inverse. You can roughly think of it as the inverse of Phi, with some small caveats. Thanks for the comments in the chat. I think this is supposed to be taught in a linear algebra course, maybe; I'm not sure what else I can say about it. At least in this case, for the sake of simplicity, just think of the pseudo inverse as the inverse if you are not super familiar with it. And then you can verify that this is a valid solution to the equation: you plug this delta theta hat into the equation, Phi cancels with the pseudo inverse of Phi, and you get y vec. That's how you verify it is a solution. Another useful thing to know is that the pseudo inverse has the inverted spectrum of the original matrix. Suppose Phi has singular values sigma 1 up to sigma k; then the pseudo inverse has singular values 1 over sigma 1 up to 1 over sigma k. If all the sigmas are positive-- you ignore the 0 singular values-- then this is exactly true. So the singular values are basically just inverted. OK, cool, I hope that answers the question. OK, going back to the second lemma, the one for condition 1: it's saying how good your approximation is in this neighborhood. We got this number, and I'm going to explain this number; that's the important thing. How small is it? If it's small, that's great; if it's big, that's a problem. But first let me give the proof of the lemma. The proof basically follows from the fact that, if h theta satisfies that the gradient of h is beta-Lipschitz-- and gradient-Lipschitzness is basically equivalent to the Hessian having operator norm bounded by beta, if everything is twice differentiable-- then you can bound the error of the first-order Taylor expansion. You can say that h of theta, minus h of theta 0, minus gradient of h at theta 0 transpose times theta minus theta 0, is bounded by O of beta times the 2-norm squared of theta minus theta 0. And in our case, if you take h theta to be f theta x, then you get the lemma above. So the point is that your approximation error is second order in the difference between your point and the reference point. OK. And there's a small remark: if f theta involves ReLU, then nabla f theta is not even continuous, so it cannot be Lipschitz everywhere, and this requires some special fixes. The fixes are not that surprising, because even though the gradient is not continuous everywhere, it's still continuous almost everywhere, so it's kind of close to Lipschitz. And in some sense, if you look at the average over data points, you still have some Lipschitz-like behavior. But let's not discuss that; it's a low-level detail which is not that important. We can just assume we are not dealing with ReLU.
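To see this second-order behavior numerically, here is a tiny sketch I am adding (a smooth toy scalar model of my own, purely illustrative): it moves away from theta 0 along a random direction at several radii and checks that the gap between f and its linearization g shrinks roughly like the radius squared.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
theta0 = rng.normal(size=d)
x = rng.normal(size=d)

def f(theta):
    # A smooth scalar model f_theta(x) = tanh(theta^T x); its gradient is Lipschitz.
    return np.tanh(theta @ x)

def grad_f(theta):
    return (1.0 - np.tanh(theta @ x) ** 2) * x

phi0 = grad_f(theta0)                        # feature map at theta 0
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

for r in [1.0, 0.1, 0.01]:
    theta = theta0 + r * direction
    g = f(theta0) + phi0 @ (theta - theta0)  # first-order Taylor / NTK linearization
    err = abs(f(theta) - g)
    # err / r^2 stays roughly bounded, i.e. the error is second order in the radius
    print(f"radius {r:5.2f}   |f - g| = {err:.2e}   err / radius^2 = {err / r**2:.3f}")
```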
We are dealing with something like sigmoid, and then there is no such issue. OK, cool. So now, let's go back to the main thing, which is whether this is a good bound. You have found this B theta 0, and you have shown that in B theta 0 you have such an approximation error. So the important question is: what is this beta n over sigma squared? Is it small or big? And the interesting thing is that this quantity is not scale-invariant. n is something you cannot change, right? But beta over sigma squared is not scale-invariant. What does that mean? I think the easiest way to think about it is that you have sigma squared on the bottom and beta on top, so somehow you can play with the scaling to make this go to 0. So there are two cases. Actually, there are more than two cases, but I'm going to discuss two. These come from different papers, but I'm going to unify them in the following way. There are two cases where beta over sigma squared can go to 0. The first way is that you reparameterize with a scalar. This is in Chizat and Bach, I think 2019, and the paper is called something like Lazy Training of Neural Networks. The paper title suggests that they're saying this is a lazy way of training networks; it's not really the final word. But nevertheless, the paper is very nice. And what they do is the following. So let your parameterization f theta x be the following: you take alpha times f bar theta x. And let's make f bar a fixed, standard neural network-- fixed in the sense that you don't change the architecture. You take whatever standard network with some finite, fixed width and depth, and so on, something that you don't change; it's fixed from this perspective. And you only change alpha. So for every alpha you define, it's a perfectly valid network; it just has a different scaling in front of it. For every alpha, you get a neural network, and let's see how everything changes as you change alpha. Also, you fix the initialization scheme theta 0. Then let's say sigma bar is the sigma min of the base network-- the base network is f bar theta, so this is the minimum singular value for the base one-- and let beta bar be the Lipschitz constant, also of the base one. So you can think of sigma bar and beta bar as not changing as you change alpha. And now, let's see how alpha changes the final sigma and beta of your network. So sigma is equal to alpha times sigma bar. Because once you multiply f by alpha, all the features, like the gradients, become alpha times bigger. Everything becomes alpha times bigger. This is just the chain rule: if you take the gradient with respect to theta of f, it's the same as alpha times the gradient of f bar with respect to theta. So everything gets scaled. And beta also gets scaled by alpha, just because the gradient gets scaled for the same reason. And then you can see that you get, for free, some factor of alpha in this equation.
So beta over sigma squared becomes beta bar over sigma bar squared times 1 over alpha. And this can go to 0 as alpha goes to infinity. So basically, they're saying that whatever network you take, whatever initialization, as long as your sigma bar and beta bar they are reasonable and they are not 0 or something like that-- and now, sigma bar is not 0. So you have some beta bar over sigma bar squared. That might be bad. But you can always rescale, reparameterize it, with a constant in front of it so that this key quantity, beta sigma squared, becomes going to 0. And if this goes to 0, what does it mean? It means that your approximation becomes better and better. And at some point, if you change your alpha large enough, you make this approximation super good, right? So basically, you found the neighborhood such that, in that neighborhood, your approximation is very good if you take alpha to be big. [INAUDIBLE] No, the loss wouldn't change, right? That's a good question. The loss, what is the loss? The loss is something composed with-- composed on top of this network, right? So the loss is L of-- for example, alpha f bar theta x, y, right? So first of all, at initialization, we always try to make the initialization 0, the output at initialization 0. So that wouldn't change. And second, even though seemingly this whole thing is big-- sure, that's true. But we show that you have a global minimum where this B in this neighborhood you have a global minimum. I'm not sure whether that makes sense. So in some sense, I think-- OK, let me try to draw a figure to answer this question. So the question is what happens-- when alpha is big, it sounds like function value becomes big, right? So that's true. But I think what happens is that, for example, suppose you have-- not sure. So how do I visualize this? I think your loss will be-- so if you stretch alpha, your loss will be sharper. So if you look at everything, you look at dependency on alpha, so if you make alpha bigger, you make this neighborhood smaller, right? So you make the neighborhood smaller. So you're going to get something like this, very sharp in the neighborhood. So if alpha is bigger, actually you can find even something that is very close by to make the-- so you have to even move even less from initialization. That's because, if you do a little bit of work, then you actually already kind of already fit the data. I'm not sure whether that makes sense. OK, so there's always one thing which is useful, which is, the f theta 0 x, this is 0. So basically, you always start with this where you don't have any scale, right? So this is just literally 0. And if alpha is big, then this is still 0, right? But when alpha is big, you are more sensitive to theta, right? So that's why, if you change a little bit, then you can already fit your data. So you only have to change very, very little from the theta 0 to fit your data. And when you change very little, then actually your approximation is very good in that neighborhood. I'm not sure whether that makes some sense, but maybe you can discuss. It's a little bit confusing. I agree, right? It's just really because the only thing that happens here is how does this beta and sigma-- the relative difference between beta and sigma, how does that depend on alpha, right? So in some sense, if you have larger alpha, you need to have smaller neighborhood. But the approximation errors scales faster because your function is kind of much kind of more nonsmooth, right? So your function becomes sharper. 
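To put rough numbers on this, here is a tiny sketch I am adding (a one-parameter toy model and constants of my own choosing, not from the paper): multiplying a fixed base model by alpha multiplies both the gradient scale and the gradient's Lipschitz constant by alpha, so the required radius shrinks like 1 over alpha while the sharpness only grows like alpha, and the key ratio beta over sigma squared shrinks like 1 over alpha.

```python
import numpy as np

x, theta0 = 1.0, 0.3     # a single input and a toy initialization

def grad_base(theta):
    # d/dtheta of the base model tanh(theta * x)
    return x * (1.0 - np.tanh(theta * x) ** 2)

def hess_base(theta):
    # d^2/dtheta^2 of tanh(theta * x): proxy for the gradient's Lipschitz constant
    t = np.tanh(theta * x)
    return -2.0 * x**2 * t * (1.0 - t**2)

sigma_bar = abs(grad_base(theta0))   # "sigma" of the base model (1 example, 1 parameter)
beta_bar = abs(hess_base(theta0))    # "beta" of the base model

for alpha in [1, 10, 100, 1000]:
    sigma = alpha * sigma_bar        # features scale linearly with alpha
    beta = alpha * beta_bar          # so does the gradient-Lipschitz constant
    radius = 1.0 / sigma             # needed radius ~ sqrt(n)/sigma (n = 1 here)
    print(f"alpha={alpha:5d}  radius~{radius:.4f}  beta={beta:9.3f}"
          f"  beta/sigma^2={beta / sigma**2:.5f}")
```

The printed ratio shrinks like 1 over alpha, which puts a number on this tradeoff.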
But actually, the neighborhood shrinks faster than the sharpness grows. So that's why it's working. Yeah, I hope that somewhat answers the question. But generally, this is somewhat confusing, I agree. And there's another case where we can also see this. The other case is if you overparameterize. This is actually the setting that the original first few papers on the NTK approach take. So basically, you have a model y hat, which is equal to 1 over square root m times the sum of ai sigma of wi transpose x. This is a two-layer network with m neurons. And I'm scaling it this way mostly for convenience, because whatever scale you use, you can change other scales to compensate. The convenience comes from the fact that, if I choose everything on the order of 1, then this outputs something on the order of 1, as you will see. But let's discuss that in a moment, after I introduce the notation. So I'm going to have this matrix W, which contains all the wi's as rows, so W is m by d. And sigma is ReLU here-- well, actually, maybe let's not use ReLU. Let's say sigma is something that is 1-Lipschitz and has a second-order derivative. You won't see exactly how those conditions come into play explicitly; they're not super important. And what is the initialization? So ai is initialized to be plus or minus 1 and not optimized at all, so the ai's are not even parameters, technically speaking. And wi is a parameter; wi 0 is initialized from a d-dimensional Gaussian with spherical covariance. And let's say the norm of x is on the order of 1. This is just for convenience, so that we have a fixed scaling. And the parameter theta, which is in dimension d times m, is really just the vectorized version of W. And we'll assume m goes to infinity-- so m is eventually, technically, some polynomial in n and d. n and d are considered to be fixed, and m is something that becomes bigger and bigger. That's where the power comes from; everything comes from the scaling of m. So, just to explain why we want this 1 over square root m and an initialization scale like this: one reason is that, if you look at sigma of wi 0 transpose x, this is on the order of 1. Because wi is a spherical Gaussian and x has norm 1, the inner product of a spherical Gaussian with a norm-1 vector is roughly on the order of 1. And then you take ReLU, or something like a sigmoid, of it, and you are still on the order of 1. And then the sum of these will be on the order of square root m, because you have m of these things with somewhat random plus 1, minus 1 signs-- the ai's are plus or minus 1-- so they cancel in some sense, and you get square root of m. And that means f theta 0 of x is on the order of 1, because you have another 1 over square root m in front. So that's one reason why you choose this scaling, OK? So initially, our output is on the order of 1. And now, let's see how sigma and beta depend on all of these quantities. We hope that the key quantity beta over sigma squared goes to 0 as m goes to infinity. So let's first look at sigma. Sigma is the sigma min of the feature matrix Phi, and this is essentially determined by the sigma min of Phi Phi transpose.
This is just equality because phi phi transpose, the spectrum, is just the square root of the spectrum of phi. And what is phi phi transpose? Phi phi transpose is basically this empirical kernel matrix, right? The ij, essentially, is just the inner product between two features of two examples. And let's look at what the scaling of this phi phi transpose. So to do that, you have to look at what's the gradient. So let's look at the gradient. So f theta, if you look at the derivative of the output with respect to each of these wi, then you can use chain rule and then you can get something like this times x. So this is the gradient of every neural wi, every vector wi. And that means that, if you look at the gradient, the entire gradient, all the gradient of all the vectors if you look at the norm, then it's 1 over m times the sum over m of the i transpose x times x 2-norm square, which is 1 over m times the 2-norm of x squared times-- and what is this? It's kind of hard to know exactly what is this, but I think you mostly care about what's the dependency on m, right? So what's the dependency on m? Then this, as m goes to infinity by concentration-- so as m goes to infinity, this is really just converging to expectation because this is empirical sum. This is a 1 over m here, right? So sigma prime w transpose x square where w is from the spherical Gaussian times the 2-norm of x square, which is 1 basically, right? And this whole thing will not depend on-- this whole thing will be something like O of 1. So I guess, to see it's O 1 maybe it's some somewhat tricky, but at least you know that this is not depending on m. So m is not in this equation. So basically, this is saying that every quantity here, as m going to infinity, the norm of this is on order of 1, doesn't change that. m goes to infinity. And also, you can do the same thing for the inner product of 2, for example. And the same thing happens that, if you look at the inner product, it's something like this, I transpose-- so this is, I think, technically there should be a 0 here. That is the initialization, prime. And as m goes to infinity, by concentration, this is concentrated around the expectation of it. The expectation is something like sigma prime wi transpose x sigma prime wi transpose x. This I can write the following, w transpose x sigma prime w transpose x prime times x and x prime, where w is from the spherical Gaussian, right. So again, this does not depend on m, OK? So basically, this is saying that this entire matrix phi phi transpose goes to some kind of a constant matrix as m goes to infinity. And I think, this matrix, sometimes people call it K infinity. And this is the neural tangent kernel with m equals to infinity. So this is the fixed matrix. And you can show that this is a matrix that at least is a full rank. So I'm going to skip this part. So it can be shown that this K infinity is full rank. And let's take sigma min to be the sigma min of K infinity, which is larger than 0. Then, basically, you can show that the phi phi transpose for the sigma min of phi phi transpose-- sorry, phi phi. This is larger than, for example, 1/2 times sigma min, if m is sufficiently big, just because phi phi transpose is converging to the constant matrix K infinity. So if m is sufficiently big, then your eigenvalues should also converge. This value, again, is not-- I didn't do it exactly rigorously. But you can expect that, when you converge to some matrix, your eigenvalue, your spectrum should also converge to that matrix. 
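Here is a rough numerical sketch of the width-scaling case that I am adding (a toy experiment with my own sizes and a tanh activation; nothing here is from the lecture notes): for increasing width m, it builds the empirical tangent Gram matrix Phi Phi transpose, checks that its smallest eigenvalue stabilizes, and also computes a crude estimate of the gradient's Lipschitz constant-- the beta that is analyzed right after this-- so you can watch the ratio beta over sigma squared shrink.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 10, 5
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # unit-norm inputs

def grads(W, a, x):
    """Gradient of f(W) = (1/sqrt(m)) * sum_i a_i tanh(w_i^T x) w.r.t. W, flattened."""
    m = W.shape[0]
    s = 1.0 - np.tanh(W @ x) ** 2                 # sigma'(w_i^T x)
    return ((a * s)[:, None] * x[None, :] / np.sqrt(m)).ravel()

for m in [50, 200, 800, 3200]:
    W = rng.normal(size=(m, d))                   # spherical Gaussian initialization
    a = rng.choice([-1.0, 1.0], size=m)           # fixed +/-1 second layer

    # Empirical tangent Gram matrix K_ij = <grad f(x_i), grad f(x_j)>
    Phi = np.stack([grads(W, a, x) for x in X])   # n x (m*d)
    K = Phi @ Phi.T
    sigma_sq = np.linalg.eigvalsh(K).min()        # = sigma_min(Phi)^2

    # Crude estimate of the gradient's Lipschitz constant via a random perturbation
    # (a lower bound; the worst-case direction would give a larger value).
    Wp = W + 0.1 * rng.normal(size=W.shape)
    num = max(np.linalg.norm(grads(W, a, x) - grads(Wp, a, x)) for x in X)
    beta_est = num / np.linalg.norm(W - Wp)

    print(f"m={m:5d}  min eig of K = {sigma_sq:.4f}  beta ~ {beta_est:.5f}"
          f"  beta/sigma^2 ~ {beta_est / sigma_sq:.6f}")
```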
So with all of this, basically this is saying that your sigma-- this is our sigma, right?-- is not changing, in some sense, as m goes to infinity. But let's see how beta changes as m goes to infinity. We will show that beta goes to 0 as m goes to infinity, so that beta over sigma squared, the key quantity, will go to 0. Let's see how much time I have. OK. So now, what we do is look at the Lipschitzness of the gradient-- the constant beta-- which means you care about the difference between the gradient at theta and the gradient at theta prime. And we have computed what the gradient is. Both of these gradients are matrices, because theta is a matrix here, and the gradient with respect to each row wi is 1 over square root m times ai sigma prime of wi transpose x, times x, for i from 1 to m. Each of these is one block of the gradient. So if you look at the Euclidean norm of the difference between the two gradients, it's the sum of the norms of each of the components. You get a 1 over m, which comes from squaring the 1 over square root m, and then the norm of each component: this is a scalar times a vector, so you get the x 2-norm times the scalar sigma prime of wi transpose x minus sigma prime of wi prime transpose x, squared. Now let's try to get rid of the sigma prime. We can say this is less than 1 over m times the same sum just without the sigma prime, assuming that sigma prime is O of 1-Lipschitz. And of course, this doesn't work for ReLU; as I said, for ReLU we have to fix it in some way. And then you get rid of the x-- the norm of x is 1, as we assumed. For the remaining part we just use Cauchy-Schwarz: wi transpose x minus wi prime transpose x squared is at most wi minus wi prime 2-norm squared times x 2-norm squared, and x 2-norm squared is also 1. And then this whole thing is 1 over m times the squared distance between theta and theta prime in Euclidean distance. So this is saying that the Lipschitz constant is 1 over m-- oh, I guess the Lipschitz constant is 1 over square root m, because we haven't taken the square root yet. So beta is 1 over square root m. And now, if we look at the key quantity, beta over sigma squared, this equals 1 over square root m divided by sigma squared, where sigma squared is on the order of the sigma min of K infinity, something that doesn't depend on m. So this will go to 0 as m goes to infinity. Here, the radius you need is always the same, because sigma is always the same, but your function becomes more and more smooth-- your gradient becomes more and more Lipschitz-- as you have more and more neurons. That's why, eventually, as you have more neurons, you get into this regime. OK, so let me take the next 10 minutes to discuss the outline of the next steps. Any questions so far? So now, suppose I try to establish 3. Recall that 3 is about saying that optimizing g and optimizing f are similar. You can basically do two things. There are a lot of different ways to analyze this, and I think all the analyses can be thought of as two steps implicitly, even though the first step probably doesn't have to be written in the paper-- but I'm pretty sure many people do it when they derive the analysis. So the first step, A: it sounds reasonable to first analyze the optimization of L hat g theta.
And the second step is that you somehow analyze optimization of L hat f theta by somewhat reusing proofs in A in some way. Of course, you cannot re-use exactly, but you can probably re-use most of the ideas. And your intuition is that these two things are similar, so somehow you can reuse the proof to do the actual optimization for the neural artwork f theta. And there are two ways for A. I think, essentially, you can say two ways. Maybe there is a possibility that I missed some of the existing papers. But roughly speaking, there are two ways for A. And, therefore, there are two ways for B in some sense. So the first way, let's say i, is that you leverage the strong convexity of this L hat g theta, and then show exponential convergence. I have to say that the definition of strong convexity, I'm not sure whether I have really given it in this course. This is a stronger notion of convexity if you haven't heard of it. You probably don't. It's not super essential for this course. But if you have heard of it, you know what kind of things I'm talking about. Because this analyzing A, this is analyzing how do you optimize a convex function. It does require a little bit of optimization background. At least on a conceptual level, you can imagine there are many different ways to analyze all optimizations for regression. So strong convexity is the stronger version of convexity. And you can somewhat use that to get the very fast convergence rate. Exponential means, every time, you decay the error by a constant factor so that you get exponential decay of the errors. And another way to do this is that you don't use the strong convexity because sometimes you actually don't have the strong convexity in certain cases. So you don't use the strong convexity, but only use the smoothness. The smoothness means that you have a bounded second order derivative. And again, if you have taken some courses about optimization, then this would make a lot of sense probably because there are different ways to analyze optimization. Sometimes you only have smoothness. You have a different kind of analysis. And based on these two approaches, you can get two different proofs for B as well. And we're only going to talk about A. So we only talk about A-- sorry, talk about i, the first approach. And for this approach, no prior knowledge is required. You probably wouldn't understand exactly what I'm saying about this conceptual thing, but the actual proof doesn't require prior knowledge. And it's actually also pretty intuitive by itself as well. So I think we are going to talk about the approach, the concrete analysis, next week, next lecture. But before ending this lecture, let me make another remark, which I think is useful. And in some sense, it's useful for two, for the second approach, more. But it's also useful for the first approach. So this is an interesting observation, or maybe intuition you can say, and particularly useful for two. So this is saying at any theta t. Suppose you take this Taylor expansion with reference point theta t. So now, we are not taking Taylor expansion at theta 0. We are taking Taylor expansion at theta t. You can define this g t of theta x is a function of theta. And it Taylor expanded at theta t, so the reference point is theta t. And they have gradient f theta t x times theta minus theta t. So this is the linear function. And then you can consider nabla L f theta at theta t, right? So this is the gradient that you actually-- This is the gradient you are taking. 
Because what you really care about is optimizing f, right? So this is the gradient you are taking. But actually, it's the same as the gradient of this Taylor expansion at a same point, theta t. So these two thing-- there's two t here. This is theta t. And this t is indicating that this is also Taylor expansion at the reference point theta t. So while this is the case, I guess, if you want, you can take the derivative and you can verify it. But fundamentally, this is actually-- it's really just saying that, f theta t, f theta and g theta t agree up to first order at theta t. This is by Taylor expansion. If they agree up to first order at theta t, then anything that's-- so this implies L all of f theta and L of g theta t also agree up to a first order at theta t. So that's why-- so what does this really mean? This really means that gradient descent on f, on this function or maybe technically on L hat f theta, you are taking gradient only with respect to the f. This is the same as taking online gradient descent. I guess I haven't defined online gradient descent, but let me define that in a moment after I write down-- on a sequence of changing objective L g theta 0 up to L g theta t. So what does online gradient descent really mean? It just really means that every time you take the gradient of the new function-- you have a sequence of functions. And every time you get a new function and you take the gradient of that function, you take a one step. So that's online gradient descent. So basically, you are saying that taking gradient descent on this fixed function L hat is the same as taking gradients updates with respect to a sequence of changing functions. And this is actually how the second step, the second approach, really works. So this means that you can use online learning approach. I guess, in this culture, I'm not planning to talk about online learning. But online learning is trying to deal with the case where we have a changing sequence of changing functions. So you are not optimizing a single function. You have a changing-- changing distribution, or changing environment, or changing loss function, whatsoever. So there is a rich literature on how do you analyze optimization when you have a sequence of changing loss functions. And this is exactly what this is about. You are having a sequence of changing loss functions. And if you analyze that, you can analyze the original cases. Now, here there are also spectral structures about these loss functions because they are all somewhat similar to each other, right? So they are all Taylor expansions with respect to reference points that are in a small region. So you can also leverage additional information about that. Yeah, so this is chapter 10 in the lecture notes. But I think, in this quarter, I just don't think we have time to go there. OK, I think I'm already 5 minutes late. And next lecture, we are going to talk about the approach one, which is more self-contained and also kind of cleaner to some extent. OK, maybe just a last comment-- I think there are many different neural tangent kernel papers. I probably am not super comprehensive, but I think most of them basically is a combination of these several things. So one thing is that you have to optimize this, establish this third step of optimization. And you have two ways, two large ways, and maybe some even subtle differences, underlying differences. And also, you have to establish the first two properties. And those are properties not about optimization. 
They are about your parameterization of your function class or initialization, right? So there, you can also have a bunch of different flexibilities. You can change the reference, the scaling. You can change the width. You can do many different things. Or you can even change, for example, the architecture in certain cases to make it more efficient or less efficient in certain cases. Yeah. So I'm not-- I don't want to have a very comprehensive discussion of this NTK just because there are so many limitations. But I think it's a useful thing to know given that there are so many works in it. And there are, indeed, some nice ideas there. OK, cool. So I guess I'll continue on next Wednesday. Thanks.
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_15_Implicit_regularization_effect_of_initialization.txt
OK, let's get started. I guess everything's working now. OK, cool. So last time we started talking about this so-called implicit regularization effect of the optimizers, and we discussed the very basic case, which is that if you use initialization zero and you use gradient descent on a regression problem, a linear regression problem, then what you get is the minimum norm solution. That's from last time, and today we're going to talk about a case where we have nonlinear models. We'll see similar phenomena, but we're going to have a somewhat different proof. OK, so let's dive into the details. So this is the nonlinear model we're going to consider-- you will see that the model is nonlinear, but it's actually not that different from a linear model, as you will see. There is a paper that can do a little bit more than this, but generally we don't know how to deal with very complex models like deep networks. So suppose beta is the parameter and x is the input, and the model is f of x equals the inner product between beta O-dot beta and x, where O-dot is the Hadamard product, meaning the entry-wise product. Basically, you entry-wise square the parameter and then take the inner product with x. So this is still linear in x, but it's not linear in beta. And the loss function will be nonconvex, because the model is nonlinear in beta: you plug it into the squared loss, and it becomes nonconvex. So it's not that interesting in terms of the model itself, because in any case you are fitting a linear model, but from the algorithmic, implicit-regularization-effect perspective it's still interesting, because you have a nonconvex objective function. And we're going to make this even more interesting by considering a special case: suppose the ground truth is that y equals the inner product of beta star O-dot beta star with x, where beta star is r-sparse. r-sparse means that the 0-norm of beta star is at most r; you only have r nonzero entries. And the reason we want this restriction on beta star is that we want to consider overparameterized models, meaning we consider the case where n is smaller than d. When n is smaller than d, if beta star is fully general, then there's no way you can hope to learn anything from fewer data points than the dimensionality. So we make sure beta star is sparse, and we're going to assume that n is smaller than d but larger than some poly of r. That's the setting we are going to work with. More specifically, for simplicity and without loss of generality, let's also assume beta star is nonnegative entry-wise, because you can see that the sign of beta star doesn't matter for the functionality of the ground truth model. And actually, for simplicity of this lecture, let's also assume that beta star is just the indicator vector of some subset S of coordinates, where the size of S is equal to r. This is only for simplicity of this lecture. OK, and now let's define our data. We've talked about the fact that we're going to have an overparameterized model. So we have n data points.
OK, now let's define the data. As we said, we're in the overparameterized regime: we have n data points x1, ..., xn with n less than d, drawn i.i.d. from a d-dimensional Gaussian with identity (spherical) covariance, and yi is generated from the model without any noise: yi = <beta* ⊙ beta*, xi>. So n is much, much less than d, but we'll assume n is at least Omega-tilde of r squared. This amount of data in principle allows us to recover beta*: counting degrees of freedom there are roughly r of them, so you'd only need n on the order of r, but for our theory to work we need n larger than roughly r squared. Still, if r is very small—say a constant—then n is just a big constant, and n can be much smaller than d while still being bigger than r squared. Any polynomial dependency on r is fine for our purposes. Now, after all these definitions, you may wonder why we use this nonlinear parameterization at all. The answer is that you don't have to if you really just want to solve the problem; the nonlinear model is only introduced to study the implicit regularization effect. If you care about solving the problem, you can use the classical solution, which is lasso—or, in the language of this course, L1 regularization. To leverage sparsity, people typically use the L1 norm to encourage sparse vectors; I'm not going into detail, but you can show that if you minimize the L1 norm of the parameter of a linear model, you can reconstruct sparse vectors. Concretely, take the linear model f_theta(x) = <theta, x>; the lasso objective is the squared loss plus lambda times the L1 norm of theta. The classical theory—if you don't know the background, just take it as a fact—says that if n is larger than r, up to logarithmic factors, then this objective recovers the ground truth theta*, where theta* corresponds to beta* via the entry-wise square, theta* = beta* ⊙ beta*, approximately. So if you just care about solving the problem, you view it as a linear model in theta—you don't need the quadratic parameterization at all—and use L1 regularization to recover the sparse vector. There is a rich existing theory for this, and it's believable because you're exploiting the sparsity of the vector.
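If you just wanted to solve the recovery problem, the classical route mentioned above is lasso on the linear parameterization theta = beta ⊙ beta. Here is a hedged sketch using scikit-learn's Lasso; the sample sizes and the regularization strength are illustrative guesses, not tuned values from the lecture.

```python
import numpy as np
from sklearn.linear_model import Lasso   # classical L1-regularized baseline

rng = np.random.default_rng(0)
n, d, r = 60, 200, 3
beta_star = np.zeros(d); beta_star[:r] = 1.0
theta_star = beta_star ** 2              # linear-model view: theta* = beta* * beta*

X = rng.standard_normal((n, d))          # x_i ~ N(0, I_d), i.i.d.
y = X @ theta_star                       # noiseless labels, n << d

# Treat the problem as linear in theta and use L1 regularization (lasso).
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=100000)
lasso.fit(X, y)
print("lasso recovery error:", np.linalg.norm(lasso.coef_ - theta_star))
```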
Another thing to note is the relationship between the two parameterizations: theta = beta ⊙ beta, so the 1-norm of theta—the sum of its entries—equals the sum of the beta_i squared, which is the 2-norm squared of beta. So the lasso objective—call it (1)—corresponds to an L2-regularized objective in the beta parameterization: sum over i of (yi - f_beta(xi))^2 plus lambda times the 2-norm of beta squared—call that (2). In the beta space you regularize the squared L2 norm; in the theta space you regularize the L1 norm. That's the classical solution. Now, when we talk about implicit regularization, our goal is essentially to say that if you use small initialization—with no explicit regularization at all—you are basically doing the same thing as (2). As long as you use small initialization with the beta parameterization, you get this L2 regularization for free, to some extent. That's not exactly how the theorem is stated, but it's the main idea. More concretely, the objective of interest is L-hat(beta) = 1/(4n) times the sum of squared errors; I normalize by 4 just because it makes the gradient look cleaner—it's only a constant factor—and there is no regularization. The optimizer we study is gradient descent on L-hat(beta) with small initialization. Concretely: for some very small alpha > 0, initialize beta_0 = alpha times the all-ones vector—you don't know the support of beta*, so you initialize every entry to alpha—and then take gradient descent steps beta_{t+1} = beta_t - eta times the gradient of L-hat at beta_t. This is the optimizer we're going to study, and we will claim that it actually finds beta* even though there's no explicit regularization. Any questions so far? Here is the theorem. The short version is: when n is Omega-tilde of r squared and alpha is small, the algorithm converges to beta*. But there are some details, so let me state the main theorem. Let c be a sufficiently large constant. Suppose n is at least c times r squared times log-squared d—the dependency on the logarithmic factor, and probably on r as well, is suboptimal—and let alpha be at most some inverse polynomial, roughly 1/d^c. Then for any total number of steps t with log(d/alpha)/eta <= t <= 1/(eta sqrt(d) alpha), we have that beta_t ⊙ beta_t recovers beta* ⊙ beta* in L2 norm up to an error of order alpha times sqrt(d).
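Here is a minimal sketch of the optimizer the theorem is about: plain gradient descent on the unregularized objective L-hat, started at alpha times the all-ones vector. The step size, number of steps, and alpha below are illustrative choices, not the constants from the theorem.

```python
import numpy as np

def gd_small_init(X, y, alpha=1e-6, eta=0.05, T=20000):
    """Gradient descent on L_hat(beta) = 1/(4n) * sum_i (<beta*beta, x_i> - y_i)^2,
    initialized at beta_0 = alpha * all-ones; no explicit regularization."""
    n, d = X.shape
    beta = alpha * np.ones(d)
    for _ in range(T):
        resid = X @ (beta * beta) - y          # residuals, shape (n,)
        grad = (X.T @ resid) * beta / n        # gradient of L_hat with respect to beta
        beta = beta - eta * grad
    return beta

# Usage (with X, y generated as in the sketch above):
#   beta_hat = gd_small_init(X, y)
#   np.linalg.norm(beta_hat**2 - theta_star)   # expected to be small if the theory applies
```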
So how do we interpret this? There are a few remarks. The first thing—something I probably should have mentioned earlier—is that L-hat(beta) has many global minima. Why? Because of overparameterization: counting degrees of freedom, you have n data points and d parameters, so there are more degrees of freedom than constraints, hence many, many minima. That's one of the reasons you can have implicit bias at all; if there were only one global minimum, there would be no implicit bias to speak of. Second, how do we interpret the quantities in the bound? The lower bound on the runtime depends only logarithmically on alpha. So you can choose alpha to be any inverse polynomial—with a constant exponent—and the runtime won't be affected much. The error, on the other hand, depends on alpha: if you want very small, inverse-polynomial error, just take alpha inverse-polynomially small, and the runtime barely changes. There is also an upper bound on the runtime, which means you need to early-stop according to this bound. If you take the theorem literally you have to early-stop, but the requirement is pretty mild, because the upper bound depends on the inverse of alpha: if you take something like alpha = 1/d^10, the upper bound is very relaxed and you can run for a long time. And in practice, if you actually run this synthetic experiment, we never observed that you have to early-stop; I don't believe you really do—it's more or less an artifact of the proof. But the artifact isn't too restrictive anyway, since the bound depends on 1/alpha, so taking alpha small makes it very loose. We didn't pay attention to removing it completely, even though we believe that's possible. So the right way to use this theorem is to take alpha super small: then the error is very small, and the runtime lower bound is only logarithmic in alpha. One small thing: alpha cannot be zero. The only reason is that beta = 0 is a saddle point—the gradient of L-hat at 0 is 0. This comes from the quadratic parameterization: when we compute the gradient you'll see it is always multiplied by beta itself, so if beta is 0 the gradient is 0. And since we're analyzing plain gradient descent—no noise, no stochasticity—if you start at 0 you stay there forever. That's why you can't use 0 as the initialization, but anything close to 0 is fine. In some sense, the log(1/alpha) factor you pay is exactly the time it takes to leave the saddle point.
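Circling back to the runtime and error bounds in the theorem, here is a quick sanity check on how to read them, plugging in the illustrative (not canonical) choice alpha = d^{-10}; the bounds are taken as stated above, and may be off by constants and logarithmic factors:

```latex
\frac{\log(d/\alpha)}{\eta} \;=\; \frac{11\log d}{\eta}
\;\;\le\;\; t \;\;\le\;\; \frac{1}{\eta\,\sqrt{d}\,\alpha} \;=\; \frac{d^{9.5}}{\eta},
\qquad
\|\beta_t \odot \beta_t - \beta^{\star} \odot \beta^{\star}\|_2 \;\lesssim\; \sqrt{d}\,\alpha \;=\; d^{-9.5}.
```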
And leaving the saddle point is actually fast: near the saddle you're essentially going downhill on a concave function, so you accelerate and leave very quickly. OK, cool. In some sense you can interpret the result as saying that gradient descent prefers the minimum L2-norm solution—or, more precisely, the global minimum closest to the initialization. We've alluded to this already, but to be formal, in this case you can prove the following: beta* is the argmin of the 2-norm over all beta that fit the data. That is, if you look for a global minimum with the smallest L2 norm, it is exactly beta*. The reason this is true is similar to why the L1 norm works: the 2-norm squared of beta equals the 1-norm of theta = beta ⊙ beta, and by the standard theory (which I didn't show), among all linear models theta that fit the data, the one with minimum L1 norm is the sparse ground truth theta* = beta* ⊙ beta*. Technically I should be a bit careful: minimizing the squared 2-norm of beta subject to fitting the data and minimizing the 1-norm of theta subject to fitting the data are the same objective under the substitution theta = beta ⊙ beta, but the argmins live in different spaces—you translate between them by taking the entry-wise square root. The argmin of the theta problem is theta*, and the argmin transfers accordingly, giving beta*. And recall this is also what happened for linear regression: we proved that gradient descent started from zero on linear regression gives the minimum norm solution that fits the data. So at least on the surface, from the formulas, the guarantee looks very similar. But I don't necessarily believe this is always the case—I don't think gradient descent always finds the solution closest to the initialization that fits the data. There is still something special about these examples; we cannot just extrapolate generically. OK, so now let's try to prove this. Any questions so far? The proof of this theorem is pretty involved. I'll try to finish it in one lecture, but if we can't, I'll refer you to the notes, which have a pretty detailed derivation. To prepare, let's first understand some basic facts about the loss function. First, the population risk: L(beta) = 1/4 times the expectation of (y - <beta ⊙ beta, x>)^2. Plugging in the definition of y to get rid of y, this is 1/4 times the expectation of <beta* ⊙ beta* - beta ⊙ beta, x> squared.
That's my population risk—I carry the extra 1/4 everywhere—and it becomes 1/4 times the squared 2-norm of the difference: L(beta) = 1/4 ||beta ⊙ beta - beta* ⊙ beta*||_2^2. This is simply because for Gaussian x, the expectation of <v, x>^2 equals ||v||_2^2. Now I'm going to claim that you have uniform convergence for sparse beta—but not over the entire space, because we're overparameterized. If you had uniform convergence over everything, there would be no implicit regularization effect at all; that would just be the classical theory from the first part of the course. But restricted to sparse beta, uniform convergence holds. Let me build towards this. First, a claim: with high probability over the choice of data, if n is at least roughly O-tilde of r / delta^2, then for every v whose support has size at most r—equivalently, whose 0-norm is at most r—the empirical average (1/n) sum_i <v, x_i>^2 is within a (1 ± delta) factor of ||v||_2^2. Why do we care about this? Because the population risk has exactly this form: it's an expectation of <v, x>^2, whose value is the 2-norm of v squared, and the empirical risk is the corresponding empirical average. So we're saying the empirical version of <v, x>^2 is very close to its population value, but only for sparse v. If n were infinite, or close to it, you'd expect this for every v just by the law of large numbers and standard concentration inequalities; the finesse here is that you only have n on the order of the sparsity r—not even on the order of d—and you only ask for the statement over sparse v. This condition has a name that's useful to know, even though we won't lean on it heavily: if the x_i's satisfy condition (3), we say they satisfy the (r, delta)-RIP condition, where RIP stands for Restricted Isometry Property. It's called "restricted" because you restrict to sparse vectors v; without the restriction it would be an isometry-type condition, saying that the x_i's spread their energy roughly equally across all directions.
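A rough empirical check of this restricted concentration phenomenon: the sketch below samples random r-sparse directions v and compares the empirical average of <v, x_i>^2 with ||v||_2^2, even though n << d. It only samples random sparse v rather than taking a supremum over all of them, so it is weaker than the actual claim; all the sizes are illustrative.

```python
import numpy as np

def check_sparse_concentration(n=60, d=200, r=3, trials=200, seed=0):
    """Check that (1/n) * sum_i <v, x_i>^2 is close to ||v||_2^2 for r-sparse v, with n << d."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))           # x_i ~ N(0, I_d)
    worst = 0.0
    for _ in range(trials):
        v = np.zeros(d)
        support = rng.choice(d, size=r, replace=False)
        v[support] = rng.standard_normal(r)   # random r-sparse direction
        ratio = np.mean((X @ v) ** 2) / np.dot(v, v)
        worst = max(worst, abs(ratio - 1.0))
    return worst                              # plays the role of delta

print(check_sparse_concentration())
```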
Right. If you required this for every v, then equation (3) would be equivalent to saying that for every v, (1 - delta) v^T v <= v^T (1/n sum_i x_i x_i^T) v <= (1 + delta) v^T v; that is, in the PSD order, (1 - delta) I <= (1/n) sum_i x_i x_i^T <= (1 + delta) I. So requiring it for every v says the empirical covariance of the x_i's is close to the identity. But we are not requiring it for every v, and it cannot be true for every v when you don't have enough data: with n < d data points, the matrix (1/n) sum_i x_i x_i^T has rank at most n < d, so it's not even full rank—there's no way it can be close to the identity. But if you only evaluate the quadratic form on sparse vectors v, then the matrix effectively looks like the identity. That's what this condition is saying. OK, and once you have this claim, you get uniform convergence for sparse beta. This is because L-hat(beta) = 1/(4n) sum_i <beta ⊙ beta - beta* ⊙ beta*, x_i>^2, which is exactly of the form (1/n) sum_i <v, x_i>^2 with v = beta ⊙ beta - beta* ⊙ beta*. And this v is sparse if beta is sparse: beta* is r-sparse by assumption, so if beta is also r-sparse, the whole thing is at most 2r-sparse. Then the claim says this is close to 1/4 ||beta ⊙ beta - beta* ⊙ beta*||_2^2, which equals L(beta). So for sparse beta you have uniform convergence, but not over the entire space. You can also get uniform convergence for the gradient if you need it—the empirical gradient concentrates around the population gradient for sparse beta; I'll come back to that later. However, on the other hand, there exist dense beta such that, for example, L-hat(beta) = 0 but L(beta) is much larger than 0. These are overfitting solutions—places where the training and test losses are not similar at all. But those beta are dense. So the question is: why does the algorithm find a sparse solution rather than a dense one, given that the dense ones don't have this nice property? The main intuition is the following.
We've done quite some preparation, so here is the main intuition for what we believe is happening. Define X_r to be the set of sparse vectors: all beta such that beta is r-sparse. Picture the whole parameter space, with the origin 0 somewhere in it, and inside it this family X_r of sparse vectors. Inside X_r, everything behaves nicely: the training and test losses are basically the same up to small error, and likewise the gradients—the gradient of L-hat and the gradient of L are similar. What happens is that you start from somewhere very close to 0 (you can't start exactly at 0 only because of the saddle point, which is not very important) and run gradient descent on the empirical loss L-hat(beta). Because of uniform convergence, you're basically doing the same thing as gradient descent on the population loss L(beta), as long as you don't leave X_r. If you leave it, all bets are off; but if you don't, you're fine. Now consider the alternative world where you run gradient descent on the population loss—call that the purple trajectory, versus the black trajectory for the empirical loss. It turns out that the population trajectory reaches beta*, which sits on the boundary of this set, and along the way it never leaves X_r. And because the black trajectory is similar to the purple one as long as both stay in X_r, and the purple one never leaves X_r, the black trajectory also converges to beta*. Does that make sense? The purple one is the population trajectory, the black one is the empirical trajectory; they're similar inside X_r, we know nothing about the outside world, and since the purple one never leaves the set, the black one shouldn't leave either, and should stay similar to the purple one. For contrast, suppose the purple trajectory looked different: it follows along for a while and then leaves the set. Then you'd lose control—at the beginning you track the purple trajectory, but once you leave the set, all bets are off and you have no control anymore. That alternative turns out not to be what happens. What happens is that the purple trajectory stays in X_r for a long time, until it reaches beta*, and then it stays at beta*. And inside X_r everything behaves nicely: there is only one global minimum there, which is beta*, and nothing else. Outside X_r, there are a bunch of other things—quite a lot of overfitting solutions, solutions that make your training loss 0. There are many of them, but you never even get close to those places, because the black trajectory is imitating the purple one,
and the purple one didn't go to those places, so the black one doesn't go there either. That's the intuition for why this works. Any questions? [INAUDIBLE] beta doesn't leave the [INAUDIBLE]? Why doesn't the purple trajectory leave the set? Yeah—I didn't give a justification for that; it's something we're going to prove. And I don't think it's a deep property of this particular problem; once you see the proof it's not that surprising. Gradient descent is a local search algorithm: starting from near 0, you gradually search your neighborhood until you find a global minimum, so it tends to go fairly straight to the closest point rather than taking a circuitous route. But the real proof has to go through the math, yeah. [STUDENT] My other question is, the initialization scheme we described before isn't in X_r, is it? Yeah, that's a great question. The initialization alpha times the all-ones vector, strictly speaking, is not in X_r. I get asked this question many times, and the right way to think about it—I have a remark about this later, but you asked, so let me answer now—is that of course it's not exactly in the sparse set, but it's close. Close in what sense? Close in the sense that alpha times the all-ones vector is very close to 0, and 0 is in the set. That's the property we're going to use. So yes, you're right that we can never say we're exactly in X_r; we'll only say we're in a small neighborhood of X_r, with an error that depends on alpha. That's why alpha has to be very small. In some sense you'd really like to initialize at 0—it just happens that 0 is a saddle point, which is unfortunate, so you perturb it a little bit. [INAUDIBLE] The question is whether this property has anything to do with the positivity of beta. I don't think so. Are you asking about beta* being positive, or the variable beta? Beta star. Right, we assumed beta* is positive, but no matter whether beta* is positive, beta* squared is always positive, and if you initialize positively, the iterates stay positive, so you effectively learn the absolute value of beta*—which is not that different from learning beta* itself. If you don't restrict beta* to be positive, you can't claim you recover beta*, only its absolute value, but the picture and the intuition stay the same. [INAUDIBLE] So the question is whether we really have to initialize at exactly alpha times the all-ones vector. The answer—great question—is no, you don't have to. The only thing you need concerns the initialization beta_0, which is a vector.
I think you only need to make sure every entry of it is very small—that its infinity norm is at most something like alpha. You can even initialize some entries negatively; those entries will then converge to the negative value instead, but the sign doesn't really matter, since the model only depends on beta ⊙ beta. I'm using alpha times the all-ones vector only for convenience, because it makes the proof cleaner. Given this plan, it's natural to start by analyzing the population trajectory—the purple one—and then argue that the black one stays close to it. So let's start with the population trajectory; you can think of this as a warm-up, and in some sense also a sanity check for the whole approach. Let me state the theorem formally, though you can probably guess it: GD on the population loss converges to beta* in O(log(1/(epsilon alpha)) / eta) iterations, with epsilon error in L2 distance. The formal statement matters less than the proof, so let's see how the proof goes. It's somewhat brute force: you literally control what each coordinate does, very explicitly. That explicitness is actually a weakness in some sense. It's great for this problem, but hard to extend—that's a general phenomenon: a very strong, very explicit analysis of a toy case is not necessarily a good thing, because its applicability to broader cases becomes a problem. In my opinion this is probably the main reason we can't extend beyond this simple quadratic parameterization. There is an extension to the matrix case, where you replace the vectors by matrices, but nothing fundamentally beyond that. Anyway, let's do the analysis. Proof sketch: first compute the gradient. L(beta) = 1/4 ||beta ⊙ beta - beta* ⊙ beta*||_2^2, and the gradient with respect to beta is nabla L(beta) = (beta ⊙ beta - beta* ⊙ beta*) ⊙ beta. You can verify this with scalars; the vector version is just the sum over coordinates, because the objective decomposes across coordinates and each term is a simple chain rule. Notice that everything is multiplied by beta—the gradient always carries a factor of beta—which is why the gradient at 0 is 0. Now the update: beta_{t+1} = beta_t - eta (beta_t ⊙ beta_t - beta* ⊙ beta*) ⊙ beta_t. Everything here is d-dimensional, but you can really view it as d separate updates, one per coordinate, because the coordinates have no interaction with each other. In coordinates: beta_{t+1,i} = beta_{t,i} - eta (beta_{t,i}^2 - (beta*_i)^2) beta_{t,i}. Every coordinate is doing its own separate thing.
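Since the population update decouples across coordinates, it is easy to simulate. Here is a small sketch (step size, horizon, alpha, and the toy dimensions are illustrative) in which the support coordinates grow to 1 while the others stay near alpha.

```python
import numpy as np

def population_gd(beta_star, alpha=1e-6, eta=0.1, T=2000):
    """GD on the population loss; each coordinate evolves independently:
       beta_i <- beta_i - eta * (beta_i^2 - beta*_i^2) * beta_i."""
    beta = alpha * np.ones_like(beta_star)
    for _ in range(T):
        beta = beta - eta * (beta ** 2 - beta_star ** 2) * beta
    return beta

beta_star = np.zeros(10); beta_star[:2] = 1.0   # toy example with r = 2
beta_T = population_gd(beta_star)
# Expect: beta_T close to 1 on the support, and close to alpha (tiny) elsewhere.
print(beta_T)
```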
The different coordinates differ only through their target beta*_i; otherwise they all follow the same dynamics. When i is in the support S of beta*, the update is beta_i <- beta_i - eta (beta_i^2 - 1) beta_i (I drop the time index for notational simplicity, and use beta*_i = 1 on the support). When i is not in the support, the update is beta_i <- beta_i - eta beta_i^3. Intuitively this all makes sense. In the first case, if beta_i is between 0 and 1, then beta_i^2 - 1 is negative, so the update increases beta_i: it pushes beta_i toward 1 as long as it hasn't reached 1 yet. The second update does the reverse: as long as beta_i is bigger than 0, it decreases beta_i toward 0. And that makes sense, because 1 is beta*_i in the first case, and 0 is beta*_i in the second. Now let's do a more detailed calculation for each case. Case one (i in S): the update increases beta_i until it reaches 1, and there are two regimes. First regime: suppose beta_{t,i} <= 1/2, so you're at most halfway done. Then beta_{t+1,i} = beta_{t,i} - eta (beta_{t,i}^2 - 1) beta_{t,i} = beta_{t,i} (1 + eta (1 - beta_{t,i}^2)). So you multiply beta_i by a factor bigger than 1; how much bigger depends on beta_i itself. Since beta_{t,i} <= 1/2, we have 1 - beta_{t,i}^2 >= 1 - 1/4 = 3/4, so beta_{t+1,i} >= beta_{t,i}(1 + (3/4) eta): exponential growth. Second regime: if beta_{t,i} is already at least 1/2, the growth rate may slow down, because when beta_i is close to 1 the factor 1 - beta_i^2 is close to 0. But instead we can track how far we are from the target 1. The recursion is 1 - beta_{t+1,i} = (1 - beta_{t,i}) - eta (1 - beta_{t,i}^2) beta_{t,i}. Using beta_{t,i} >= 1/2, this is at most (1 - beta_{t,i}) - (eta/2)(1 - beta_{t,i}^2), and factoring out (1 - beta_{t,i}) gives (1 - beta_{t,i})(1 - (eta/2)(1 + beta_{t,i})).
This step may feel a bit unnatural, but if you look at the final target it's actually not hard to guess the intermediate steps. Now use the fact that beta_{t,i} >= 0 to get 1 - beta_{t+1,i} <= (1 - beta_{t,i})(1 - eta/2). The point is that in this regime you're no longer growing exponentially; instead, your distance to 1 shrinks exponentially fast. So the dynamics have two regimes: when beta_i is small, it grows very fast; once it gets bigger, the growth rate slows, but it converges to 1 exponentially fast. Combining the two regimes—and noting that the update also maintains beta_i <= 1: if you're below 1 before the step, you stay below 1 after—the behavior summarizes as follows. In roughly log(1/alpha)/eta iterations you finish the first regime: beta_{t,i} grows from alpha to 1/2 exponentially fast (technically there's also a factor of 2 inside the log), with learning rate eta. This is because you need the growth factor (1 + (3/4) eta) raised to the power t_1 to be at least (1/2)/alpha, and solving gives t_1 on the order of log(1/alpha)/eta. Then, in roughly log(1/epsilon)/eta further iterations, beta_{t,i} converges to within epsilon of 1: you start with error 1/2 and shrink it by a factor (1 - eta/2) per step, so you pay about that many iterations. Does this make sense? There is one small step I glossed over, about converting growth factors into iteration counts: if you want (1 + eta)^t to exceed some number R, then t needs to be at least about log(R)/eta. This is something burned into my head, but you can derive it yourself. OK, cool. That's what happens on the coordinates that should converge to 1. For case two you do the same kind of thing; I won't bore you with all the derivations, and they're easier anyway, since you're just arguing that beta_i decreases at this rate. Actually, if you look at the update literally, it says beta_i eventually decreases to 0, but we only need something weaker. I also have a small claim in my notes here which is mainly useful for the empirical case; I'm not sure I'll get to it, so I'll skip that part. At least it's easy to see that beta_i keeps decreasing.
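To restate the case-one calculation compactly (this is only a summary of the bounds derived above, writing b_t for the support coordinate beta_{t,i}, with beta*_i = 1):

```latex
b_{t+1} \;=\; b_t\bigl(1 + \eta\,(1 - b_t^2)\bigr)
\;\;\Longrightarrow\;\;
\begin{cases}
b_{t+1} \;\ge\; b_t\bigl(1 + \tfrac{3}{4}\eta\bigr), & b_t \le \tfrac12 \quad (\text{exponential growth}),\\[4pt]
1 - b_{t+1} \;\le\; (1 - b_t)\bigl(1 - \tfrac{\eta}{2}\bigr), & \tfrac12 \le b_t \le 1 \quad (\text{exponential contraction}).
\end{cases}
```

Solving $(1+\tfrac{3}{4}\eta)^{t_1}\,\alpha \ge \tfrac12$ and $(1-\tfrac{\eta}{2})^{t_2}\cdot\tfrac12 \le \epsilon$, and using $\log(1+c\eta)\approx c\eta$ for small $\eta$, gives $t_1 \gtrsim \log(1/\alpha)/\eta$ and $t_2 \gtrsim \log(1/\epsilon)/\eta$, matching the iteration counts above.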
Back to case two: if you start at alpha, you stay smaller than alpha. That's easy to see, and it's enough for the population case. So the conclusion is: you converge to something close to beta* in a number of iterations that is logarithmic times 1/eta, all entries stay at most 1, and the small entries never grow. At any time, beta_t looks like this: the entries in S are growing, and all the entries in the complement of S stay below alpha forever. So beta_t is always approximately r-sparse—it has only r large entries and everything else is very small—which means it always stays approximately inside X_r, precisely because the small entries stay small. Now let's talk about the empirical case a little. The full analysis probably won't fit in 15 minutes, but I can give you the idea, and I'll only do the case r = 1, because r > 1 is a bit more complicated. So here is a weaker and simplified version of the theorem: for some small delta, with r = 1, you essentially only need a logarithmic number of examples, and GD on L-hat converges within the stated range of iteration steps, with the recovery error bounded by a small quantity related to delta (and hence to the number of examples), rather than going to zero. Why is this weaker than the theorem I stated before? Before, the error could be driven to 0 by taking alpha small enough; here we only prove an error bound that depends on the number of examples. That's just a technicality: to prove the error goes to 0, you have to do extra work, which is probably too much for this course. How do we prove this? The proof idea is pretty intuitive given the figure we drew. Step one: show that L-hat(beta) is close to L(beta) for every beta that is approximately sparse—approximately, because, as we discussed, the iterates are never exactly sparse. That part is relatively easy. Step two: show that the iterates beta_t of the empirical trajectory never leave X_r significantly. How do you show step two? You basically want the error between the two trajectories not to blow up. What does that mean? Let me draw something: you want to show that the two trajectories stay close to each other forever.
You have the purple trajectory, which is gradient descent on the population loss, and the black one, which is gradient descent on the empirical loss. After the first step you already have some error, so from then on the two trajectories are no longer doing the same thing: initially you take the gradient at the same point, but now the purple one takes its gradient at one point and the black one at a different point. So there are two sources of error: the gradients themselves differ (empirical versus population), and you are also evaluating them at different points, which can introduce a bigger error, which in turn introduces an even bigger error at the next step. If you're not careful, it's possible that eventually one trajectory goes one way and the other goes a completely different way, because the error keeps compounding. So controlling how the error changes is the key part. This boils down to a lot of calculations that look, at least on the surface, pretty boring; to do them well you have to understand what each term means, and it does require extra work. At the first level, this whole thing is a simplification of a paper I wrote a few years back, and when we did it, the first thing we tried was just doing the calculation: figure out which term is problematic, which term may cause a bigger blow-up, focus on that term, understand it a little better, and maybe devise some inequalities. Below that level it becomes quite technical. I'll spend about five more minutes on one piece of it. One thing we realized—and this is actually a semi-conceptual point—is that to control the error, it helps to represent the iterate in a convenient way. What does that mean? We've already assumed r = 1, so let's take beta* = e_1, the vector (1, 0, 0, ...); we want to show the iterates converge to this vector. The useful step is to write beta_t = r_t e_1 + zeta_t, explicitly as a multiple of beta* plus an error vector zeta_t. Geometrically, beta* lies along a line through the origin, you start from near 0, and you ask how far you are from that line: that distance is zeta_t, and your position along the line is r_t e_1; that's how you represent where you are at time t. The plan is to show that r_t goes to 1—because eventually you want to reach e_1—and that the error term zeta_t stays small; I think we prove it stays below O(alpha) for all t.
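Here is a small NumPy sketch of that decomposition for the r = 1 case: run GD on the empirical loss with beta* = e_1 and track r_t (the first coordinate) and the size of zeta_t (the remaining coordinates). The sizes and hyperparameters are illustrative, and this is just a numerical illustration of the claim, not part of the proof.

```python
import numpy as np

def empirical_gd_decomposition(n=50, d=200, alpha=1e-6, eta=0.05, T=5000, seed=0):
    """GD on the empirical loss with beta* = e_1; track beta_t = r_t * e_1 + zeta_t."""
    rng = np.random.default_rng(seed)
    beta_star = np.zeros(d); beta_star[0] = 1.0
    X = rng.standard_normal((n, d))
    y = X @ (beta_star ** 2)
    beta = alpha * np.ones(d)
    for _ in range(T):
        resid = X @ (beta * beta) - y
        beta = beta - eta * (X.T @ resid) * beta / n
    r_t, zeta_inf = beta[0], np.max(np.abs(beta[1:]))
    return r_t, zeta_inf   # expect r_t near 1 and zeta_inf tiny (order alpha) if the analysis applies

print(empirical_gd_decomposition())
```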
Then you have to derive recursions for r_t and zeta_t, and when you derive them, you keep in mind what happens in the population case. For example, the recursion for r_t looks like r_{t+1} = r_t - eta (r_t^2 - 1) r_t minus some term that depends on zeta_t. I have all of these formulas written in my notes, but I don't want to show every detail. (In my notes the time index is a superscript; here I'll keep it as a subscript—it's the same thing.) If you look at the first part of this recursion, it's exactly the update we had before for a coordinate where the entry of beta* is 1: replace beta_i by r_t and you get the same formula. So that part is what the population gradient does, and we've already analyzed it. The only new thing to handle is how the error term affects you, and you inductively show the error is small. Under the assumption that the error is small, the update for r_t behaves basically like the update for beta_t that we analyzed before. That deals with r_t. But how do we know zeta_t stays small? That's more complicated, because zeta_t also has its own recursion. I don't even see a simple way to write it; it's something like zeta_{t+1} = zeta_t - rho_t ⊙ zeta_t for some vector rho_t that I'm not going to define. This is somewhat similar to the recursion for the coordinates i not in S: recall that was beta_{t+1,i} = beta_{t,i} - eta beta_{t,i}^3, which you can read as beta_{t+1,i} = beta_{t,i} - eta (beta_{t,i}^2 - 0) beta_{t,i}. If you match the terms, zeta_t matches beta_{t,i}, and rho_t roughly matches eta beta_{t,i}^2—to some extent, not exactly; there's no way to match everything exactly, but you use the beta update as a reference. What doesn't match, you handle with some kind of concentration argument showing the terms are similar; exactly which concentration depends on the exact terms. So under the hood you relate the zeta_t recursion to the beta_t recursion, and since we already proved (easily) that the small coordinates of beta_t don't grow, you can show that zeta_t doesn't grow eventually either. I think that's pretty much the best I can do in a short amount of time; the details are in the notes. Any questions? The [INAUDIBLE]? Sorry. Yeah. Yes.
Yes—it should be r_{t+1}; I changed the superscript to a subscript and forgot. Yeah, thanks. [INAUDIBLE] So the question is: in this lecture and the last one, we saw two examples where gradient descent converges to the solution closest to the initialization—so why, empirically, do you still have to use explicit regularization like weight decay? I would argue that empirically, weight decay is actually not very strong; it's not even clear whether weight decay is really acting as a regularizer. With the same weight decay you can still memorize the training data—even with random labels. Permute the labels arbitrarily so there is no pattern, train your network with the same weight decay, and you still find a zero-error solution. That suggests weight decay is not doing much regularization, at least not as much as the theoretical setting would say. By contrast, if you used a strong regularizer that genuinely forces a small-norm solution—like finding the minimum norm solution in the cases we analyzed—you could not fit random labels anymore. Another tricky thing is that weight decay in practice has other effects; for example, it interacts with how batch normalization works. With batch normalization the model becomes scale-invariant—if you multiply all the weights by 2, technically you don't change anything—but you still want to regularize that somehow, because in certain cases it changes the optimization. So I don't have a very concrete answer—it's a good question—but what we believe is that weight decay is not actually doing much work as a standard norm regularizer, and we suspect it has some other effects. Also, sometimes weight decay isn't even important: in certain cases you can remove it and still get pretty good results. That's the best we know for now. Any other questions? OK, sounds good. I guess I will see you on Wednesday.
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_Domain_Generalization_l_2022_I_Lecture_14.txt
OK. Hello, everyone. Welcome to this lecture. I'm Huaxiu, a postdoc in Chelsea's lab, and today I'll give a lecture about domain generalization. Before we start, some logistics: the project milestone is due on Wednesday, and homework four, which is an optional homework, is due on Monday. Here is the plan for today. I will introduce a new concept called domain generalization. I'll first go through the problem statement and formulation, and then introduce two representative families of algorithms: the first adds explicit regularizers to solve the domain generalization problem, and the second leverages data augmentation to handle it. The goals for this lecture are that you understand the intuition behind domain generalization and its problem formulation, and that you become familiar with the mainstream domain generalization approaches, namely the regularization-based and the augmentation-based approach. Let's first recap domain adaptation from the last lecture. In domain adaptation, we aim to use training data from the source domains to make a model perform well on the target domain. This is a form of transfer learning, but we can also access target domain data during training, so it is transductive learning: we have labeled source domain data and unlabeled target domain data, and we want to perform well on the target domain. There are two assumptions for domain adaptation. The first is that the source and target domains differ only in the input distribution—the conditional distribution p(y|x) is the same between the source and the target domain. The second is that there exists a single hypothesis with low error on both the source and target domains. We also revisited the fact that a domain is a special case of a task. A task is composed of three components: the marginal (input) distribution p_i(x), the conditional distribution p_i(y|x), and the loss function. In multi-task learning or meta-learning, all three components can change across tasks, but for domains, only p(x) can change. Now, can we always access unlabeled data from the target domain? In some real-world applications we cannot, for the following two reasons. First, sometimes we want real-time deployment and do not have time to collect enough target domain data to do adaptation—whether domain adaptation, or collecting labeled data for meta-learning or few-shot adaptation. Second, obtaining the target data may be restricted by privacy policies. Let me give an example for each reason. For real-time deployment: suppose we want to train an autonomous driving system on three types of roads and then deploy it on a new road—for example, driving at night—where we need to deploy in real time and do not have sufficient time to collect enough data.
The other example is about privacy concerns. There are many policies about privacy; the most famous one is the General Data Protection Regulation, from Europe. Under such policies, we cannot share data between different institutions or hospitals. For example, suppose we want to build a disease prediction model trained on three hospitals—hospital one, two, and three—and then deploy this model to a new hospital. In that case, we cannot access the new hospital's data for adaptation due to privacy concerns. Based on these two motivations, let me explain why we need domain generalization and give a formal problem formulation. In domain generalization, assume we have a bunch of source domains—for example, three domains: clipart, painting, and sketch—and we want to recognize different objects in each domain. We train a model (a neural network) on these source domains, and then deploy it to an unseen target domain—here, a domain of real images. We want to extract the common knowledge from the source domains so that the model performs well on the target domain. Mathematically, the domain generalization problem is: given a set of source domains p_1(x, y), ..., p_n(x, y), we aim to do well on an unseen target domain p_t(x, y) without accessing any data from it. Similarly, there are two common assumptions. The first is that all domains differ only in the input distribution: the conditional distribution p(y|x) is the same across domains. In domain adaptation, we only required the conditional distribution to be the same between the source and the target domain; here it is exactly the same across all domains, so only p(x) can change. The second assumption is that there exists a single hypothesis with low error on all domains, which guarantees we can learn a model that performs well across domains. This again reflects the fact that a domain is a special case of a task, and it is how we formulate the domain generalization problem. Based on this definition, let me compare meta-learning, which we covered in earlier lectures, with domain generalization. Meta-learning is like transfer learning: we want to transfer knowledge from many source tasks. Given data from tasks 1 to n, we want to solve a new task t more quickly, proficiently, and stably. Domain generalization is a special case of this: given data from domains d_1 to d_n, we aim to perform well on a new domain d_t. But there are two differences. First, only p(x) changes across domains, whereas in meta-learning all three components of a task can change. Second, in domain generalization we want to generalize directly to the new domain, without doing any kind of adaptation. That's the comparison between meta-learning and domain generalization.
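Before moving to the comparison with domain adaptation, here is one compact way to write down the setup just described. This is a hedged formalization—the notation is mine, not necessarily the slides': train on the pooled source domains and hope the learned predictor transfers to an unseen target whose input distribution differs.

```latex
\text{Given sources } p_1(x,y),\dots,p_n(x,y) \text{ with a shared conditional } p_k(y\mid x)=p(y\mid x)\ \forall k:\\[2pt]
\hat{f} \;\in\; \arg\min_{f}\; \frac{1}{n}\sum_{k=1}^{n} \mathbb{E}_{(x,y)\sim p_k}\bigl[\ell(f(x),y)\bigr]
\qquad\text{(pooled ERM baseline)},\\[2pt]
\text{goal: small } \mathbb{E}_{(x,y)\sim p_t}\bigl[\ell(f(x),y)\bigr]
\text{ for an unseen target } p_t \text{ with } p_t(y\mid x)=p(y\mid x) \text{ but a new } p_t(x).
```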
The second comparison is between the domain adaptation that we learned in the last lecture and domain generalization. In domain adaptation, we can access labeled data from all source domains, and we can also use unlabeled data from the target domain; we aim to make the model perform well on the target domain. In this case we can access the target data during the training process: even though it is unlabeled, it still carries some information about the target domain. In domain adaptation there is typically only one source domain; people can also use more source domains, but we can rely on just one source domain to achieve successful domain adaptation. And the model trained by adaptation is specialized for the target domain, meaning we only care about performance on the specific target domain that we can access, rather than considering the problem in a more general way across a bunch of domains. So domain adaptation is a kind of transductive learning setting. In domain generalization, these things are a little different. In domain generalization, we are given labeled data from a set of source domains, p1(x, y) to pn(x, y), and we aim to make the model perform well on target domains, actually a bunch of target domains. We cannot access the test data during the training process; this is the first difference. The second is that domain generalization usually needs more than one source domain: when you have a bunch of source domains, you can capture the common knowledge behind them and then generalize that common knowledge to benefit performance on the target domain. The third difference is that the model can be applied to all domains, including the source domains, the target domains, and even some domains that we did not capture. This is why we need to learn well-generalized knowledge that can transfer to a bunch of domains. These are the key differences between domain adaptation and domain generalization; the point for this lecture is that domain generalization is an inductive setting. Based on this definition and these comparisons, I would like to show you some real-world applications of domain generalization. The first application is about sustainability: we want to do wildlife recognition, recognizing different animals at different locations. Here we have 245 locations, and we want to train a model on these locations and generalize it to new locations. The next example you may be familiar with, because in the last lecture we also saw the adaptation version of this tissue classification example: we aim to classify whether a tissue image is normal or tumor. Basically, we learn a model from a bunch of hospitals and then deploy this model to some new hospitals, here hospital 4 and hospital 5. In the domain adaptation version, instead, we probably only had two hospitals, and we wanted to learn some common knowledge from those two hospitals. The third application is molecule property prediction, which is quite important in the drug discovery field: we want to predict a property such as toxicity [AUDIO OUT] of a given small molecule. We train a model on molecules with different scaffolds, and then we aim to generalize the common knowledge to make it work on unseen scaffolds. And the last one is about code completion.
This is also a very important application in the field of programming languages. We have a bunch of repositories, we train a model to predict the next tokens in source code, and then we aim to generalize this model to some test distribution. OK, so that is the basic introduction of what domain generalization is, with some comparisons and applications. Next, I will introduce some specific algorithms for domain generalization. The first family adds explicit regularizers to handle this problem. Before we dive into specific algorithms, let's think about one question: how do we learn such a generalizable representation? To answer this, it is natural to first ask another question: why do machine learning models fail to generalize? Here is a very simple example. Our goal is to classify dog versus cat, and there are two domains: domain one is water, and domain two is grass. We actually have four groups: dog in water, cat in water, dog in grass, and cat in grass. Dog in water and cat in grass are the two majority groups, while cat in water and dog in grass are the minority groups. We train a model on these two source domains, and then we deploy the trained model to a new domain, for example a dog in the forest. Our question is: is this a dog? A human can easily recognize this as a dog, but for the computer it is very hard, and the computer will make the wrong prediction. Why does this happen? Look at the training data: the dogs are usually in the water, and the cats are usually in the grass. The new image has an environment similar to grass, and the grass background is spurious information: the model spuriously associates the cat label with the grass background. So when the model sees a similar environment, it makes the wrong prediction. Our goal is to cancel out such spurious information. To do this, we aim to train a neural network to learn domain-invariant features. This is a concept I want to highlight here: domain-invariant features are features learned by the neural network that do not change across different domains. If we can learn such domain-invariant information, for example associating the animal itself with the label, the model can make the right prediction. Based on this idea, I will go into the details of regularization-based methods. The key idea of regularization-based methods is to use a regularizer to align representations across different domains and thus capture a domain-invariant representation. Let's go back to this example. We have two domains, two classes, and classifiers for cat and dog. In this case we can extract the representations; for domain one, the representation is composed of two pieces of information, animal and water, since it only captures the major information in these images. Similarly, the representation for domain two is composed of animal and grass.
We hope to align these two representations, that is, to force them to be very similar. The simplest way for the network to achieve a small alignment loss is to keep only the animal information, because the animal information is shared across the different domains. Based on this example, let me mathematically define a generic loss function. This is the typical loss function for a regularization-based method. The first term is the label classification loss; for example, here we classify dog versus cat, and we average this loss over all training examples. Then we add an explicit regularizer to learn the domain-invariant representation. How to define such a regularizer is the key part of regularization-based methods. Before we dive into specific algorithms, I will first recap the domain adversarial training for domain adaptation that we learned in the last lecture. The key idea is that the prediction must be made based on features that cannot discriminate between the domains. For example, given one input image x, we feed it to a feature extractor and get the features. Then we have two branches. The first branch is the label predictor, where we aim to make accurate label predictions; because in domain adaptation we only have labels for the source domains, only data from the source domain is fed to this branch. The second branch is the domain classifier, where we aim to make it impossible to predict the domain of the image from these features; in that case, the features we learn are domain-invariant. Data from both the source and target domains is fed into this branch, because we want to classify whether the features come from the source domain or the target domain. Now I have a question. This was the simple version of adversarial training in domain adaptation; does anyone have ideas on how to use domain adversarial training in the domain generalization setting? Any volunteers? One [INAUDIBLE] is to predict which domain each image is from? You want to predict its corresponding domain, you mean? Yeah, that's right. So in the domain generalization setting, we still have the label predictor and the domain classifier, but instead of feeding only source domain data to the label predictor, we feed data from all source domains into the label predictor to predict the corresponding labels. Similarly, data from all source domains is fed into the domain classifier, and instead of classifying whether an example comes from the source or target domain, which is not defined in the domain generalization setting, we predict the domain label for every image. Based on this, I will now mathematically define the label predictor and the domain classifier and their corresponding losses.
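As a reference point for those definitions, the generic regularization-based objective described above can be written roughly as follows; this is a sketch, and the exact form and notation on the slides may differ.

```latex
\mathcal{L}(\theta)
\;=\;
\underbrace{\frac{1}{n}\sum_{i=1}^{n} \ell\big(f_\theta(x_i),\, y_i\big)}_{\text{label classification loss, averaged over examples}}
\;+\;
\lambda \cdot \underbrace{R\big(\{f_\theta(x)\}\big)}_{\text{explicit regularizer encouraging domain-invariant representations}}
```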
For the label prediction loss: given an input x, we extract features f_theta(x), apply the label predictor g to get the prediction, and sum the loss over all examples; we optimize the parameters of the label predictor and the feature extractor, and here smaller is better, because we want accurate predictions on the source domains. For the domain prediction loss, we also predict the domains, but with respect to the encoder we actually want to maximize this loss: a larger loss means it is harder to distinguish the domain of each input example, and if we cannot distinguish domains, we have learned domain-invariant features that should generalize. Then we can bridge from the generic formulation to the loss for domain adversarial training in domain generalization: the first term is the label classification loss, and the second term is the regularizer we design to learn a domain-invariant representation. The full algorithm has four steps. First, we randomly initialize the encoder, the label classifier, and the domain classifier. Second, we use the domain classification loss L_d to optimize the domain classifier. Third, based on this, we update the label classifier and the encoder by considering both the label classification loss and the domain classification loss. Fourth, we repeat steps two and three until convergence. In the second step, are we trying to optimize the domain classifier to perform very well or very poorly? For adversarial training you want a very good domain classifier, right, so that it forces the representation to be domain-invariant? Yes, that's right: in step two we minimize L_d with respect to the domain classifier's own parameters, so we learn a good domain classifier; it is the encoder, in step three, that tries to maximize this loss so that the features fool the domain classifier.
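To make this concrete, here is a minimal sketch of the four-step adversarial procedure just described. It is an illustrative sketch rather than the exact code from the lecture: the dimensions, learning rates, trade-off weight, and the data loader (which is assumed to yield inputs, class labels, and domain labels from all source domains) are all made up for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical module sizes for illustration
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU())   # feature extractor f_theta
label_clf = nn.Linear(128, 10)                             # label predictor g
domain_clf = nn.Linear(128, 3)                              # predicts which of 3 source domains

opt_domain = torch.optim.Adam(domain_clf.parameters(), lr=1e-3)
opt_main = torch.optim.Adam(list(encoder.parameters()) + list(label_clf.parameters()), lr=1e-3)
lam = 0.1  # trade-off weight for the adversarial term (assumed)

for x, y, d in loader:  # `loader` is assumed: inputs, class labels, domain labels from all source domains
    # Step 2: train the domain classifier to predict domains well
    # (minimize L_d with respect to the domain classifier's own parameters)
    feats = encoder(x).detach()
    loss_d = F.cross_entropy(domain_clf(feats), d)
    opt_domain.zero_grad(); loss_d.backward(); opt_domain.step()

    # Step 3: update encoder + label classifier: accurate labels, but fool the domain classifier
    feats = encoder(x)
    loss_label = F.cross_entropy(label_clf(feats), y)
    loss_adv = -F.cross_entropy(domain_clf(feats), d)   # maximizing L_d pushes toward domain-invariant features
    loss = loss_label + lam * loss_adv
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    # Step 4: repeat steps 2 and 3 until convergence
```

In practice this same idea is often implemented with a gradient reversal layer instead of two separate optimizers; the alternating form above simply mirrors the step-by-step description in the lecture.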
Any other questions? Do you think this would be more expensive or less expensive than, say, learning features for each domain, like in a multitask setting, and then conditioning on the domain when extracting features for the label? So you would learn some features for each domain and then condition on those features for the label classification, and you are asking whether that would be equally expressive? You mean using separate encoders, so that every domain has its own encoder, right? Sort of like extracting features for the domain and then creating the label classifier. Possibly; in some cases it is more expressive, in particular when you have a lot of data from every domain, so that you can learn each encoder without sharing it with other domains. But in some cases you cannot collect that much data for every domain, so you would like to share some representations and use a shared encoder. And what happens if you don't have information about the domains, like you have a data set and it's not labeled with domains? Yes, this is a very good question. This is something people also try to learn without the domain information, and there are a few solutions. For instance, you can try to predict the domain information from the data; in some cases, even if you just train ERM, empirical risk minimization, you can check whether the domain information can be predicted well from the data. In other cases, you can look for misclassified examples; these may come from the minority domains, and you can upweight these examples to make it work. Thank you. OK. So my question was about learning a shared representation between the tasks [INAUDIBLE], so you learn domain-specific parameters and a shared feature space; how does that relate, and which would perform better, that or this kind of algorithm? To repeat the question: each domain has its own encoder? You learn domain-specific parameters and shared parameters across the domains, by maximizing information between the different domains present in the samples you have, so you learn a common representation and also have two branches, some domain-specific layers. Yeah, I think that is possibly more expressive. It's the same question, and wouldn't it give you more expressive power at test time, say if you just want to output a particular feature for a particular domain? But at test time you will definitely get a new domain, so you do not have its corresponding branch. We could use the common representation. Yes, in the case where you [INAUDIBLE] the domain-specific branches, we could use the common one, but how do you handle the domain-specific ones? Let me clarify your question on the whiteboard: there is one shared representation, and then you have two branches for two domains; how do you do this at test time? In testing you definitely need to go through some branch, but you do not have the information at test time about which branch to use. I wasn't thinking about that; I was talking about learning a common representation for the samples themselves. Assuming there is a latent space, we could learn a common latent space and a shared latent space, and then use the common latent space to generalize to any domain. Yeah, I think that is what we are doing right now, using a shared encoder for every domain; we have a shared feature encoder that is applied to every domain. [INAUDIBLE] or the latents [INAUDIBLE]. Oh, we currently do not add any conditions, but possibly adding some conditioning to enforce this would be helpful. For multitask learning, the key point is that multitask learning only handles the case where the training domains are exactly the same as the test domains, but here we want to solve unseen domains. OK, any other questions? What if you don't have a bunch of domain-labeled data? I mean, maybe with the cat and dog example earlier, you don't know that grass and water are really two different domains that are going to be a problem until after you try to do it? Yeah.
I think for this problem, as I mentioned, you can try to distinguish the domain information and do domain prediction. In other cases, you can train a model with ERM once and identify the misclassified examples; these may come from the minority domains or the minority groups. OK, let's continue. So adversarial training leverages adversarial optimization to learn domain-invariant features. You may ask, are there other ways to do this? I will introduce one alternative approach called CORAL. The key idea of CORAL is to directly align the representations between different domains using a similarity metric. CORAL stands for Correlation Alignment for Domain Adaptation; although the name comes from domain adaptation, and the method was originally proposed for domain adaptation, it has also been widely used for domain generalization in recent years. Here I will only present the domain generalization version of the algorithm. Assume we have two domains, and in each we want to recognize different objects. We have some shared layers, which form the shared feature extractor, and we feed the features into a classifier to get a classification loss for each domain: domain one has one classification loss, and domain two has another. Then we have the CORAL loss, which directly aligns these two sets of representations. Before we dive into the CORAL loss, let me first give some notation. X1 is the feature matrix of the representations for domain one, with dimension n1 times k, and X2, similarly, is the feature matrix for domain two, with dimension n2 times k, where k is the number of features. We compute the mean feature vector, with dimension 1 times k, over all examples of domain one, and similarly the mean over all examples of domain two. Then we calculate the covariance matrices; the goal of CORAL is to make these covariance matrices similar across domains. Computing a covariance matrix is something we learned in our math courses. Finally, the CORAL loss penalizes the difference between the covariance matrices of the features so that they become very close. The full objective combines the classification loss, the first term, summed over all examples, with the CORAL loss as an explicit regularizer to learn the domain-invariant representation. So the key idea behind CORAL is to make the covariance matrices similar for every domain. Is this extendable to more than two domains? Yes, it is extendable: you can directly add more domains, for example domains 3, 4, and 5, each with its own classification loss, and use pairwise CORAL losses between them. So do we have individual encoders for different domains? Typically, people share the encoder.
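Here is a small sketch of the CORAL loss as just described: compute the covariance matrix of the features from each domain and penalize the squared Frobenius-norm difference. The 1/(4k^2) scaling follows the original Deep CORAL paper; the exact constant used on the lecture slides may differ, and the trade-off weight in the combined loss is an assumption.

```python
import torch

def coral_loss(X1, X2):
    """X1: (n1, k) features from domain 1; X2: (n2, k) features from domain 2."""
    k = X1.size(1)

    def covariance(X):
        mean = X.mean(dim=0, keepdim=True)       # (1, k) mean feature vector
        Xc = X - mean                            # center the features
        return Xc.t() @ Xc / (X.size(0) - 1)     # (k, k) covariance matrix

    C1, C2 = covariance(X1), covariance(X2)
    return ((C1 - C2) ** 2).sum() / (4 * k * k)  # squared Frobenius norm, scaled

# Combined objective (sketch): per-domain classification losses plus the CORAL regularizer
# loss = ce_loss_domain1 + ce_loss_domain2 + lam * coral_loss(feats_domain1, feats_domain2)
```

With more than two source domains, the same function can be applied pairwise and the pairwise terms summed, matching the extension mentioned above.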
Any other questions? One question, in general: how much do the model architecture and the mathematical formulation of the problem matter when approaching this kind of problem? You mean for the generic version of the regularization? [INAUDIBLE] For domain generalization, or for any particular problem of this kind: how much does choosing the model architecture versus the mathematical formulation of the problem matter? So it depends. We can scale up the model architecture, from a ResNet to even a vision transformer; you will have more parameters, stronger expressive power, and the model can represent more information. And the design of the regularizer is also very sensitive in these specific domain generalization problems. With the same model, the same backbone, we can compare different regularizers, and with the same regularizer we can compare different backbones. Maybe I want to understand the limits of each of these: how much does the model architecture selection affect your performance, and how much can the mathematical formulation of the loss term affect the performance in general; how much power does each have? For the comparison between model architecture and loss function: across different data sets, I think increasing the expressive power of the model architecture is not a bad choice; it will generally improve the results. But typically we try to add one regularizer, and the chosen regularizer can affect the performance more; if we pick a bad one, it can even hurt the performance. And if you increase the model's expressive power, the performance of ERM will also increase. [INAUDIBLE] If we can align different orders of the feature statistics, it would be helpful. The simplest option is to directly align the representations themselves, and in these papers they consider the second-order statistics, the covariance. You could go further, but going further has more computational cost. [LAUGHS] So let's continue. For the results, I chose some numbers from OfficeHome, DomainNet, and iWildCam. In OfficeHome, we have four domains; we hold out one domain as the test domain and use the other three domains for training, then generalize the model to the held-out one, and we repeat this process four times so that every domain gets evaluated (a small sketch of this leave-one-domain-out loop appears at the end of this part). We do a similar thing for DomainNet. For iWildCam, as I mentioned before, we basically want to generalize the model to new locations for wildlife recognition. We can see that CORAL sometimes performs well compared with ERM, and sometimes hurts the performance compared with ERM. I think the takeaway is that designing a suitable regularizer is very important. Finally, for this kind of method, let me mention some pros and cons. The good things: first, these methods can generalize to all kinds of data and networks; for example, if we want to transfer to graph data or text data, we only need to change the shared layers, the backbone, for example to a graph neural network or to BERT. There are also some theoretical guarantees; I will not dive into them here, but if anyone is interested in the theoretical results, please email me and I can send you some papers to discuss.
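Before moving on to the drawbacks, here is the leave-one-domain-out evaluation protocol mentioned above written as a simple loop. The domain names match OfficeHome's four domains, but the helper functions `train_model` and `evaluate` are hypothetical placeholders, not code from the lecture.

```python
# Hypothetical leave-one-domain-out evaluation, e.g., for OfficeHome's four domains
domains = ["Art", "Clipart", "Product", "Real"]
results = {}

for held_out in domains:
    train_domains = [d for d in domains if d != held_out]
    model = train_model(train_domains)               # train on the three remaining source domains
    results[held_out] = evaluate(model, held_out)    # test on the unseen held-out domain

avg_accuracy = sum(results.values()) / len(results)  # average over the four held-out runs
```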
But the bad thing is that these regularizers are sometimes too harsh, too constraining on the representations, and we can see that these methods do not work very well on some data sets. Let's look at why the regularizer can be too harsh. Go back to the dog versus cat classification and this loss function: if we directly add an explicit regularizer, it will encourage the internal representation to contain no information about the background. But sometimes the information is mixed: we may also need some background information to do the classification, so removing it entirely can hurt our ability to learn a good domain-invariant representation, or even hurt the expressive power of the neural network. Any questions? OK, so based on this, we can see that in some real-world applications, like iWildCam, CORAL improves the performance, but on some other data sets, like, say, the R1 medical image classification task, CORAL even hurts the performance, as I mentioned before. You may ask, are there approaches that relax the dependence on the regularizer? That brings us to the next part of algorithm design: data augmentation. Before detailing data augmentation, let's first recap spurious correlation. With spurious correlation, our goal is to classify dog versus cat, the background is sometimes spuriously correlated with the animal, and so the model can fail to make the right prediction. However, here is a question: suppose we can collect more data, so we have the grass domain, the water domain, and also a car domain and a keyboard domain, for both dogs and cats, and we still want to recognize this dog. The question is, will the network still associate dogs with the water background from the source domains? Anyone want to answer this? The [INAUDIBLE] cannot associate. [INAUDIBLE] If you still mainly just have dogs on water backgrounds, then it might still associate dogs with the water background, even after you collect more data. Yeah, if we do not consider that scenario. I would consider whether there is a uniform distribution across backgrounds. Across different backgrounds, in the case where we consider all backgrounds with similar weight? Then maybe [INAUDIBLE]. Yeah, that's right: because there are many more backgrounds, the model cannot recognize dogs from the background alone, and in this case it can produce the right prediction. However, there is one challenge: we cannot always collect more data from different domains. So what do we do? We can aim to generate more data and use it to train the model. Here I want to introduce some methods for data augmentation. There are some simple operators that can help us generate more data. In the image domain, for example, given one original image, we can use different operators to generate data, like flipping, rotating, cropping, PCA-based augmentation, edge enhancement, or other methods that generate new images from one image. In the text domain, we can do back-translation: we take one sentence, originally in English, translate it to French, and then translate it back, so the augmented example is slightly different from the original one.
This kind of data augmentation can typically help with domain generalization and benefit the performance. However, these simple operators require knowledge of the problem domain; for example, we need to know whether the data is image data or text data and adopt different augmentation strategies accordingly. So the question is: are there any general approaches? I will introduce one general approach to data augmentation in this lecture, called Mixup. In Mixup, the key idea is to interpolate training examples. Assume we learn a model from our training data, which has n examples of input and label pairs, (xi, yi) for i equal to 1 to n, and we train a classifier on this training data. Mixup aims to replace the original training data with a mixed version: we take two examples, (xi, yi) and (xj, yj), and do a linear interpolation, a convex combination, between xi and xj and between yi and yj. In this way we generate virtual examples, x tilde and y tilde, for both the inputs and the labels. Mixup then uses these generated examples to replace the original examples and uses the new training data to build the classifier. Here is one example of Mixup: we can generate virtual examples between two classes. The first image on the left-hand side is a cat, and the next one is a dog, and we can combine them to generate images in between, from cat to dog. One of these images has around a 70% probability of being classified as a cat and 30% as a dog. This is a very common and useful way to do data augmentation. Any questions about the process of Mixup? In domain generalization, Mixup by itself can improve the performance. For example, consider the tissue classification task from Camelyon and a land type prediction task on the data set called FMoW: given one satellite image, we want to classify the land type, and we want to generalize from some year and region combinations to other year and region combinations. Compared with ERM, Mixup improves the performance. However, it is not always good: on the R1 data set, which is quite challenging, Mixup even hurts the performance compared with ERM. The key drawback of the original Mixup is that it only focuses on data augmentation instead of trying to learn some domain invariance.
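Before going further, here is a minimal sketch of the Mixup operation just described. The lecture does not specify how the interpolation ratio is chosen, so sampling lambda from a Beta distribution here follows the standard Mixup recipe and should be treated as an assumption, as should the alpha value.

```python
import torch
import torch.nn.functional as F

def mixup(x, y, num_classes, alpha=0.2):
    """Interpolate a batch with a shuffled copy of itself.
    x: (B, ...) inputs; y: (B,) integer labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()  # interpolation ratio
    perm = torch.randperm(x.size(0))                              # pair example i with example perm[i]
    y_onehot = F.one_hot(y, num_classes).float()
    x_mix = lam * x + (1 - lam) * x[perm]                         # x tilde
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]           # y tilde (soft label)
    return x_mix, y_mix

# Training then uses (x_mix, y_mix) in place of the original batch, e.g.:
# loss = -(y_mix * F.log_softmax(model(x_mix), dim=-1)).sum(dim=-1).mean()
```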
So the question is, how do we improve the original Mixup? To introduce the next algorithm, I will first give a simple example with spurious correlation; this example will help you understand what happens in the next algorithm. The example is quite similar to the real-world image example, but it is a toy case. In the toy case, we have a lot of green twos and red fives. Our goal is that label one means the digit is smaller than five, and label two means the digit is larger than or equal to five. Then we have a target domain with different colors, for example a green five: the digit is a five, but its background is green. Feeding it to our neural network, the network cannot produce the correct prediction, because during training the model spuriously associated the color with the labels. This example is very similar to the dog and cat classification example I gave before. Based on this example, I would like to introduce LISA, which improves on the original Mixup. The key idea behind LISA is to selectively interpolate examples so as to emphasize invariant information, not just do data augmentation like Mixup does. This is the example we call colored MNIST: we have two majority groups and two minority groups. Building on Mixup, one variant is called intra-label LISA. In intra-label LISA, we aim to interpolate examples with the same label but from different domains. We just add one simple constraint to the original Mixup: the domains must be different, di not equal to dj, but we make sure the labels are the same, yi equal to yj. Look at these five images: the leftmost and rightmost are from the original data set, with lambda equal to zero or lambda equal to one, and the three images in between are generated with mixed background colors. All of these images are associated with the same label, digit larger than or equal to five. So even though we generate a lot of different backgrounds, all of them carry the same label, and in this case the model will eventually ignore the color information, focus only on the digit information, and capture this invariance. Any questions for this? I have a Mixup question in general: have people tried to apply Mixup to the text domain? Yes, I will mention that later. Any other questions? [INAUDIBLE] certain of the labels of the training domains? You mean the domain labels? Yeah. If we do not have domain labels, it is a little bit hard to apply this algorithm, but one simple solution is to only mix examples with the same label, without considering the domain information; in that case you can still get something similar to this variant. Could you do that if the model is confident on some target samples, could we use those as a label? You mean we want to train our model on the target samples? We do inference on the target samples, for which we don't have labels, but the predictions, if those have good certainty, if the model is much more confident on some samples. On the test domain? On the test domain, yeah; could we use them as ground truth? So the question is whether we can use pseudo-labels for some examples in the target domain. That would be closer to a domain adaptation or semi-supervised learning setting, and we will not consider those settings here. Is there data on how this performs out of either domain distribution? So if you then showed a blue five and a blue two, does it work better on those than if you had not done any red-green augmentation? Your question is whether it works if we have blue ones, or any other out-of-distribution color? Yes, it definitely works: for example, if we change the background from green to blue, it can still work very well, because the model will have learned the digit information and will ignore this domain information.
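A small sketch of the intra-label pairing step, as described above: for each example, pick a partner with the same label but a different domain, and then interpolate the pair exactly as in Mixup. The helper name and fallback behavior (keeping the example unpaired when no valid partner exists) are my own illustrative choices, not from the lecture.

```python
import torch

def intra_label_pairs(y, d):
    """For each index i, return an index j with y[j] == y[i] and d[j] != d[i], if one exists.
    y: (B,) integer labels; d: (B,) integer domain labels."""
    n = y.size(0)
    partners = torch.arange(n)  # default: pair an example with itself if no valid partner exists
    for i in range(n):
        candidates = ((y == y[i]) & (d != d[i])).nonzero(as_tuple=True)[0]
        if len(candidates) > 0:
            idx = torch.randint(len(candidates), (1,)).item()
            partners[i] = candidates[idx]
    return partners

# Usage sketch: j = intra_label_pairs(y, d); then mix x with x[j] and y with y[j] as in Mixup.
```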
Another variant of LISA is called intra-domain LISA. In intra-domain LISA, we instead add the constraint at the domain level: we want to interpolate examples with different labels but from the same domain, so di is equal to dj but yi is not equal to yj. In this case, the leftmost and rightmost images are again from the original data set, here a red two and a red five, and we can generate a lot of images in between, each with a different interpolated label. So even though the domain information is the same across all of them, they are associated with different labels; the domain information is not what explains the label changes. This also makes the model ignore the domain information and focus only on the digit information. Any questions for this variant? Let's move on. In practice, LISA aims to combine both of them, because each has its own applicable scope. Intra-label LISA works better when there are more domains, because with more domains you have more combinations to interpolate across, and when the spurious correlation is not very strong; in our practice, intra-label LISA works better in that regime. Intra-domain LISA, instead, works better when the domain information is highly spuriously correlated with the label, for example when the data distribution is quite imbalanced and, say, 99% of the examples are green fives and red twos. In that case intra-domain LISA works better. In practice, LISA uses one hyperparameter, p_select, to determine whether to use intra-label LISA or intra-domain LISA at each iteration. What happens if you cannot apply either of these, say you only have red twos and green fives? Yeah, this is a very interesting question; we have investigated it in another setting. In this case you have what we call an underspecification problem, because the model can infer the labels based on the background information only, or based on the digit information only. We typically want to build a model with two heads, where each head represents one kind of information: the first head may represent the domain information, here the color, and the other head the label information, while sharing some layers. We use some unlabeled data to make the two heads produce quite different outputs. Then when we have a new domain and the model needs to classify the labels, we can collect a few examples to pick which head to use. If you want to see the paper, please email me and I can send it to you. From a previous lecture we saw CycleGAN; would something like that be useful here for data augmentation? You could possibly use CycleGAN here; it is a powerful data augmentation method. You can even use it to disentangle the representations and shift examples between different domains, and it can even improve the performance. But applying CycleGAN would be a little bit more complex and needs more effort to tune all the hyperparameters. Is there a reason you choose either intra-label or intra-domain [INAUDIBLE] in the same iteration? We choose one of them at each iteration because each has its own pros and cons; in practice, we actually choose intra-label LISA more often.
But in some cases, intra-domain LISA also works very well. Any other questions? For intra-domain, does it matter which label you choose? Yes. So how do we decide? You randomly pick the examples; I will show the full algorithm here, so you can take a look at the process. In the full algorithm of LISA, in the first step we randomly initialize the model parameters theta. Then we sample one strategy s from a Bernoulli distribution with the hyperparameter p_select, and we sample a batch of examples B. In this stage we choose either intra-label LISA or intra-domain LISA: if s is equal to zero, we use intra-label LISA, and for every example in the batch, we sample another example that satisfies yi equal to yj and di not equal to dj. Similarly, for intra-domain LISA, for every example in the batch we sample one example that satisfies the condition that the label is different but the domain is the same. Then we interpolate these example pairs and use the interpolated examples to update the model. Finally, we repeat steps three and four several times until convergence. So at every iteration, we choose either intra-label LISA or intra-domain LISA. Could this be extended beyond, say, two classes or two domains? Yes, we can use it with a bunch of domains or classes. [INAUDIBLE] for one label do you take all domains and average over them? You mean for the intra-label one? For the intra-domain one. Oh, for the intra-domain one, if we have more than two domains, then for each example we only pick examples from the same domain, and the only requirement is that the labels are different. Any other questions about the algorithm? Let's move on. Here I chose some results to compare ERM, empirical risk minimization; CORAL, the regularization-based approach we mentioned; and LISA, the augmentation-based approach we mentioned. Camelyon is a binary classification task with hospitals as domains. FMoW has around 32 classes and a bunch of domains, R1 has around 33 classes, if I remember correctly, and Amazon is also a binary classification task on text data. On these four data sets, LISA works better than both CORAL and ERM. On iWildCam, CORAL works better than the other methods. And on OGB, a molecule property prediction task, ERM actually works best compared with the other methods. So different algorithms have their own advantages and disadvantages, and you may choose which one to use based on the experimental results in your practice.
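To tie the pieces together, here is a rough sketch of the LISA training loop just described, choosing between the two strategies with a Bernoulli(p_select) draw each iteration. The pairing helpers are the hypothetical ones sketched earlier (with `intra_domain_pairs` the analogous same-domain, different-label version), and `model`, `optimizer`, `loader`, `num_classes`, and the Beta parameter are assumed placeholders.

```python
import torch
import torch.nn.functional as F

p_select = 0.5   # hyperparameter controlling which strategy is used each iteration (assumed value)
alpha = 2.0      # Beta parameter for the interpolation ratio (assumed)

for x, y, d in loader:                          # inputs, labels, domain labels from the source domains
    s = torch.bernoulli(torch.tensor(p_select)).item()
    if s == 0:
        j = intra_label_pairs(y, d)             # same label, different domain (lecture's s == 0 case)
    else:
        j = intra_domain_pairs(y, d)            # same domain, different label (analogous helper)

    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x + (1 - lam) * x[j]
    y_mix = lam * F.one_hot(y, num_classes).float() + (1 - lam) * F.one_hot(y[j], num_classes).float()

    logits = model(x_mix)
    loss = -(y_mix * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()  # soft-label cross entropy
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```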
You may also notice that LISA works well on image data and also on text data. So how do we apply Mixup to text data? It is a little bit weird to apply Mixup at the input level, on the raw text. So here I will briefly mention Manifold Mixup. In the original Mixup, we apply Mixup on the inputs, get the mixed image, feed it into the feature extractor to get the feature representations, and then feed those representations to the classifier to do the classification. In Manifold Mixup, instead of applying Mixup on the input, we apply Mixup at the feature level. For example, in the text domain, we typically do it on the top-level output: we change the feature extractor to BERT and apply Mixup on the output features (a small sketch of this feature-level interpolation appears at the end of this section). In the image domain, using this example, we have the dog features and the cat features, we apply Mixup there to get the mixed features, and then feed these mixed features to the classifier. Any questions for this one? For the final results, I would like to show you some invariance analysis: how do we measure the invariance? The direct way is that once we get the learned representations, we can use them to build a model, like a logistic regression or any other model, to predict the domains, and we use the accuracy of this domain prediction to measure whether we have found better invariance. Another metric we want to show measures the divergence of the predictions among the domains: we take the logits of the predictor, and for every class we look at the distribution of these outputs within each domain, do pairwise comparisons of this distribution between the different domains, and then sum the results over all classes. So we compare LISA with ERM and also with vanilla Mixup; IRM, IB-IRM, and REx are other regularization-based methods. We do see that LISA leads to greater domain invariance than prior methods that use explicit regularizers; for both metrics, a smaller value represents greater domain invariance. Any questions for this? Finally, I would like to compare regularization-based versus augmentation-based methods. For the regularization-based methods, to recap the pros and cons: the advantages are that they can generalize to all kinds of data and networks and have some theoretical guarantees, but they rely on the designed regularizer, and sometimes the regularizers are too harsh. The augmentation-based methods are easier to understand and simple to implement, and we do not need to worry about how to design a well-generalizable regularizer; but they are largely limited to classification problems, and some simple augmentation operators are specific to particular data types such as text or images. Any other questions? This was the plan for today: we introduced the domain generalization problem and gave its definition, and we introduced two kinds of algorithms, the first adding explicit regularizers to the loss function to align the representations, and the second using data augmentation to generate more data and learn domain invariance. I hope we reached the goals of understanding what domain generalization is and becoming familiar with the mainstream domain generalization approaches.
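As promised above, here is a small sketch of the Manifold Mixup idea: interpolate in feature space instead of input space, which is convenient when input-level mixing is awkward, for example with raw text fed through an encoder such as BERT. The encoder and classifier here are placeholders, and the Beta-sampled ratio is an assumption carried over from standard Mixup.

```python
import torch
import torch.nn.functional as F

def manifold_mixup_step(encoder, classifier, x, y, num_classes, alpha=2.0):
    """Mix hidden representations instead of raw inputs, then classify the mixed features."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    feats = encoder(x)                                    # e.g., pooled output of a text or image encoder
    feats_mix = lam * feats + (1 - lam) * feats[perm]     # interpolate at the feature level
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    logits = classifier(feats_mix)
    return -(y_mix * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```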
Any other questions for the entire lecture? What about domain generalization for generative problems? For generative problems, there is not too much work on domain generalization, but if you have multiple domains and you want to generate, for example, some images in one domain and some images in another domain, you can also apply some domain generalization techniques directly to the generative model, for instance to change the background of the generated images. But there are not too many advanced techniques for this yet. Any other questions? I'm not sure if I missed this, but how do the interpolation techniques generalize to text data? We can go back to this slide: this is for the image data. For text data, if you want to apply Mixup, we change the input to text, and the feature extractor would be, for example, BERT. Then you take the feature representations; you can choose this feature representation or other kinds of intermediate outputs and apply the interpolation there. That makes sense. Any other questions? OK, if there are no other questions, there are also some reminders: the project milestone is due on Wednesday, and next time the topic is lifelong learning. Lifelong learning combines the techniques that we learned before, so I hope you can come next time.
AI_LLM_Stanford_CS229
How_AI_Could_Empower_Any_Business_Andrew_Ng_TED.txt
When I think about the rise of AI, I'm reminded by the rise of literacy. A few hundred years ago, many people in society thought that maybe not everyone needed to be able to read and write. Back then, many people were tending fields or herding sheep, so maybe there was less need for written communication. And all that was needed was for the high priests and priestesses and monks to be able to read the Holy Book, and the rest of us could just go to the temple or church or the holy building and sit and listen to the high priest and priestesses read to us. Fortunately, it was since figured out that we can build a much richer society if lots of people can read and write. Today, AI is in the hands of the high priests and priestesses. These are the highly skilled AI engineers, many of whom work in the big tech companies. And most people have access only to the AI that they build for them. I think that we can build a much richer society if we can enable everyone to help to write the future. But why is AI largely concentrated in the big tech companies? Because many of these AI projects have been expensive to build. They may require dozens of highly skilled engineers, and they may cost millions or tens of millions of dollars to build an AI system. And the large tech companies, particularly the ones with hundreds of millions or even billions of users, have been better than anyone else at making these investments pay off because, for them, a one-size-fits-all AI system, such as one that improves web search or that recommends better products for online shopping, can be applied to [these] very large numbers of users to generate a massive amount of revenue. But this recipe for AI does not work once you go outside the tech and internet sectors to other places where, for the most part, there are hardly any projects that apply to 100 million people or that generate comparable economics. Let me illustrate an example. Many weekends, I drive a few minutes from my house to a local pizza store to buy a slice of Hawaiian pizza from the gentleman that owns this pizza store. And his pizza is great, but he always has a lot of cold pizzas sitting around, and every weekend some different flavor of pizza is out of stock. But when I watch him operate his store, I get excited, because by selling pizza, he is generating data. And this is data that he can take advantage of if he had access to AI. AI systems are good at spotting patterns when given access to the right data, and perhaps an AI system could spot if Mediterranean pizzas sell really well on a Friday night, maybe it could suggest to him to make more of it on a Friday afternoon. Now you might say to me, "Hey, Andrew, this is a small pizza store. What's the big deal?" And I say, to the gentleman that owns this pizza store, something that could help him improve his revenues by a few thousand dollars a year, that will be a huge deal to him. I know that there is a lot of hype about AI's need for massive data sets, and having more data does help. But contrary to the hype, AI can often work just fine even on modest amounts of data, such as the data generated by a single pizza store. So the real problem is not that there isn’t enough data from the pizza store. The real problem is that the small pizza store could never serve enough customers to justify the cost of hiring an AI team. I know that in the United States there are about half a million independent restaurants. And collectively, these restaurants do serve tens of millions of customers. 
But every restaurant is different with a different menu, different customers, different ways of recording sales that no one-size-fits-all AI would work for all of them. What would it be like if we could enable small businesses and especially local businesses to use AI? Let's take a look at what it might look like at a company that makes and sells T-shirts. I would love if an accountant working for the T-shirt company can use AI for demand forecasting. Say, figure out what funny memes to prints on T-shirts that would drive sales, by looking at what's trending on social media. Or for product placement, why can’t a front-of-store manager take pictures of what the store looks like and show it to an AI and have an AI recommend where to place products to improve sales? Supply chain. Can an AI recommend to a buyer whether or not they should pay 20 dollars per yard for a piece of fabric now, or if they should keep looking because they might be able to find it cheaper elsewhere? Or quality control. A quality inspector should be able to use AI to automatically scan pictures of the fabric they use to make T-shirts to check if there are any tears or discolorations in the cloth. Today, large tech companies routinely use AI to solve problems like these and to great effect. But a typical T-shirt company or a typical auto mechanic or retailer or school or local farm will be using AI for exactly zero of these applications today. Every T-shirt maker is sufficiently different from every other T-shirt maker that there is no one-size-fits-all AI that will work for all of them. And in fact, once you go outside the internet and tech sectors in other industries, even large companies such as the pharmaceutical companies, the car makers, the hospitals, also struggle with this. This is the long-tail problem of AI. If you were to take all current and potential AI projects and sort them in decreasing order of value and plot them, you get a graph that looks like this. Maybe the single most valuable AI system is something that decides what ads to show people on the internet. Maybe the second most valuable is a web search engine, maybe the third most valuable is an online shopping product recommendation system. But when you go to the right of this curve, you then get projects like T-shirt product placement or T-shirt demand forecasting or pizzeria demand forecasting. And each of these is a unique project that needs to be custom-built. Even T-shirt demand forecasting, if it depends on trending memes on social media, is a very different project than pizzeria demand forecasting, if that depends on the pizzeria sales data. So today there are millions of projects sitting on the tail of this distribution that no one is working on, but whose aggregate value is massive. So how can we enable small businesses and individuals to build AI systems that matter to them? For most of the last few decades, if you wanted to build an AI system, this is what you have to do. You have to write pages and pages of code. And while I would love for everyone to learn to code, and in fact, online education and also offline education are helping more people than ever learn to code, unfortunately, not everyone has the time to do this. But there is an emerging new way to build AI systems that will let more people participate. 
Just as pen and paper, which are a vastly superior technology to stone tablet and chisel, were instrumental to widespread literacy, there are emerging new AI development platforms that shift the focus from asking you to write lots of code to asking you to focus on providing data. And this turns out to be much easier for a lot of people to do. Today, there are multiple companies working on platforms like these. Let me illustrate a few of the concepts using one that my team has been building. Take the example of an inspector wanting AI to help detect defects in fabric. An inspector can take pictures of the fabric and upload it to a platform like this, and they can go in to show the AI what tears in the fabric look like by drawing rectangles. And they can also go in to show the AI what discoloration on the fabric looks like by drawing rectangles. So these pictures, together with the green and pink rectangles that the inspector's drawn, are data created by the inspector to explain to AI how to find tears and discoloration. After the AI examines this data, we may find that it has seen enough pictures of tears, but not yet enough pictures of discolorations. This is akin to if a junior inspector had learned to reliably spot tears, but still needs to further hone their judgment about discolorations. So the inspector can go back and take more pictures of discolorations to show to the AI, to help it deepen this understanding. By adjusting the data you give to the AI, you can help the AI get smarter. So an inspector using an accessible platform like this can, in a few hours to a few days, and with purchasing a suitable camera set up, be able to build a custom AI system to detect defects, tears and discolorations in all the fabric being used to make T-shirts throughout the factory. And once again, you may say, "Hey, Andrew, this is one factory. Why is this a big deal?" And I say to you, this is a big deal to that inspector whose life this makes easier and equally, this type of technology can empower a baker to use AI to check for the quality of the cakes they're making, or an organic farmer to check the quality of the vegetables, or a furniture maker to check the quality of the wood they're using. Platforms like these will probably still need a few more years before they're easy enough to use for every pizzeria owner. But many of these platforms are coming along, and some of them are getting to be quite useful to someone that is tech savvy today, with just a bit of training. But what this means is that, rather than relying on the high priests and priestesses to write AI systems for everyone else, we can start to empower every accountant, every store manager, every buyer and every quality inspector to build their own AI systems. I hope that the pizzeria owner and many other small business owners like him will also take advantage of this technology because AI is creating tremendous wealth and will continue to create tremendous wealth. And it's only by democratizing access to AI that we can ensure that this wealth is spread far and wide across society. Hundreds of years ago. I think hardly anyone understood the impact that widespread literacy will have. Today, I think hardly anyone understands the impact that democratizing access to AI will have. Building AI systems has been out of reach for most people, but that does not have to be the case. In the coming era for AI, we’ll empower everyone to build AI systems for themselves, and I think that will be incredibly exciting future. Thank you very much. 
(Applause)
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_Black_Box_Meta_Learning_l_2022_I_Lecture_4.txt
So the plan for today: we're going to be talking about meta-learning. And first I'm going to recap a little bit of what we talked about on Monday with regard to the problem formulation and the general recipe of meta-learning algorithms, and then we're going to actually get into approaches for solving few-shot learning problems. And so this will cover what you'll be implementing in Homework 1, although actually, I think I saw a couple of people already turned in Homework 1, so maybe you've already implemented it. And by the end of the lecture, you'll be able to implement few-shot learning algorithms and allow neural networks to do the kind of few-shot learning problem that you did on the very first day of class. Cool. So let's get into a little bit of the recap. So on Monday, we talked about the meta-learning problem, where we were given some data from some set of tasks and our goal was to solve a new task with less data, at a higher level of accuracy, or more stably. And in the context of this course, we'll mostly be considering trying to learn tasks more quickly, that is with less data than if we were training from scratch. One of the key assumptions that I mentioned is that the tasks should be drawn from the same task distribution. This is an assumption that's needed to show that these algorithms might generalize and be able to learn new tasks. Although, of course, it's sometimes a little bit difficult to actually realize this assumption in practice. This is something that's basically analogous to the standard IID assumption that you see in machine learning, where you assume that your training data points and your test data points are drawn from the same distribution. And so you might have some distribution over tasks. You're given some training tasks from that distribution and a test task from that distribution. Like before, we want the tasks to share some structure. And the tasks could correspond to lots of different things. So in Homework 1, the tasks will correspond to recognizing handwritten characters from different languages, shown here. We'll also look at an example where different tasks correspond to giving feedback to students on different exam problems or different assignments. We'll look at one example where different tasks correspond to different regions of the world, and you might want to classify species in those different regions of the world. And the tasks could also correspond to something like robots performing different tasks. Now a natural question to ask from here is, we need a set of tasks in order to learn a new task-- how many tasks do you need in order to be able to quickly learn a new task? There isn't really any cut-and-dry answer here, but in general, the more the better. In machine learning, the more data you have, the better off you'll be. In meta-learning, the more tasks you have, the better off you'll be. So essentially, in meta-learning, we're treating data points as-- we're essentially treating tasks as data points. Yeah. [INAUDIBLE] Yeah. So I guess we can draw an analogy to machine learning, where maybe we have a lot of images taken from the internet. But then we ultimately will want to pass new images into our machine learning model. And so those images don't match any of the images that are in the training data set. Here, maybe as one example, maybe different tasks correspond to different users. So you want to build a spam classifier for different users and then the test task will be a new person.
And it will be a person that wasn't in the training data set. And it will be trying to classify spam versus real email for that particular new user. Does that make sense? So it'll be something that is, it should be generally similar to the kinds of things that you've seen before because we want to-- that's going to allow us to, that's going to let us generalize to that new task with a small amount of data. But it is going to be-- it can be something that's new. That's fundamentally not seen in the training data set. Yeah. Can meta learning be a part of two tasks which are seemingly unrelated in case but maybe like potentially exploit some sort of chain structure to use one task to help you do the other? Yeah. So the question was can meta learning be used for two tasks that have kind of seemingly very little in common but perhaps maybe there is something that it can discover that they have in common? So first, if you only train on two tasks, then generalizing to a third task will be very difficult. But I'm not sure that's exactly what you're asking. If some of the tasks in your distribution are very different from one another and you have more than two tasks, then meta learning will try to find the common structure between them in a way that allows it to very quickly solve those tasks. And if there is shared structure, then it's explicitly going to optimize for trying to find the things that are in common between those tasks. Yeah. So in the case if there really isn't much structure could it be detrimental to actually use meta learning? So if there isn't that much shared structure, would it be detrimental to use meta learning? So I don't know if it necessarily be detrimental in the sense that it may have a hard time finding that shared structure. In the worst, well, I guess we'll get into this when we get into the algorithms, but there are some algorithms where they will essentially revert to learning from scratch. And so you won't do worse than learning from scratch whereas there are some algorithms, including the ones that we'll talk about today where they could do worse than learning from scratch. Yeah. Are the data sets for each of the tasks in the tree set about as large as you would need and like a normal [? dictionary ?] class or can you plus data per task? Yeah. So one of the things that's really cool about this is actually you can get away with much less data per task than if you were to train completely from scratch. And so we'll see this actually in one of the data sets, including actually the data set that you'll use in your homework. Cool so one of the things that we talked about on Monday is that there's a couple of different ways that you can view meta learning algorithms, one from a mechanistic standpoint and one from a more probabilistic standpoint. And from the mechanistic standpoint, you can think of meta learning as basically trying to train a neural network to read as input data and give you predictions for new data points. And so you can think of it as implementing a learning procedure whereas the more probabilistic view is thinking about how we may try to learn a prior-- extract prior knowledge from your set of training tasks and impose that prior test time when trying to learn so that you can learn with less data. In the next few lectures, we're going to really be focusing on the mechanistic view which will make it easier to think about how to actually implement these algorithms in practice, but we'll come back to the probabilistic view in a few weeks. 
And then lastly, the last thing that we talked about on Monday was looking at an example, a few-shot classification example, where ultimately we want it to be able to classify new examples shown on the right here, given a very small training data set. So the train data set only has five data points. And we want to be able to use that training data set and previous experience from other image classes in order to effectively solve and make predictions for new test examples. And so the way that we can do this is construct tasks that look a lot like this test task. So construct training sets and test sets from other image classes, from other experience or data that we might have. Run kind of meta-training on these tasks such that when we see a new task at meta-test time, we can learn from this type of data set and make predictions. Now-- and this is kind of an image classification example, we can consider this for other machine learning problems as well. Now what I'd like to talk about next is first getting a little bit into some of the terminology that's used in meta-learning in the context of this example. And then we'll dive a little bit more into the setup. So we have the setup where we have a number of different training tasks. For each of those tasks we have a training data set and a test data set. These are sometimes also called the support set and the query set. The support set you can think of as kind of providing support for the learning process, and then the query data set will be used to query the predictions of the model after learning on the support set. Using this latter terminology can be somewhat helpful to differentiate between meta-training and meta-testing. So we've talked about this notion of few-shot learning. And so you can think of this as k-shot learning, where you have k examples per class. Or if you're in a regression scenario, you have k examples total. So k-shot learning means you essentially have k examples to learn from. And then we'll consider-- we'll use n typically to denote the number of classes that we're trying to classify between, the number of classes that we're choosing between. So with that in mind I have a question for you. So maybe I want to ask you what is k and n for this example. And so first in terms of k, maybe you can kind of raise your fingers and, kind of, say what you think. Actually, let's actually start with n. So maybe raise your hand with fingers to denote what you think n is for this example. Cool. So I'm seeing a mix of fives and twos. But mostly fives. Cool. So in this example, we were doing n-way classification. I guess-- so yeah, the mix of fives and twos is-- I can see where you're coming from. So this is a five-way example because we have five different image classes shown in the left training data set. I'm only showing two test examples here just in terms of space on the slides. But there are kind of five classes underlying these-- underlying these training data sets. And then can you put up a show of numbers for what k is? Cool. I'm seeing ones, which is good. So yeah, we're giving it basically one example of five different classes. So this is a one-shot learning problem where you have one example per class. And we're trying to solve a five-way classification problem. Cool. So that's a bit of the terminology.
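To make the N-way, K-shot and support/query terminology concrete, here is a minimal sketch of how one might sample a single episode from a pool of labeled classes. This is not the course's starter code; the class_to_images dictionary and the function name are hypothetical stand-ins for whatever data structure you actually have.

```python
# A minimal sketch (not the course's starter code) of sampling one N-way, K-shot episode.
# `class_to_images` is a hypothetical dict mapping each class name to a list of images.
import random

def sample_episode(class_to_images, n_way=5, k_shot=1, query_per_class=1):
    classes = random.sample(list(class_to_images.keys()), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):              # labels 0..n_way-1 are assigned per episode
        examples = random.sample(class_to_images[cls], k_shot + query_per_class)
        support += [(x, label) for x in examples[:k_shot]]     # "support" set, i.e. D train
        query += [(x, label) for x in examples[k_shot:]]       # "query" set, i.e. D test
    random.shuffle(support)
    random.shuffle(query)
    return support, query
```

For the example above, calling sample_episode with n_way=5 and k_shot=1 would give a support set with one image for each of five classes, plus a few held-out query images to evaluate on.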
Now in terms of the kind of general recipe or maybe another view on the learning problem, is if you think about supervised learning, our goal is to map from inputs to outputs and to learn a function that predicts the input-- predicts the output given the input. And we do this with a set of input/output pairs, x, y pairs. Now in meta learning, we can also actually frame it as somewhat of a supervised learning problem where our inputs are going to be a little bit different. So our inputs are actually going to include the training data set for a task as well as in that training data set, we'll have k examples or k examples per class. And in addition to a training data set for that task, we're also going to have a new test input that we want to be able to make predictions for. So the training data set is essentially what was in the green box on the previous slide and the test examples or the test example is one of the things that was in the blue box on the right. And then we want to be able to predict the label for this test example. And so you can think of this green arrow here as the process of learning from that data set to make a prediction for a new example. And from this standpoint, we could actually, again, view this as a supervised learning problem where we want to train a neural network to take as input a data set, learn from that data set, and make a prediction on a new test example. Now how do we go about learning this kind of function, well, instead of having a data set of input/output pairs, we're actually going to have a data set of data sets. And each of these data sets has to have at least-- have some more than k examples so that you can sample k examples to be used for the training data set and at least one example to be used for the test set. Yeah. I have [INAUDIBLE] an audition that we're using. So we're [INAUDIBLE] classification, generally in the testing since we have an example of the classes that we're going to test it on. So you have one let's say environment classifier between cats and dogs and I have one example of cat and one example of dog. And that is what will be testing on. But what you are using, that seems like zero shot learning, where are not using-- you have five classes in this set of five different classes and actually what you are actually classifying for are two different classes. Yeah. So going back to the slide here or this visualization here, so the number of examples in this red box here doesn't matter too much. You need there to be at least one example and you need to be able to-- and in general your test is going to be larger than two. You'll probably have at least five examples so that you're actually evaluating its ability to make predictions for all five classes in your training data set. Three to five support classes there for the testing one, and one for classification. [INAUDIBLE] Like the model itself and classes are there when you classify them, but you don't want to classify, [INAUDIBLE] why do we need other classes and functions. Oh, so I guess what I'm visualizing here is a five shot-- or sorry a five way classification problem. And so you have one example for five different classes. And so it's a one shot learning problem. And then technically if all you cared about was to classify these two images, then it's actually a two way classification problem. I would have shown all five examples if I had a little bit more space on the slides. But yeah, I'm trying to frame this as a five way problem. 
So you can sort of just think of this as having-- like you could imagine if there's like five examples here. Yeah. Cool, and here I'm actually only showing one test example in terms of what this function needs to produce. But then when you actually train this function, you'll actually sample more, like different test examples, to evaluate its ability to generalize given a training example or given a training set. OK, and so we can learn this function with a data set of data sets. So a set of data sets where each data set is for a particular task. And again, the data set for a given task i should have more than k examples so that you can use k of them in the training data set and some other examples in the test set to measure generalization. Yeah. There are two different-- there are two different subcategories that I [INAUDIBLE] So i corresponds to the task, j is the-- Number of examples. Yeah. So here, i is indexing the tasks and j is indexing the examples that we have within a task data set. Cool. And so one thing that's nice about this kind of view is it means that now, in order to implement a meta-learning algorithm, all we have to do is design this function f that can read as input a data set and then optimize that function f. And so essentially in this lecture and the next two lectures, we're going to be talking about different ways that you can design and optimize this function f. And so in particular, kind of the general recipe that you can think of for designing a meta-learning algorithm is choosing a form of this function. In this lecture, we'll see a form of this function where it's just represented by a neural network. But we'll actually see other forms of this function in the later lectures where f can correspond to running gradient descent or f could correspond to an algorithm like nearest neighbors. And then once we have that function, we're going to optimize the free parameters of that function, which we'll refer to as the meta-parameters, using the meta-training data. Yeah. [INAUDIBLE] X test is the test data for the task that is specified by D train. So D train and x test are both from the same task. And so if you're at meta-training time, that's going to be one of the meta-training tasks. If you're at meta-test time, that's going to be one of the new tasks that you haven't seen before. Yeah. In previous slides, what was the k? So k is the number of examples-- in k-shot learning, it's the number of examples that you're going to be learning from. [INAUDIBLE] or in our meta supervised learning, each iteration we sample a tuple of k from the training? Yeah, actually. So there's a question of how do you choose k? Typically, if you have a sense for how many-- if you have a sense that you want to be able to do one-shot learning, then you'll just set k to be-- essentially to be one per class. And then at test time, you'll do one-shot learning. You could also have k be variable. So you could actually sample different k for different tasks and have some tasks be one-shot learning tasks and some tasks be five-shot learning tasks, such that you're preparing your network to be able to do one-shot learning and five-shot learning at meta-test time. Yeah. Shouldn't the index be n times k instead of k? [INAUDIBLE] Yeah, so the question was shouldn't the index here be n times k rather than k? So if it is a classification problem, then it should be n times k.
If it's a regression problem, then the standard notation is to just use k. And so I used k here but it kind of depends if it's a regression or classification problem. And the reason-- the convention for that is one short learning, you typically think is learning from one example. And a regression-- in regression problems, you can actually-- learning from one example in regression, probably you won't get very far. But that one shot learning is learning from one example there whereas in classification, if you have only one example, it's fundamentally impossible to solve like a five way problem with just one example. And so the kind of convention is to use-- to have a b per class there. And so if this was classification, the k would be n times k. Yeah. [INAUDIBLE] If you have data set for these tasks which are between the different [INAUDIBLE] So we'll get to the methods in a second. But it basically corresponds to how many-- like what is the size of the data set you're passing into your function and-- [INAUDIBLE] Well. So basically, you can train this function to take as input, variable size data sets. And that's useful because at test time. If you're not sure what size your data set is going to be, then your function should be able to handle different sizes. Yeah. During the last class when we were talking about the probabilistic view of meta learning, we kind of talked about data between the parameters representing the shared structure and then also i which [? rates ?] the task-specific parameters. So in this case, is f representing both theta and phi or is it just representing the theta? Yeah so the parameters of f here are theta. And that is kind of representing the theta that we saw in the previous lecture. Here, you don't see phi appear here and that's because phi is-- we'll see in some algorithms phi will come out explicitly, whereas in other algorithms we won't be explicitly representing phi. And so it's useful to set up this notation in a way that can either have it or-- that doesn't actually explicitly represent it because some of the algorithms we'll look at next week don't actually explicitly represent what phi is. If it's not explicitly represented, does that just mean that it's a part of theta in some form or a-- Sort of. Basically, next week we'll cover non-parametric meta learning methods and so those don't actually have explicit parameters and so phi won't ever appear. But we'll get into that on Wednesday next week. Yeah. Just a quick clarification, [INAUDIBLE] should be x here so that the task [INAUDIBLE] So x test is a new example, a new input for that task. So it's given a data set-- given a task data set here, it'll basically just correspond to one of the xs in the task data set. Cool. So I think that we should start running into like approaches because I think that will also clarify the setup as well. And so we're going to start with a running example, which is going to use the Omniglot data set. And you'll be actually using this data set in your homework. It's actually a pretty cool data set. It has a data set of 50 different alphabets written alphabet. And it has a total of 1623 characters across all of those alphabets. And they're all written by hand by people. And here are some examples of some of the alphabets represented in that data set. And per character there's actually only 20 examples. 
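If you want to poke at the Omniglot data yourself, one convenient way to download it is through torchvision; this is a sketch assuming torchvision is installed, with background=True giving the split usually used for meta-training and background=False the held-out alphabets. Grouping the images by character index recovers the roughly 20 examples per character mentioned above.

```python
# Sketch of grabbing Omniglot via torchvision (assuming torchvision is available).
from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize(28), transforms.ToTensor()])
meta_train = datasets.Omniglot(root="./data", background=True, download=True, transform=transform)
meta_test = datasets.Omniglot(root="./data", background=False, download=True, transform=transform)

image, character_idx = meta_train[0]   # each item is an (image tensor, character index) pair
print(image.shape, character_idx)
```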
So this is getting into one of the questions that was mentioned before, which is that typically in a machine learning data set you wouldn't have only 20 examples per class in your data set. For example, the MNIST data set, I think it has on the order of thousands of examples, maybe tens of thousands of examples, maybe like 6,000 examples per class I think. So you can think of this as the transpose of a data set like MNIST, where instead of having many examples in a small number of classes, you actually have a large number of classes and a very small number of examples per class. And I also think that the statistics of this kind of data set are a little bit more reflective of the real world, because in the real world, we don't see like 1,000 forks and 1,000 notebooks and 1,000 pens for more than a small number of things; we actually have a really massive number of objects and we only kind of run into those objects a small number of times in general. So we're going to be using this data set. And in this data set, to start off, we can think of different tasks as different alphabets. And let's think about how to actually set up the meta-learning problem here. So we're going to be looking at a three-way classification problem just because then I don't have to draw as many images. And we're going to be looking at a one-shot learning problem. And it's worth mentioning that there are two different versions of black-box meta-learning algorithms, and so we'll cover both of them. And first we'll talk about the meta-training process. So the first thing to do during training is to sample a task. And so this will correspond to sampling an alphabet and specifically sampling three characters from that alphabet. And the reason why it's three is because we're just going to be considering a three-way classification problem. And then once we've sampled a task and sampled the characters that we're going to be classifying between, we're going to sample two images per character. And we're then going to break these examples into a train set and a test set. And so for example, maybe the alphabet that we sampled is the alphabet that we use in English and the three characters that we sampled were a, b, and c. And we're going to sample two different images per character so that we can use one for a training data set and one for a test data set for that task. So these are maybe, yeah, two. This will be our D train for task i, maybe we sampled task i. And this is going to be D test for task i. Now one thing that we're missing here is we're also missing labels. We need labels in our training set and test set. And so what we're going to do is we're also going to assign labels to each one of these. So we can say labels zero, one, two. It's important to have consistent labels across our characters. OK. And now we get to the fun part. So what we're going to do is we are going to pass our training data set into a neural network that will implement the learning process. And so in terms of passing this into a neural network, these data sets may have a number of different examples. And so things like a recurrent neural network are typically a good choice, or a transformer, if you want to be a little more into the times. And so you'll pass this into-- your training data points into a recurrent neural network. And at this point, we're going to get to the two different versions. So one version of black-box meta-learning will take as input these training data points and output a set of parameters.
And we'll then use those set of parameters to classify new examples from our test set. And so in particular, we will-- I want to work on my space management. So we'll take these set of parameters. We'll then pass as input one of our test examples into a neural network with those parameters to get a prediction for that test example. So this will be x test i, this will be y hat test i. And then we'll compare this to the corresponding label for that example. And then what we can do is we can, once we compare our prediction for this example to the label for that example, we can back propagate through this entire neural network into the parameters to train this neural network so that it can learn how to learn from this data set. Cool. And then once we then kind of back propagate into the neural network, update the parameters of this network here, which will be the parameters theta, well then, this is step three, then we'll go back to step one sample another task and iterate the process. So this is the meta training process. We can talk a little bit about the meta test process before we move on to-- before we move on to the second version. But, yeah. [INAUDIBLE] class named a. [INAUDIBLE] Does it go through the neural network or does it go through the neural network and the classification? [INAUDIBLE] So it's basically going to update all of the parameters of this network and one actually-- one really, really important thing to note is that it's not going to update phi. So phi, you can think of more as an activation of this neural network. It is only going to update the meta parameters of this network. So we're going to differentiate through phi into the parameters of this thing right here. I really should have brought some more colors today, but you can think of the parameters-- the parameters of this will include both the encoder as well as the kind of recurrent parameters as well. Yeah. [INAUDIBLE] Yes the question is why does it make sense to use a recurrent neural network? You can use other networks as well. And we'll talk about some other choices. You could use something like a deep set architecture or a transformer. One thing that is convenient about sequence models in general is that they can handle variable length sequences. And so you can pass in variable length data sets for example. And it should be able to handle that. In general, architectures for handling sets and sequences are pretty good choices. Yeah. I'm not sure if [INAUDIBLE] Yeah. The question is, if you do meta training with three classes and then evaluate this ability to learn with four classes, then how well does that work? In general, probably not very well. It depends on the architecture choice that you choose. But I mean, you're basically training a recurrent neural network. And if you train a recurring neural network on a sequence like the three and then test it on a sequence like the four, you're basically testing it on something that it hasn't been trained to do. Yeah. Do you need [? task data ?] for your training tasks, or can you do this in an unsupervied way where you're just hoping to learn how to cluster them? Yeah. So the question is, do you need test data? And can you do this in an unsupervised way? There's lots of different variations that you could consider here. You could consider something where the train data set is maybe unlabeled and it's trying to-- but the test set is labeled, for example, and it's trying to cluster and so forth. 
We'll get into some more advanced kind of variations of this in some of the coming lectures. But in the basic setup, yeah, it's good to have-- labels are helpful. The other thing that I'll say is it is important for these to be held-out examples and not just the same examples as what you pass in, because if you pass in exactly the same example-- this example here, instead of learning to learn, it will learn to memorize. And it will learn to just exactly only be able to make predictions for the examples that you passed in. One other note that I should mention here is that here, note that we're passing in both x and y. We're passing in both the input and the label into the recurrent neural network. Here, it's important not to pass in the label, because if you pass in the label, then it will just not look at the image and simply predict the label. And this is something that you'll run into in your homework. It's a kind of common mistake to accidentally pass in the label into the network here. Yeah. So in the prediction, [? could each sample ?] be independent irrespective of the orders? [INAUDIBLE] because if I were like CNN for each image and you learn predicting independently of the sequence. Is that the two [INAUDIBLE] Yeah, so one downside of using an RNN is that the ordering of these data points matters. And in practice, this is a set. And so the order of examples doesn't matter. We'll get to this in a few slides. But architectures like deep sets that are permutation invariant can actually be a better choice than an RNN for that reason, because they'll encode the fact that the ordering of the data points doesn't matter. What exactly are we predicting, the parameters, which is the [? point ?] to make? If so how many parameters do we predict or what are we predicting? Yeah, so here it's predicting parameters that are being used to make predictions. And so this is, in general, a pretty high dimensional vector. And so it may correspond to the parameters of a neural network. I guess I'll get into version two. So that was like-- this was version one. Version two is something that is a little bit-- little bit less unwieldy. So predicting like millions of parameters with the neural network can be rather expensive. And so what you can do instead is let me write these out again. You don't have the label for this. So what you can do instead is you can simply use something like a recurrent neural network and actually just continue running the recurrent neural network here. And in this case, the phi i is more implicit. And here you'll again be asking it to predict y i test for this example right here, which is x i test. And this is something where you don't actually have to have your RNN output a huge parameter vector. You can just have it output, say-- this is like the hidden state of your RNN after like three examples. And you can think of phi i as like the set of-- the set of these hidden states as well as any parameters that are in this network right here. So we can maybe refer to this as like theta G in the sense that if this is like a function G, those are the parameters of that function. And so this is what phi i would be in that case. In this case, this is a lot nicer because-- and this is going to be used more in practice than this version because then you don't have to output like millions of parameters of a network. You can actually-- Yeah, you can actually only represent a much smaller context here. Yeah.
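Here is a rough sketch of version one in a PyTorch style (assuming PyTorch; the class name, dimensions, and architecture choices are made up for illustration and are not the homework model): an LSTM reads the support pairs and its final hidden state is decoded into the weights phi of a small linear classifier, which is then applied to the query images. Note that phi comes out of a forward pass; only theta, meaning the encoder, LSTM, and decoder weights, is ever updated by backpropagation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNetMetaLearner(nn.Module):
    """Version 1 sketch: read D train with an LSTM, output phi = (W, b) of a linear classifier."""
    def __init__(self, img_dim=784, n_way=3, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.n_way, self.embed_dim = n_way, embed_dim
        self.encoder = nn.Linear(img_dim, embed_dim)                    # shared image encoder (part of theta)
        self.reader = nn.LSTM(embed_dim + n_way, hidden_dim, batch_first=True)
        self.to_phi = nn.Linear(hidden_dim, n_way * embed_dim + n_way)  # decodes the hidden state into phi

    def forward(self, support_x, support_y, query_x):
        # support_x: (B, N*K, img_dim); support_y: (B, N*K) int64 labels; query_x: (B, Q, img_dim)
        y_onehot = F.one_hot(support_y, self.n_way).float()
        tokens = torch.cat([self.encoder(support_x), y_onehot], dim=-1)
        _, (h, _) = self.reader(tokens)                                 # final hidden state summarizes D train
        phi = self.to_phi(h[-1])                                        # phi is an activation, never a trained weight
        W = phi[:, : self.n_way * self.embed_dim].reshape(-1, self.n_way, self.embed_dim)
        b = phi[:, self.n_way * self.embed_dim :]
        query_emb = self.encoder(query_x)                               # (B, Q, embed_dim)
        logits = torch.einsum("bqe,bne->bqn", query_emb, W) + b.unsqueeze(1)
        return logits                                                   # the loss on these backprops into theta only
```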
So are the task-specific parameters an output of the shared model data or are they the result of adapting data [INAUDIBLE] Yeah so in this lecture, the task-specific parameters are the output of this neural network model here or here. I guess we can call this h i in the sense that this is kind of a context vector. In the lecture on Monday, we'll see examples where the task-specific parameters are the result of something else like running an optimization like gradient descent. Yeah. Why is it not important to update phi i in model one? Why is it important to-- [INAUDIBLE] Yeah, so the question is, why is it important to not update phi i? So I think that maybe-- so in this example, it may be a little bit more clear in the sense that when we run gradient descent on to kind of test generalization, in this case, we're going to back up into all of the parameters of this RNN, but we're not going to hi because hi is an activation and when you have activations of a neural network, you don't update the activations. You only update the weights. And so analogously in this example, you can think of phi i as the activations of your neural network. And so you're not going to be updating that, you're only going to be updating the weights of this neural network. Yeah. Could you could say one more time what the difference is between one and two? Yeah. So the difference between version one and version two is-- they are very similar. The main difference is that in this case, this neural network right here, the only parameters making predictions from this example to y test are going to be phi i and those are going to be outputted by this. And so those are going to be the activations of this RNN whereas in this case some of the parameters are actually going to be-- some of the parameters here are actually going to be optimized as part of the meta optimization. And so for example, the encoder of this RNN may actually be shared across the timestamps. And so you basically have more parameter sharing in this version. Yeah. [INAUDIBLE] So you're asking-- [INAUDIBLE] Yeah. So in this case theta G will be optimized with back prop as well. But hi will not be-- [INAUDIBLE] training leg sequence. It'll be optimized with respect to the loss here. But are you using the test scribbles to optimize their test settlements? You can basically think of all of this as the learning process and we're optimizing all of the parameters of this learning process with respect to how well it generalizes on a new test example. The one thing that you may be noticing here is that it could be that the neural network could just ignore all of this and just learn a classifier for it. And that's actually a problem that can come up in meta learning. And so one thing that I'll mention here is that the-- here we kind of assigned labels somewhat arbitrarily like zero, one, two, like in the order of the alphabet. But really that was somewhat arbitrary and when you're given a new alphabet you don't know what is the first letter of the alphabet versus the second and versus the third. And so what we'll do in practice is when we assign these labels, we'll assign them randomly. And so when we sample a task, in this case I assign this labeling. But when you sample this task the next time, you might sample a different labeling where you have, for example one, two, zero one, two, zero. And as a result when you actually randomize that, that prevents the neural network from just memorizing a mapping from the input to the label and ignoring the context. 
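And here is a corresponding sketch of version two, in the same hypothetical PyTorch style, where the recurrent network just keeps running on the query input and the task information lives in its hidden state, the implicit phi or h i. It also includes the per-episode label shuffling just described and a bare-bones meta-training outer loop; the random-tensor episode sampler is only a placeholder for a real Omniglot sampler.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentMetaLearner(nn.Module):
    """Version 2 sketch: keep running the RNN on the query; the hidden state plays the role of phi."""
    def __init__(self, img_dim=784, n_way=3, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.n_way = n_way
        self.encoder = nn.Linear(img_dim, embed_dim)
        self.rnn = nn.LSTM(embed_dim + n_way, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_way)        # roughly the "theta G" from the board

    def forward(self, support_x, support_y, query_x):
        y_onehot = F.one_hot(support_y, self.n_way).float()
        support_tokens = torch.cat([self.encoder(support_x), y_onehot], dim=-1)
        zero_labels = torch.zeros(query_x.shape[0], query_x.shape[1], self.n_way)
        query_tokens = torch.cat([self.encoder(query_x), zero_labels], dim=-1)   # never feed the query's label
        out, _ = self.rnn(torch.cat([support_tokens, query_tokens], dim=1))
        return self.head(out[:, support_tokens.shape[1]:])                       # logits at the query positions

def shuffle_labels(support_y, query_y, n_way):
    """Randomly permute the class-to-label assignment for this episode (prevents memorization)."""
    perm = torch.randperm(n_way)
    return perm[support_y], perm[query_y]

def sample_task_batch(batch=4, n_way=3, k=1, q=2, img_dim=784):
    # Hypothetical stand-in: random tensors in place of real Omniglot episodes.
    support_x = torch.randn(batch, n_way * k, img_dim)
    support_y = torch.arange(n_way).repeat_interleave(k).repeat(batch, 1)
    query_x = torch.randn(batch, q, img_dim)
    query_y = torch.randint(0, n_way, (batch, q))
    return support_x, support_y, query_x, query_y

# Meta-training outer loop (sketch): sample tasks, compute the query loss, backprop into theta.
model = RecurrentMetaLearner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    support_x, support_y, query_x, query_y = sample_task_batch()
    support_y, query_y = shuffle_labels(support_y, query_y, n_way=3)
    logits = model(support_x, support_y, query_x)
    loss = F.cross_entropy(logits.reshape(-1, 3), query_y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```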
As a follow up are you also allowed to optimize phi i on the [? latest assessment ?] [INAUDIBLE]. Are you allowed to do that? [INAUDIBLE] Similar to hi here, phi i is an activation of the neural network rather than some of the weights. [INAUDIBLE] So it's the output of this it is the parameters of a neural network. But it's the output of our recurrent neural network. And so we're setting it to be the output of this neural network. And so if you updated it one round, then the next round it would just be wiped by a forward pass of the [? Anand. ?] Yeah. Just answering this question. Could it be because in my [? other ?] training, the regime of training is very different from the normal training that is in machine learning. So meta training and even in training, we have two subsets-- for each class there are subsets of example [INAUDIBLE] like training and testing. So although it is said to be testing, [INAUDIBLE] testing data. It's not actually testing we use it to train [? on them. ?] Yeah. That's a great point. So to kind of reiterate, when I'm talking about this, this is like our task test set. for task i. And we are going to be training the parameters of our RNN with these test examples. And that may sound like a really bad thing. And it, sort of, should sound like a really bad thing because we're going to be training on these test sets. But meta learning actually, the real test is new tasks. So this is the test that of the training task or of one of the training tasks. And then we're going to be given new test tasks and those are going to be really the real test for these models. Yeah. So is there of catastrophic forgetting in this sort of set up when we're assembling tasks and then feeding the same parameters across tasks? How would we prevent it from happening? [INAUDIBLE] Yeah. So the question was, is there a risk of catastrophic forgetting in this case of past tasks that you've seen? So in practice when we sample tasks, we'll sample the sample them from a set of tasks. And as long as you keep on sampling from that task IID, then you should be able to remember those tasks. I should also mention here, I said that we're just going to sample one task. In practice you can sample a mini batch of tasks, so multiple tasks. And that will give you a lower variance gradient. And so similar to like in machine learning, if you are sampling IID from a training data set, you don't have to worry too much about forgetting. We also don't have to worry about that too much here. But if we did have a sequence-- like a non-stationary sequence of tasks, we might start forgetting some of the older tasks if they don't keep on reoccurring. Yeah. [INAUDIBLE] neural network rather than another kind of network. Because these [INAUDIBLE] cannot be related, right? And there's not a direct relationship when you try to translate something. So why are we using direct network instead of the [INAUDIBLE]? So you want to have some network-- the question is why are we using an RNN versus some other network? You do want this network to be able to take as input a data set. And data sets, do you want to be able to process some set of examples. And so things like RNNs are a good choice for modeling. Well, they're actually not-- in practice, they're not a great choice, but they're the simplest choice for modeling sequences. And things like transformers and deep set architectures and 1D convolutions could also be used as well. You wouldn't want to use a feedforward well. 
So one thing you could do is you could use a feedforward fully connected network and basically concatenate the embeddings-- concatenate all these images together and pass it through that way. And that might not be a great choice, compared to something that explicitly models these separate entities. Yeah? In practice, does version 1 won't work with a very high dimensional phi? So yeah, in practice, this version 1 work with very high dimensional phi. So there are some papers that have actually gotten it to work pretty well. In practice, people typically use some form of sharing across layers so that you're not outputting the entire parameter vector all at once. You might be outputting one layer at a time, maybe telling the network, which layer that it might be outputting. One paper or one thing that this is often referred to as a hyper network. And a hyper network is basically any neural network that's outputting the weights of another neural network. And it can be used for meta learning, but it can also be used for other things. And so there are some examples of it working well. But also, in general for future learning problems, this is the more practical solution. Cool. I have a couple of examples of other data sets on the slides if you're interested in exploring that for your project. But for now, I just want to-- oh, actually, I want to recap some of the stuff on the whiteboard. But also we've talked about meta training. We haven't explicitly gone through what meta-test time looks like. So let's also quickly do that first. So at meta-test time, you're actually just given a test task. And you're also given a training data set for that test task. And so, for example, maybe your data set is of the Greek alphabet. And so you have some examples like this. And then you're probably also given some test examples. So maybe you're given an example that looks like this. And you want to be able to classify this example, given your few training examples. And so does anyone want to say what we might do at meta-test time? Yeah. We put that training data set out of the images that we have the information. And we use our [INAUDIBLE]. And also we've been [? good ?] the testing image, [? great ?] image. And then we see [? what ?] [? could ?] be the highest this score so that we can classify. Yeah. So what we can do is we can pass our training data set into recurrent neural network. Also passes and put our test example and ask it what the corresponding label is. So we'll probably be given corresponding labels for this like, 0, 1, and 2. And we will pass these into our neural network. Doesn't have to be an RNN. And then also, pass this input this example. And then, hopefully, the neural network will output zero, insofar as that looks like an alpha. Also for creating certain [? recipes, ?] we helped them make multiple passes or did we [? do ?] [? you ?] [? know ?] [? when ?] [? the ?] [? form ?] [INAUDIBLE]?? This pass, I will probably believe will zero try to predict out of the [? thousands ?] [? and ?] try on the second XPS thing? So in Black-box meta-learning, what you'll do is you'll always pass on the whole training data set as input and then pass in the test example. One version, another method that we'll see next week is something that actually makes explicit comparisons between the test example and the training examples. Does that answer your question? I want to know if this is purely happening because of the training set that we're [? passing ?] in this? Sorry, can you repeat that? Sorry. 
Is there any tutoring [INAUDIBLE] or any fine tuning that's happening because of the [? post ?] [? op ?] [? in ?] [? the ?] gamma? Yeah. I understand now. So the question was, is there any parameter updates or any tuning that's happening when we do this? And the answer is no. So we're just doing a forward pass through this RNN. And so we're not actually-- you can think of this RNN as kind of implementing the learning process. And if you give it a big enough neural network, these neural networks can learn in that way. And so there isn't actually any parameter updating other than, perhaps, you can think of the hidden state as being updated as you pass it through the recurrent neural network. Yeah? Why don't we implement-- or why don't we run any algorithm which doesn't do the [? finding? ?] So you're asking why not have an algorithm that does do some amount of-- It'll just do something [INAUDIBLE].. I know we are already using alpha, beta, gamma to compare our results, but more than that, is there any algorithm? We'll see it on Monday, next week. Yeah? So here the RNN is predicting the parameters on one without actually knowing about it. What does the architecture-- is this some kind of model that is easier to run with the parameters that we're predicting? That would happen on the kind of model that we have, right? The parameters [? you're accompanying, ?] so. Generally, for version 1, the architecture of this function right here will be always the same. It will be fixed. Yeah. But is it some kinds of applications are [? easy for ?] phi i-- to learn phi i. Some kinds architectures are easier to learn, predict phi i for or something like that? Yeah. If you're asking are there some architectures for which it would be easier to predict phi i than others? Generally, the smaller the architecture, the easier. Yeah. And if you only have to predict some of the parameters of the architecture, which is similar to here. For example, you could have it only output-- each could correspond to the last layer of this network. And then you only have to predict some part of it, for example. Yeah? Actually, I have two questions. First question is, you're referring to this as black-box. What about this is black-box? Yeah. So I'm calling this black-box meta-learning because the learning process itself is somewhat of a black-box. It's just this big neural network. I guess, it's multiple boxes on the board. But you can think of this as just one kind of monolithic neural network. And we don't have a lot of-- we don't see a lot of what happens inside this neural network to actually interpret how it's actually adapting or learning from these examples. And then my second question is, once the hypernetwork produces phi i, does phi i get updated in all phi? Does this always happens over here or is it just static? It's static. So it outputs it once. And then we make predictions with that. And so we can predict this example and also make predictions for the other test examples. And it will only be changed once we pass in a new training data set. Yeah? One thing that I'm kind of confused about is that the labels of your dash [? in ?] [? this, ?] they were just kind of random, right? Where can this [? meta ?] [? facing ?] setup, when we're trying to make these labels, then how is it [? therefore ?] I cannot start chucking and bouncing them to-- OK, both images are equal. So is it kind of it's just marking those two while we will [? fall ?] [? apart? ?] So you're asking what is the neural network doing? 
[INAUDIBLE] So I don't-- I guess, I don't have a-- because this is black-box, I don't have a great sense for exactly how, in practice, the neural network is implementing it, is choosing to implement this. It does have to store information about how you map images to labels and what is zero-- what does label zero mean from an image. Or it could be, I mean-- there's a lot of things that it could be doing. It could be-- actually, if it's a really huge neural network, it could be mimicking something kind of like gradient descent, for example. So it's a little bit hard to understand in general. Yeah? I think my question comes from the multi-task learning. So here, we know that we use some classes to train our neural network. And the [? best ?] [? is ?] [? on ?] [? process. ?] Is there any way, again, to know which classes would actually help during treatment? Which classes would actually help to use during training so you get the better result? Yeah. So the question is, is there a way to tell if certain character classes or certain tasks would be helpful for a particular test task? Not in general. So for languages, we do not. Languages, we have differences like a common set. But when there's a huge bunch of data in different variation of classes, how would we know? Yeah. So if you have tons of tasks then, there's going to be more heterogeneity in terms of when one task might be helpful for the test task. But yeah. In general, we don't really know. It's kind of similar to fine tuning in multi-tasking. And maybe you should figure it out for your project and then tell us all how to do it. Cool. So I went through a lot of that on the whiteboard because I felt like that would be easier and kind of more intuitive to walk through. The slides have basically everything that we went through on the whiteboard. So version 1 we're going to be training a neural network to output these parameters. And then we're going to be predicting test data points using a neural network parameterized by those parameters by phi i. And you can think of the first network as kind of the learner, it's learning from these training data points. And the second network as something that's actually making predictions using those parameters. Here's the RNN that we drew on the board. This is representing f theta. And then here's the second part of the network that we draw on the board as well from the test examples. And then we can train the parameters of theta with standard supervised learning. In terms of what the loss function looks like, negative log likelihood, which is cross entropy loss or mean squared error, is the loss function that's typical to use for supervised learning. And so that's basically the loss function that will be applied right here to back propagate through. And you can-- if you think about this as the loss function for task I, then you can basically view this overall loss function as making a prediction for task I, and then evaluating that on the test data points for task I, and then averaging that across the tasks. I want to go through this somewhat quickly because this is basically stuff that we all went through on the board. We sample a task. We can also sample a mini batch of tasks rather than just one task. We can then sample disjoint data sets. So this would be like our training set and our test data set from all of the examples that we have for a particular task. I should mention maybe here that it's worthwhile to kind of mix and match these. 
So you shouldn't always use this as your test examples and always use this as your training examples. You can get a little bit more data for meta-training by randomly sampling what you use as a training example and what you use as a test example. And so kind of visually what this looks like is, maybe these are all the examples you have for one of your tasks. You want to allocate this into training examples and test examples. And so you'll kind of just randomly assign them to a training set and do a test set. Cool. And then once you have your training set and test set, you can compute the parameters and then update-- compute the parameters with the forward pass and then update the meta parameters with the backward pass. And then repeat. Cool. We talked about how outputting all the neural network parameters isn't very scalable. And so instead of doing that, you can just only output the sufficient statistics and use another basically replaced phi i with h i and have this be a lower dimensional vector. And how that can be combined with some other meta parameters in order to make a prediction. One thing that's kind of intuitively nice about this low dimensional vector is you can think of it as representing contextual task information. And so remember, back to multi-task learning where we had this task descriptor z i. You could sort of think of h i as a form of task descriptor that we're inferring from data. And then we're just going to be passing that h i into a neural network in order to make predictions for that task. Yeah? How many tasks in the test set do you need to test on to make sure it's effective at working on [? top? ?] Because they might be quite different. So how do you know that your test [? task ?] is a good representation of how well [? it ?] [? will ?] [? do ?] on another random task? Yeah. So the question is how many test tasks should we use when we're evaluating a meta-learning algorithm? And so yeah. In general, you don't want to just have one target task. When you're evaluating, you want to have at least a few. And one thing that you can do to get a sense for how many you need is you can look at the variance of the accuracy across tasks. And so if one task you're seeing has 80% accuracy, another task has 100% accuracy, and another task is like 0% accuracy, then your variance is pretty large. Whereas if they seem to be more or less consistent across tasks, so that maybe is a good indication that you may have enough tasks to get a good reading on it. Yeah? How do we learn the big theta g for phi i? Yeah. So the question was, how do we learn the big theta G for phi i. So you can think of theta g as part of the meta parameters. And so that's actually why I'm using theta here. So we're going to optimize theta g alongside all of the rest of the parameters of theta. And so that's how we allow-- that's how we make it possible to have h i be only a low dimensional vector. Likewise, theta g may have some parameters that are shared with the other part of the recurrent neural network. Because, for example, you may have the encoder of the image be the same for these training examples and test examples. Yeah? Is there a ballpark for how much less data do you need for each task with meta-learning versus training an individual mental? Yeah. So is there a ballpark for how much data you need. So with meta-training and as you'll see in the assignment, you can go down to as few as one example per class if you have a lot of tasks. 
And so, per task, you can go down to something very, very small if you have a lot of task and if they have shared structure. That said, if you only have one example per class, you're still going to-- your total amount of data may still be somewhat similar to kind of single task learning in that you may have a lot of tasks. And so you may have a small amount of examples per task but a lot of different tasks. So your total data may be somewhat similar. And then, of course, the amount of data you need will depend a little bit on the complexity of the problem you're solving. Cool. And then, I thought I would talk a little bit about architectures. So we talked a little bit about RNNs before. One of the first papers that introduced this, at least, more modern notion of meta-learning with Black-box neural networks is this paper right here. And this is from 2016. This is actually sort of in response to a challenge-- basically, the Omniglot challenge being released. And some cognitive scientists saying, neural networks can't do few-shot learning. And this paper showed that, oh, neural networks can actually do few-shot learning. And it was able to do pretty well on the Omniglot data set. They used LSTMs and neural turing machines. I wouldn't necessarily recommend using neural turing machines. They're, I think, a little bit of a relic of the past. But yeah. It's worth mentioning this is one of the first papers. And in your homework, I believe, we have using LSTMs in the homework. But if you want to, you can also play around a little bit with transformers. Another architecture, which is called the deep set architecture, is to-- instead of having a recurrent neural network, instead pass all of your examples through a feedforward neural network to get an embedding of each of your images and then average those embeddings to get a summary vector. I already answered this earlier, but I guess I can see if you're paying attention which is, why would something like feedforward and then average be better than a recurrent neural network? [? Because it is ?] [? permutation ?] [? invariant. ?] Yeah. So it's permutation invariant. So yeah. So averaging is agnostic to the order of the things that you're averaging. And so this sort of architecture is permutation invariant. This is an architecture called deep sets. One thing that's actually pretty cool about deep sets is, there's a result that shows that-- for some conditions on the width and depth of the network, these kinds of architectures can represent any permutation invariant function. And so while it may seem actually somewhat limiting, these architectures are actually very expressive as long as you only care about permutation invariant functions. There's also other external memory mechanisms that have been proposed. And then this-- there's also papers that have proposed things that use attention and convolution. This paper came out before, I think, transformers. Or certainly before transformers were popular. And so they probably would have called it a transformer or something. And then there's probably more recent papers that actually use things like transformers as well. And then these methods can do pretty well. So on the Omniglot data set, if you use this architecture which uses attention and convolutions interleaved, on Omniglot, on five-way one-shot Omniglot, you can get 99% accuracy. If you have five examples, you can get even higher 99.78% accuracy. 
If you make it a little bit harder and you're now doing a 20-way classification problem, then you can still get accuracies in the high 90s. So Omniglot is not too difficult. And then if you look at something like Mini-ImageNet which is a smaller version of ImageNet with smaller images, you can start to get somewhat reasonable one-shot and 5-shot accuracy but these numbers are a lot lower than Omniglot numbers. And they're also certainly below state of the art. Yeah. But we'll see some things that are closer to state of the art in the next two lectures. Yeah? Are there [? any ?] [? that ?] [? would ?] make viable external memory systems are being used for meta-learning? How will they help in meta-learning? Yeah. I certainly don't think that they are necessarily-- things like neural training machines and these other external memory mechanisms, I guess, I wouldn't necessarily recommend them. I think that they're possibly interesting from a neuroscience standpoint because-- I guess, I'm not a neuroscientist so maybe I shouldn't talk about this. But I think that there are aspects of the brain that kind of store memories. And so if you want to think about key value storage and kind of storing things in that way, these things may be interesting. They also, perhaps, have some benefits from the standpoint of-- from the standpoint of maybe kind of memory footprint. Because if you can very efficiently store that information, then maybe you don't need this really big transformer. But we haven't seen-- that's somewhat speculative and we really haven't seen a lot of benefits empirically from using them. Cool. So in homework 1, there's going to be some key things that you're-- the two key things that you'll do is, implement the data processing. So implement, how do you actually like split things up into tasks and examples per tasks. And then also implement a pretty simple Black-box meta-learner and train that on Omniglot and get some results. Cool. So in terms of, just to also sum things up and talk about some of the pros and cons of this approach. One thing that's really nice about these this kind of approach to meta-learning is that these big neural networks are quite expressive and they can represent lots of different learning procedures. RNNs can represent-- if they're large enough, they can represent really any function. They're also really easy to combine with a variety of problems. Today we talked about supervised learning where you back propagate the loss with a supervised objective. But it's also very easy to plug this into reinforcement learning objectives as well. Because maybe it's outputting not the label, but an action or a [? q ?] value. And you can train this just like you would train a recurrent neural network with kind of reinforcement learning objectives. In terms of the downsides, these big networks are somewhat complex and they have to learn something pretty difficult, which is learning from data points. And this can lead to somewhat of a challenging optimization problem. And as we'll start to see in some of the next lectures, trying to learn how to learn from scratch, from kind of randomly initialized RNN weights can be pretty difficult in comparison to giving it some of the structure of learning algorithms that we already know. And so, as a result, because it's a hard optimization, these can be less data efficient than other meta-learning approaches. Cool. So there are other ways that we can represent f theta. 
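Since the permutation-invariant alternative came up a couple of times, here is a small sketch of the deep-sets style aggregation described above, again in the same hypothetical PyTorch style: each support pair is embedded by a shared network, the embeddings are averaged into an order-independent context vector, and the query prediction is conditioned on that context.

```python
import torch
import torch.nn as nn

class DeepSetMetaLearner(nn.Module):
    """Embed each (x, y) support pair, average the embeddings, condition the classifier on the average."""
    def __init__(self, img_dim=784, n_way=3, context_dim=128):
        super().__init__()
        self.n_way = n_way
        self.pair_encoder = nn.Sequential(nn.Linear(img_dim + n_way, context_dim), nn.ReLU(),
                                          nn.Linear(context_dim, context_dim))
        self.classifier = nn.Sequential(nn.Linear(img_dim + context_dim, context_dim), nn.ReLU(),
                                        nn.Linear(context_dim, n_way))

    def forward(self, support_x, support_y, query_x):
        y_onehot = torch.nn.functional.one_hot(support_y, self.n_way).float()
        pair_emb = self.pair_encoder(torch.cat([support_x, y_onehot], dim=-1))   # (B, N*K, context_dim)
        context = pair_emb.mean(dim=1, keepdim=True)                             # averaging: order doesn't matter
        context = context.expand(-1, query_x.shape[1], -1)
        return self.classifier(torch.cat([query_x, context], dim=-1))            # (B, Q, n_way) logits
```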
And so next time on Monday, we'll talk about what if we treat f an optimization problem and actually embed optimization into f rather than having this big RNN. Cool. So now I'd like to talk a little bit about large language models and things like GPT-3, Chinchilla and the latest Gopher, PaLM, the biggest language models these days. And we're going to focus on GPT-3 just because it's a fairly canonical example. And they also have lots of really cool qualitative examples in the paper that we can draw from. And things like GPT-3 are a lot like a Black-box neural network. But one thing that's a little bit different from here that we'll talk about is that they have-- few-shot learning kind of emerges in a way that wasn't explicitly-- I think that when you train a language model with a language modeling objective, you're not kind of setting out for a few-shot learning. You're not setting up the data to train it to do few-shot learning. It's something that kind of emerges from the data in a somewhat more surprising way. And so we'll talk a little bit about emergent versus the kind of few-shot learning we see here. So GPT-3 is a language model. You can think of it as a Black-box meta-learner, or a kind of a big RNN, well, transformer neural network that's trained on language generation tasks. And you could think of it as where D train corresponds to a sequence of characters. And D test corresponds to the following sequence of characters. And so D train is what the language model is being conditioned on and D test is what it's being trained to generate. The data set corresponds to crawled data from the internet, English language Wikipedia, and to book corporas. And the architecture is a very large-- well, I guess, by these standards-- by these days it might be considered somewhat small. But in my opinion, a somewhat large transformer neural network. And So it has 175 billion parameters, 96 layers, and a pretty large batch size. What do the different tasks correspond to? So this is something that-- this is where kind of emergent few-shot learning comes into play. So you can see it do tasks like spelling correction, simple math problems, translating between languages. And these tasks are, in some ways, somewhat emergent. But also, in some ways reflective of the kind of data that you see on the internet. And the cool thing about things like language models is that these seem like-- on the surface these seem like very different tasks, like math, versus machine translation, versus spelling. And we could actually have all these tasks solved by a single architecture. So the way that you can do that is to put them all in the form of text. Put them all in kind of a common language for the network. And that's a good idea because it's also very easy to get a lot of data of text on the internet. And so what this looks like is you have an inner loop and an outer loop. So in here, kind of a forward pass of our RNN was an inner loop. And training that RNN to learn was our outer loop. And so, likewise, here in context learning, kind of running the RNN on text is your inner loop. And then across. When you see-- across tasks and optimizing the neural network across to these different tasks is the outer loop. And that's kind of the learning process of optimizing the parameters of your big transformer model. And so these are kind of the example tasks that we saw before all in text form. Cool. And so they trained this on data from the internet, and Wikipedia, and so forth. And you can get some pretty cool results. 
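Before looking at the examples, here is a minimal sketch of what "putting everything in the form of text" can look like in code; the template is purely illustrative and is not GPT-3's exact prompt format.

def build_prompt(examples, query_input):
    # examples: list of (input_text, output_text) pairs playing the role of D train.
    # query_input: the D test input whose output the language model should generate.
    blocks = []
    for x, y in examples:
        blocks.append("Input: " + x + "\nOutput: " + y)
    blocks.append("Input: " + query_input + "\nOutput:")
    return "\n\n".join(blocks)

Conditioning on a string like this is the inner loop; training the transformer on lots of internet text, which happens to contain many such patterns, is the outer loop.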
So you can see something like one-shot learning where you give it a dictionary definition saying, to screech something as to swing a sword at it. An example of a sentence that use the word screech is, and then it gives you a sentence. We screeched at each other for several minutes and then we went outside and ate ice cream. So this is one example of one-shot learning. You could also do few-shot language editing. Where here you give it 5-ish examples of poor English versus good English. And once you give it those examples, and then give an example of poor English input, I'd be more than happy to work with you in another project. The good English output that corresponds to that, that it gives you is, I'd be more than happy to work with you on another project. And likewise, you see another example for the same prompt at the bottom. And then, it can also do things that aren't really considered few-shot learning tasks. So you can ask it to write an article for you given a title and a subtitle. Cool. So some things about GPT-3 is that the results are really impressive. And also, since GPT-3 has come out, there's actually some models that do even better by a pretty significant margin. GPT-3 and other more recent models are also far from perfect. They make mistakes. So GPT-3 itself, even the largest model has an accuracy around 50% to 60% on few-shot learning tasks. I think that some of-- I think that things like Chinchilla are maybe closer to high 70s. But still, have a lot of room for improvement. They also fail in somewhat unintuitive ways. So if you ask how many eyes does a giraffe have? Giraffe has two eyes. How many eyes does my foot have? It starts telling you things like your foot has two eyes, and a spider has eight eyes, and so forth. So yeah. It doesn't fail in the same way that humans fail. And the last note here is that the choice of D train, basically how you prompt the model, how you give it these examples is also quite important. And we'll see an example of this. You'll get to play around with this a little bit in homework 3. And then the last thing I wanted to mention about these models is that there's some recent research on actually trying to understand when will few-shot learning emerge from these data-- from these kinds of models that are trained on text data. And the research has shown that both aspects of the data and the model is important. So it has shown that having temporal correlation in the data is important. And for example, if D train and D test are completely independent from one another, they won't actually learn to use that context. And it will just ignore that context. And so they think of this as kind of having a kind of bursty data in time versus non-bursty data. They also find that having a dynamic meaning of words is helpful. If there's a word that kind of means different things in different contexts, like the word wicked might mean something like evil in one context and something like really cool in another context. And that can help with few-shot learning because that encourages the model to actually pay attention to the context when making future predictions. And then, also there's some work suggesting that different aspects of the model are important. The biggest number one thing is that the model has a high capacity. People have found that transformers are much better at giving you capacity than things like LSTMs and RNNs. And so here's a plot of few-shot learning accuracy for these different architectures. 
And then also larger transformers are better than smaller transformers. In the GPT-3 paper, they showed that a 175 billion parameter model was much better at few-shot learning than some of these smaller models. Cool. So yeah. That was it for today. We went over Black-box meta-learning and saw how we can train neural networks to do few-shot learning. As a couple reminders, your project group form is due on Monday and homework 1 is due on Wednesday next week.
AI_LLM_Stanford_CS229
5분_전_새로운_AI_로봇_인텔리전스_방법_공개_MIT_스탠포드.txt
Introducing Diffusion KSP, a brand-new AI framework that just unleashed robots' decision-making skills to reach unprecedented heights in manipulation and generalization. Now layer this on top of the next frontier in computer vision with AI doppelgangers, and you don't have just another incremental improvement, but a breakthrough that's already outshining baseline methods in several challenging domains. But what makes this AI robot model such a big deal? For starters, have you ever wondered how robots can learn to do anything that humans can? In most existing methods, robots usually consider a single constraint type, such as object stability or collision avoidance, when planning actions. This leads to a series of separate specialized algorithms developed for each situation. But what if a robot could generalize across multiple constraints, enabling more intelligent and adaptable decision making? Well, this is precisely where Diffusion KSP comes into play. By employing constraint graphs, the MIT and Stanford teams have laid the foundation for a more versatile and robust system. In essence, these graphs map out a network of decision variables, like a robot's gripping stance, placement, pose, or trajectory, that must satisfy various constraints. But it doesn't stop there. The research employs constraint solvers based on diffusion models, which can be trained to handle any class of constraint, such as avoiding collisions or maintaining object stability. But what does all this technical jargon mean in practice? Simply put, Diffusion KSP offers a generalized framework for training robots. It combines a suite of pre-trained diffusion models that are skilled at dealing with particular types of constraints during real-world tasks. These models work in tandem to provide a comprehensive solution that satisfies multiple constraints simultaneously. Imagine a robot that can not only determine the best grip for lifting an object, but also the most effective route to move it, all while avoiding collisions and conserving energy. That's the kind of efficiency Diffusion KSP aims to achieve. But what's most impressive is that its ability to generalize across different scenarios isn't just theoretical. It's been practically proven. The researchers tested the model in four demanding domains, including two-dimensional triangle packing and three-dimensional item packing by robots. Across the board, Diffusion KSP outclassed existing methods in both speed and adaptability to new challenges. The result: a robot that can think on its feet, adjusting to a myriad of situations without skipping a beat. Yet the researchers are not resting on their laurels. The current research only scratches the surface, focusing on constraints with a fixed number of variables. The next logical step is to explore variable arity, adding another layer of complexity and adaptability. Moreover, integrating natural language instructions into the model is another avenue the team is keen to explore. Such advances could make robots not just more efficient, but also more intuitive and user-friendly. And what about real-world applications, from automated manufacturing to health care? Diffusion KSP has the potential to transform the way robots interact with their environment. Its generalization capabilities could pave the way for robots that can adapt to a wide range of tasks, from setting a dinner table to intricate surgical procedures. It's a leap from specialized algorithms to a cloud of intelligent, adaptable robotic solutions.
Therefore, this will result in huge progress in the fields of both robotics and artificial intelligence. With its cutting-edge capabilities, it promises to redefine the way we think about robotic reasoning and planning. The horizon for robots is broader and brighter than ever. Meanwhile, hot on the heels of MIT and Stanford's game-changing Diffusion KSP framework, researchers from Cornell and Tel Aviv universities have unveiled a compelling breakthrough of their own in the realm of computer vision. Called Doppelgangers, this novel approach targets one of the most challenging problems in 3D image reconstruction and geometric vision tasks: visual disambiguation. If Diffusion KSP is set to revolutionize how robots think, then Doppelgangers aims to radically sharpen how computers see. Imagine trying to distinguish between two nearly identical twins, right down to the freckles and hairstyles. That's precisely the kind of challenge computers often face when interpreting images of remarkably similar 3D surfaces. Failures in this task can result in inaccurate 3D models, significantly impacting a wide range of applications from virtual reality to autonomous vehicles. Now enter Doppelgangers. The Cornell and Tel Aviv research team tackled this problem by creating a unique dataset that's aptly named Doppelgangers, because it features pairs of images that are either duplicates or else strikingly similar yet distinct. The dataset also leverages existing image annotations from the Wikimedia Commons database, allowing for the automatic generation of a vast set of labeled image pairs. But creating the dataset was just the first hurdle. The real innovation lies in the specialized network architecture designed to interpret it. When given a pair of images, the system first employs feature matching methods to identify key points and matches between the two. These points are then masked, aligned, and fed into a specially designed deep learning classifier. This classifier then produces a probability score indicating the likelihood that the given pair of images represents the same 3D surface. Where conventional models failed, the Doppelgangers architecture shines: in tests, it outclassed baseline methods and alternative designs by a significant margin. The team also explored the utility of their method as a pre-processing filter for structure-from-motion pipelines like COLMAP, demonstrating its far-reaching applications in enhancing the reliability and precision of 3D reconstructions. So what does this mean for the future of computer vision? The implications are monumental, from enhancing facial recognition systems to improving the navigation of autonomous vehicles in intricate environments. Doppelgangers brings a new level of nuance and accuracy, much like the MIT-Stanford research in robotic decision making. This breakthrough expands the realm of what's possible in artificial intelligence, adding another layer of complexity and adaptability in a rapidly evolving landscape where robots are becoming smarter and computer vision sharper. These parallel advances from MIT, Stanford, Cornell and Tel Aviv universities are shaping up to be key milestones in the AI revolution. One makes robots think more efficiently, the other helps machines see more clearly. Both point toward a future where artificial intelligence isn't just mimicking human abilities, but also enhancing them in ways we've yet to fully comprehend.
So get ready, because the next frontier in AI isn't just about more data or faster computations, but instead it's about developing intelligent systems that adapt, interpret and understand the complexities of the world as never before. This technology will enable robotic surgeons to perform intricate operations with far greater precision, and self-driving cars will navigate through the most challenging environments with ease. In this future, the line between what's humanly possible and what's achievable through AI will blur raising ethical, societal and technical questions that we have yet to even imagine.
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_4_Advanced_concentration_inequalities.txt
So last time, in the last three lectures, we have talked about the basics of uniform convergence. I guess just a very quick review. So I think we have proved that the excess risk, this is lecture 2, is bounded by this. This is a difference between empirical and population. Can you share your screen to the Zoom? Oh, right. Thanks. Sorry, I forgot. Thanks for reminding me. It's going to be a problem if I forget to do that. I'll do that. I didn't join a Zoom meeting here. Sorry. Cool. I guess now probably it's working. Thanks for reminding me. And so we have shown this. So this is what we saw in one of the claims in lecture 2. So basically, this is saying that you only have to bound the difference between the population and the empirical for all theta, right? So the most important thing is the second term because the first term, we have shown that it's close to-- it's something bounded by 1 over square root of n. So the goal is to show the second term. And we have discussed how to do it for finite hypothesis classes and also how to do it for infinite hypothesis classes with a relatively brute-force discretization technique. And so in the next few lectures, I guess, we are going to-- as I mentioned before, so we're going to have some other techniques to deal with the second term so that we can have more informative bounds. And today, we are going to take a small-- in some sense, a small digression, or in some sense, a small preparation for some of the tools that we're going to use for the next lecture. So in the next lecture, what we're going to do is that we're going to bound the expectation of this. So this is the next lecture. And this is expectation over the randomness of the data, right? So this quantity itself is a random variable, right, because it depends on the data, the training data you have, because L hat depends on the training data. And next time, we're going to upper bound this by some quantity which is called Rademacher complexity. And so today, we're going to do something that is useful for-- it's a useful preparation for doing this. So I guess here is the plan. So next lecture, we're going to do this. And then next lecture, we're also going to deal with the difference between this and this expectation. So that's the plan for the next lecture. And today, what we're going to do is we're going to have some tools that prepare us for proving quantities like this, so that next time we don't have to have a small section dealing with the tool in the middle. So I'm trying to prepare us with the right tool for the next lecture. So a more concrete overview is the following. So the goal for this lecture is the following. So suppose you have some random variables x1 up to xn. So they are independent random variables. So we're going to show two types of inequalities. So the first type of inequality is to show that if you take the sum of these kinds of random variables, they are concentrated around the expectation. Basically, Hoeffding inequality is one type of this inequality. We're going to extend Hoeffding inequality to something more general. And the second thing is that we're going to show that for certain types of functions F, if you look at a general function, not necessarily just the sum of these random variables, of course, you have to have some restrictions on what the functions F will look like. But suppose you have the right restriction, then you can show that even if you have a function of x1 to xn, it's still concentrated around the expectation of this function.
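In symbols, the two types of statements we are after are roughly of the following form, with the precise right-hand sides to be filled in during the lecture:

\[
\Pr\Big( \Big| \sum_{i=1}^n X_i - \mathbb{E}\Big[\sum_{i=1}^n X_i\Big] \Big| \ge t \Big) \le \text{(small)},
\qquad
\Pr\Big( \big| F(X_1,\dots,X_n) - \mathbb{E}\big[F(X_1,\dots,X_n)\big] \big| \ge t \Big) \le \text{(small)}.
\]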
And this will be particularly useful for showing this inequality. Maybe let's call this 1 here. So because you can-- in some sense, maybe for the first type, this corresponds to L hat theta is close to L theta, because L hat theta is of the form like x1 plus x2 up to xn. L theta is the expectation of L hat. And the second type of inequality will be useful for proving what I said, this inequality 1. So because-- if you care about something like this, it's roughly equal to its expectation. This is L hat. So then you can view this entire thing as a function, viewed as a function of your training data-- of your IID training data. So this is a function of x1 up to xn, where these are the training data. So basically-- so these kinds of inequalities are called concentration inequalities. The key kind of idea is that if you have a family of IID random variables, then-- first of all, if you take the sum of them, they become like Gaussian and they become concentrated-- kind of Gaussian like, and they concentrate around the mean of this sum. And the same thing also happens if you apply certain kinds of functions on x1 up to xn. I will tell you what kind of functions will have these properties. And this kind of inequality is not only useful for what we are going to do next, but also generally pretty useful for machine learning, like for statistical learning theory. Because in some sense, if you think about what happens in learning theory, in many cases, basically you are trying to deal with the difference between the empirical distribution and a population distribution, right? So these things will show up in many, many different cases. And that's also one of the reasons why I kind of isolate this part as a single lecture to talk about technique. If it's just some tool that is only useful for one lecture, then we can just invoke that as a lemma. But here, I think it's more useful than that. So that's why I want to kind of also show you how to prove some of these things and also what kind of a-- I'm not going to prove all the inequalities I'm going to show today, but I'm going to talk about some of the advanced versions of the inequalities so that you know that they exist. And then when you need to use them, you can kind of find the right tools. So that's the overview for the lecture. So I guess-- so let's start with the simple version, right, where we're going to have a sum of independent random variables. And we have discussed this before about the-- in the context of Hoeffding inequality. I'm going to have kind of a more comprehensive discussion about this. So let's consider you have a random variable Z, which is equal to the sum of x1 up to xn, right? xi's are independent. And so a warm-up is: what if you don't use the structure of Z, right? So obviously you know that Z is a sum of independent random variables. What if you ignore the structure? So what if we ignore the structure? So you still have something that you can show-- you can still have some inequality that can show that Z is close to the expectation. So here is the inequality, which is called Chebyshev's inequality. I think probably you've heard of this in some probability class. So the inequality is saying that the probability that z deviates from the expectation of z by some amount t is less than this thing, the variance of z over t squared. So it's pretty intuitive, right? So if the variance of z is small, then you have less deviation from the expectation.
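For reference, written out, Chebyshev's inequality says:

\[
\Pr\big( |Z - \mathbb{E}[Z]| \ge t \big) \;\le\; \frac{\operatorname{Var}(Z)}{t^2} \qquad \text{for all } t > 0 .
\]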
And of course, if t is bigger, right, if you look at a bigger window, then there's a small probability-- smaller probability outside the window, right? So in some sense, if you draw this, it's something like-- suppose you have a distribution that looks like maybe this, and the mean is here, expectation of z. And what this is saying is that if you look at the standard deviation of z-- right-- so suppose you-- and you look at this, standard deviation of z. And suppose you take t to be something like standard deviation of z times 1 over square root of delta. You plug into this inequality, and what you get is that the probability that you deviate by more than this t-- maybe let's just write it explicitly, standard deviation of z over square root delta-- is less than delta, right? So this is saying that if you multiply standard deviation of z by some quantity, by something like here-- so suppose this is standard deviation of z times 1 over square root delta, where delta is less than 1-- then the probability in this tail is less than delta, right? So this is, in some sense, the weakest form of concentration that you always have without using any structure about the random variable z. However, this is not very strong, as we will see. Because if you think about what happens with Gaussian, right? So let me see whether I'm missing a constant here. So if you think about-- let's see. So if you think about a Gaussian distribution, suppose you know z is Gaussian. So Gaussian z. Then what you know is that-- so suppose z is something like from N(0, 1), right? It doesn't matter whether the mean is 0. Let's say, suppose the mean is mu. Then what you know is that z minus expectation of z is less than standard deviation of z times square root of log 1 over delta. I guess maybe, let's say, this is just a general Gaussian distribution where the standard deviation is sigma. And so with probability at least 1 minus delta, you have this. So basically, if you have a Gaussian distribution, then what you have is that for the same tail probability delta, here you have a stronger bound. It's square root of log 1 over delta instead of 1 over square root of delta. So in some sense, what-- I guess I'm not showing this-- I haven't proved this for you, but you can do the calculation. So in some sense, this is saying that the tail-- you can-- the tail decays faster for Gaussian. So basically, for Gaussian, you only have to multiply a little bit. So suppose this is Gaussian, you only have to consider standard deviation of z times square root of log 1 over delta. Then you know that the rest of the part has probability less than delta. But if you don't know it's Gaussian, then you have to be a little bit more generous in terms of the interval that you draw, OK? So in some sense, the goal that we're going to have is that we're going to show that if your z is a sum of random variables, then it's more like Gaussian, instead of a general-- the worst case z. Or you have a better bound like this instead of the bound like this from the Chebyshev's inequality. And also if you see the-- if you look at it more carefully on the consequences of these two inequalities-- so maybe let's call this number 3 and this number 4. So if you have number 4, then you can-- if you take delta to be something like inverse poly n, then you'll know that with high probability, so at least 1 minus 1 over poly n, z minus expectation of z is less than standard deviation of z times square root of log n. So basically, you only lose a log factor if you want to make the probability very high.
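To put the two consequences side by side: with probability at least 1 minus delta, and up to the exact constants,

\[
\text{Chebyshev:}\quad |Z - \mathbb{E}Z| \le \operatorname{std}(Z)\cdot \frac{1}{\sqrt{\delta}},
\qquad\qquad
\text{Gaussian:}\quad |Z - \mathbb{E}Z| \le \operatorname{std}(Z)\cdot \sqrt{2\log(2/\delta)} .
\]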
So if you want to make a high probability event, then you only have to multiply by a square root log n to the standard deviation, and then the rest of the probability becomes very small. So if you use number 4-- if it's Gaussian-like-- then you get this. But if you use number 3, then if you take delta to be-- if you take delta to be inverse poly n, then what happens is that with high probability, you have this statement. With high probability, z minus expectation of z is less than std of z over square root delta, which is std of z times poly n. So there's a big difference between the additional factors here. So if you compare these two factors, you have a big difference. So that's why we want the so-called faster tail, or the smaller tail, like in inequality 4 instead of inequality 3. And a slightly alternative view, which we're going to kind of, in some sense, switch between these two views. They are equivalent. But we're going to switch this very often. So the alternative view is that you look at the probability that z minus expectation of z is larger than t. So for Gaussian, what you have is that if you look at this, if you view this quantity like this, then you have that this is less than exponential of minus t squared over 2 times the variance of z. So now, you can compare this inequality, maybe let's call it 5, just temporarily, versus the Chebyshev inequality, 1. So if you look at 1, then this is-- the right hand side decays with t in a polynomial way. So it's 1 over t squared. And if you look at 5, it decays exponentially fast as t goes to infinity. So that's another way to see the differences, all right? So the tail probability for the Gaussian distribution is decaying very fast, exponentially fast. But if you use the Chebyshev inequality, you only get a polynomially fast decaying inequality. And that's another way to see the differences. So we're going to look for the faster tail, right? That's our goal. So the goal is, to repeat: z-- this is like a Gaussian. That's basically our goal. But of course, how do you say, in what sense is this like Gaussian? There are multiple different versions. We're going to formalize that. What does it mean to have a more Gaussian-like tail? So we-- to do this formally, let's start with some definitions. So actually, we're going to define what is meant by Gaussian-like to start with. So let's say a random variable x-- this is a one-dimensional random variable-- with finite mean mu, which is equal to expectation of x, is called sub-Gaussian with parameter sigma if the following is true. Let me write it down. It's not very intuitive when you first look at it. So I don't-- I'm not expecting that you can see what this really means. But this is the definition for something being close to Gaussian. And this is not very intuitive. But the corollary is the following. So a corollary is that x being sigma sub-Gaussian implies the following: the probability that x minus mu is larger than t is less than 2 times exponential of minus t squared over 2 sigma squared, for every t. So the corollary is probably intuitive, right? So if x is sub-Gaussian, you have this exponentially decaying tail bound. So this right hand side decays very fast in t. And as t goes to infinity, it's actually not only exponential in t. It's exponential in t squared. So this is, in some sense, a much more intuitive definition of sub-Gaussian.
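For reference, the formal condition being written on the board is presumably the standard moment-generating-function bound, together with the tail bound just stated as the corollary:

\[
\mathbb{E}\big[ e^{\lambda (X - \mu)} \big] \;\le\; e^{\sigma^2 \lambda^2 / 2} \quad \text{for all } \lambda \in \mathbb{R},
\qquad\Longrightarrow\qquad
\Pr\big( |X - \mu| \ge t \big) \;\le\; 2\, e^{-t^2 / (2\sigma^2)} \quad \text{for all } t \ge 0 .
\]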
But the formal definition above will be more useful for the mathematical cleanness. But you can basically think of these two as equivalent. Actually, they are somewhat equivalent. Before talking about that, let's say, if you recall, that if x is Gaussian, if it were literally Gaussian with variance sigma squared, then this inequality-- maybe let's call it 6-- is true. I didn't prove this, but this is something relatively standard. So if you have a Gaussian with variance sigma squared, then you can-- if you do some kind of calculation, do some integral, which is not super trivial, you have to do some calculation, but believe me, 6 is true, right? So basically, sigma sub-Gaussian is saying that you have the same property as a Gaussian random variable with variance sigma squared. And also, because of this, the sigma squared in the sub-Gaussian definition is often called the variance proxy. So in some sense, if you are sigma sub-Gaussian, then this sigma squared is kind of like-- you can think of it as some kind of pseudo variance. It's not exactly the variance, but it's a kind of alternative version of the variance, which actually is probably more important than the variance itself. So that's the rough intuition. And also, regarding these two definitions, this corollary-- maybe let's call this 7-- so 6 and 7 are, in some sense, equivalent definitions up to some small constant, up to some constant factor. So what this means is that if you use 6 as the definition-- suppose you used 6 as the definition, or suppose you satisfy 6-- then you know that x is O(sigma) sub-Gaussian under the definition-- under the formal definition. So in some sense, if you don't care about a constant factor in front of the variance proxy, then these two definitions are-- 7 implies 6 and 6 also implies 7, up to a small constant loss. So basically, the way that I always think about this is that I always think about 6 as the intuitive way that I think about it. But when I really use the-- when I really need to use some properties about sub-Gaussian, I mean, when I really want to prove something, I typically use 7. And also, I didn't tell you why these two equations are somewhat related, right? It still sounds mysterious why they are related. And here is the reason why they are related. I guess what I'm going to do is that I'm going to show here, 6 implies 7. I'm not going to show 7-- sorry, my numbering is different from my numbering in the notes. That's why I'm confused. I'm going to show 7 implies 6. But 6 implies 7 would require a different proof. But if I show 7 implies 6, you probably would kind of get a little intuition why they are related quantities. So the kind of general intuition is the following, right? So if you look at the Chebyshev inequality, so Chebyshev inequality, how do you prove Chebyshev inequality? So the way that you prove Chebyshev inequality is something like this. So you say the probability that z minus expectation of z is larger than t is equal to the probability that z minus expectation of z, squared, is larger than t squared. And then you use the so-called Markov inequality. You say that this is less than the expectation of this random variable over t squared. So the last step is using this Markov inequality. Is it called Markov? Yes, I think it is. So which is saying that if you look at the probability of some random variable, maybe it's called y, being larger than t, this is smaller than the expectation of y over t.
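For reference, Markov's inequality for a nonnegative random variable, and the Chebyshev derivation just sketched, read:

\[
\Pr(Y \ge t) \le \frac{\mathbb{E}[Y]}{t} \quad \text{for } Y \ge 0,\ t > 0,
\qquad\text{so}\qquad
\Pr\big(|Z - \mathbb{E}Z| \ge t\big) = \Pr\big((Z - \mathbb{E}Z)^2 \ge t^2\big) \le \frac{\mathbb{E}\big[(Z - \mathbb{E}Z)^2\big]}{t^2} = \frac{\operatorname{Var}(Z)}{t^2}.
\]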
Because if you have so much mass larger than t, then your expectation has to be high. That's basically intuition. And you can see that the way to prove Chebyshev inequality is that you raise to the power of 2. You raise to the second power. So that means that naturally you can also consider higher power and apply Markov inequality. Again, you get some other type of inequality. So if you consider higher moments, then what happens is that you can get something like this, right? So if, for example, you can say I'm going to look at the fourth power. So the fourth power, this is-- sorry, this is still equal to this, right? Because you just raised everything to the-- fourth power is the same event. So this is equal to this. And then you can use the Markov inequality to get expectation z minus expectation of z to the power of 4 over to t to the 4. So now, you see that you have a better dependency on t, better-- or faster better dependency-- or faster decay, maybe, faster decay in t, right, which is something we are looking for. We are finally aiming for exponential decay in t. But now, we get something better than t squared. We get t to the 4. So but of course, the trade-off is that our top, this quantity on the top, might be bigger, in some sense, than the variance, right? This is the fourth power of the deviation, in some sense. So sometimes, you can get a trade-off, an implicit trade-off, right? So you get a better dependency on t, but you get a worse dependency in the numerator. And you can try to do this with higher powers, like if you raised to the power of 6, raise to power of 8, so and so forth, right? So actually there are, especially if you look at the early works in this concentration inequality, people do raise to a higher power. It turns out that there is a relatively simple way to deal with all the powers. This, which is called moment generating function. So this becomes-- this will make it cleaner. So that you don't have to deal with each of the power and see which one has the best trade-off. So the so-called moment generating functions is exactly this thing that we define in this definition, we use in this definition of defining sub-Gaussianality. So this is the expectation of exponential of the deviation between x and its expectation. So why this is an interesting quantity, so the reason is that if you look at-- if you Taylor expand this, or Taylor expand what's inside, this is exponential of something. So Taylor expansion would be that 1 plus lambda times x minus ex plus lambda squared over 2 times x minus ex squared plus-- so and so forth, right? And if you write it more formally, so this is something like sum over k from 0 to n. And the coefficient for expansion is lambda to the power k over k factorial times expectation. And you switch the expectation with the sum, the expectation of x minus ex to the power of k. So we can see that this moment generating function is really a mixture of different moments, right? You have all the moments, and every moment have a different weight in front of them. In some sense, this is saying that what we are going to do is that we're going to change the lambda. So that you change the relative weight in front of all the moments. So that you can choose, in some sense, the right trade-off between which moment you are going to use and-- so sometimes, if you choose the right lambda, you're going to choose the right-- focus on the right moment and get the right dependency. So that's the rough intuition. 
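In symbols, the mixture-of-moments view of the moment generating function is (assuming the expansion can be integrated term by term):

\[
\mathbb{E}\big[ e^{\lambda (X - \mathbb{E}X)} \big] \;=\; \sum_{k=0}^{\infty} \frac{\lambda^k}{k!}\, \mathbb{E}\big[ (X - \mathbb{E}X)^k \big] .
\]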
And-- if you really do this mathematically, actually, it's even simpler than this. So what you can have is that if you look at probability of x minus ex other than t. Then this is-- so this is formally, the way you do the trade-off is the following. So you look at this and you say, I'm going to raise-- instead of raising to the power, I'm going to use exponential. So this is equivalent to this. The exponential version is larger than-- exponential of lambda t, right? And then now, you use Markov's inequality for this exponential version. So you get expectation e of lambda x minus ex over this Markov's inequality e to the lambda t Markov. And now, you use the definition of the sub-Gaussianality. So you say that, I guess I need to review what the definition, maybe, or you remember it. So the definition of sub-Gaussianality is that the moment generating function is bounded by exponential of lambda squared. That's the important thing, right? So there's a lambda squared in the exponent, it's exponential of some quadratic function of lambda. So unless you apply that, you get e to the sigma squared, lambda squared over 2 in the numerator. And divided by e to the lambda t. So this is e to the sigma squared lambda squared over 2 minus lambda t. And now, you can see that in the exponent, you have a quadratic. And this quadratic looks like-- wait, am I doing the right thing? So this is a quadratic that looks like this, right? Something-- maybe not-- there's maybe some-- this is a quadratic that looks like this, right? And you can choose lambda whatever you want, right? That's a free parameter. So that's why you want to choose the minimum lambda and minimize this quadratic. So that you get the best bound. So if you take the best lambda, which means that you want to find a lambda and minimize this quadratic. That's relatively easy. You can just take the-- smallest lambda is the global minimum. You just do the derivative and make the derivative to be zero. And the best lambda, it turns out to be t over sigma squared. And you plug that in, then this is equal to e to the minus t squared over 2 sigma squared. So basically, we show this is equation 7, right? This is the equation-- this is the second, this is the corollary, I think it's the equation 6. So basically, you start with the Gaussian, so here you use the definition. So basically, use the definition of the sub-Gaussianity, and you get this tail bound for this random variable. And also you can get the other side. So here, you only know that x is not too much bigger than the Ex plus t. You can also get the other side. Less than minus t, and how do you do that? The truer thing would be that you just flip. You define x prime to be minus x. And then probability that x prime minus ex prime is larger than t is the same as probability x minus Ex is smaller than minus t. But just by a simple definition, and then you apply what we have already got on x prime. And then that implies what you have-- the other side of the bound for x. But this is not super important. It's just that the two sides are basically the same for our purpose. OK. I think OK, so what happened so far? So I have defined this sub-Gaussian random variable and have argued that the sub-Gaussian random variable is basically saying that you have two ways, right? So one way is that the sub-Gaussian random variable basically means that you have a very fast tail, a very fast decaying tail. 
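The computation just carried out, recorded in one display:

\[
\Pr(X - \mathbb{E}X \ge t)
\;\le\; \inf_{\lambda > 0} \frac{\mathbb{E}\big[e^{\lambda (X - \mathbb{E}X)}\big]}{e^{\lambda t}}
\;\le\; \inf_{\lambda > 0} e^{\sigma^2 \lambda^2 / 2 \;-\; \lambda t}
\;=\; e^{-t^2 / (2\sigma^2)},
\]

with the optimal choice \(\lambda = t/\sigma^2\). That is the fast-decaying-tail view.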
Or the moment-- some kind of moment, you can think of e to the lambda x minus mu as the moment-- some kind of moment is bounded. And all the moments are bounded by something in this form. So far, I only talked about one random variable, right? But the reason why I care about this is the following theorem, which is the main point, in some sense. So it's saying that if you have-- if all the xi's, all the independent random variables, are sub-Gaussian, the sum of them is also sub-Gaussian. So you can compose. And that's the biggest benefit of this sub-Gaussianity. So suppose x1 up to xn are independent sub-Gaussian random variables with variance proxies sigma 1 squared up to sigma n squared, respectively. Then if you look at the sum of them, it's also sub-Gaussian with variance proxy the sum of sigma i squared from i equals 1 to n. So as a corollary, because the sum is sub-Gaussian with this variance proxy, you know that you have the concentration for z, which is of this exponential form. So you know that you have this tail that decays exponentially fast. So this is very important, because-- very useful and very important, because now, if you have a sum of independent variables, you want to know how fast the tail decays. You can look at whether each of them is sub-Gaussian. I'm going to prove this in a moment. The proof is actually just two lines, which is actually very cool. So but before proving this statement, let me try to give you some examples of what random variables are sub-Gaussian. It's basically-- the applicability of this theorem depends on whether you can show each of the xi is sub-Gaussian. If you can show each of the xi is sub-Gaussian with very good parameters sigma i, then the theorem applies. And you get a pretty good bound for the sum of them, right? So what random variables are sub-Gaussian, right? When is a single random variable sub-Gaussian? So there are some examples here. By the way, whether your random variable is sub-Gaussian sometimes depends on what sigma you choose, right? So if you choose bigger and bigger sigma, there is at least more chance that it can be sub-Gaussian. Of course, it's not guaranteed that if you choose sigma to be really, really big, you can be sub-Gaussian. That's not always guaranteed. But at least intuitively, it's not a binary question. It's not saying this one is sub-Gaussian, this one is not. Sometimes, it depends on what parameters you choose. So at least it's not always a binary question. For example, the Rademacher random variable-- it's also just called the Rademacher variable-- basically means that x is uniform from plus or minus 1. So this one, I claim, is sub-Gaussian. The reason is-- intuitively, the reason is that if you look at this random variable, if you look at the density, it's something like you have a spike at 1 and a spike at minus 1. So basically the density decays very fast after you go outside plus or minus 1. It decays extremely fast. It becomes 0. That's why it's sub-Gaussian. And technically, you can prove that the probability that this is larger than t is less than 2 exponential of minus t squared over a big constant c0, for c0 of order 1, maybe let's say 2. This is because if t is less than 1, then the right hand side is bigger than 1. So that's always true, right? I think I chose this so that-- yes, because the right hand side is at least exponential of minus 1 over c0. And if you take c0 to be a big constant, maybe 2, then this is larger than 1. So you verify this for t less than 1.
And then if t is bigger than 1, then the LHS is just zero. So that's why it's also true. So that's the Rademacher random variable. So that means that the Rademacher random variable is O of 1 sub-Gaussian. Sub-Gaussian with variance 1-- with variance proxy of 1. And similarly, you can prove that-- similarly, if x minus E of x is bounded by M. So basically, suppose you have a random variable where E of x is here, and you look at a window from minus M to plus M around it, all right? So suppose your density is literally 0 outside, and inside you have maybe whatever density you want. And it's literally 0 outside. Then once you go beyond the window M, then the density decays extremely fast. The density just becomes zero. So that's why this is O of M sub-Gaussian. To formally prove it, you still need to verify the definition, of course, right? But I guess it's kind of intuitive that it's sub-Gaussian, just because the tail vanishes completely after you leave the window M. And there is a stronger claim, which also gets the right constant-- here, I only have O of M. But you can actually get a stronger claim, which gets the exact right constant. So this is saying that if a is less than x is less than b almost surely, so your random variable is almost surely bounded between a and b, then you can prove this: e to the lambda x minus Ex-- this moment generating function-- is always less than e to the lambda squared times some constant. You want something quadratic in lambda in the exponent. And you care about the constant, because the constant is the variance proxy. And you can prove that this constant is b minus a, squared, over 8. And this is saying that x is sub-Gaussian with variance proxy b minus a, squared, over 4. And this is actually a homework question. It's not that trivial to prove it, actually, if you want to get the right constant. If you just want to get some constant-- I think if, instead of 8, you only want to get 2, it's relatively easy. If you want to get 8, you need to do a little bit more. We'll have some hints in the homework as well to help you to prove it. All right, so these are about-- so this is all about bounded random variables. Basically, this is saying that if you have a bounded random variable, it's going to be sub-Gaussian. And also this works for Gaussian random variables, of course, right? So a Gaussian random variable has to be sub-Gaussian, right? So as we motivated, right, if x is from N of mu, sigma squared, then I guess formally, you can prove the following. You can show that e to the lambda x minus Ex-- you can compute this. Actually, this is exactly equal to e to the sigma squared lambda squared over 2. So that's why it's sub-Gaussian with variance proxy sigma squared. I think these are the-- bounded random variables and Gaussian random variables are probably the most important examples of sub-Gaussian random variables. And just a small-- in the homework, we're going to talk about something called sub-exponential variables, which is a weaker version of sub-Gaussian random variables. And this is precisely to deal with the fact that some random variables are not sub-Gaussian, whatever variance proxy you choose. So just to give you a rough sense of what the homework is about: so here, when you define a sub-Gaussian random variable, you can-- in this corollary view, so this alternative view, here you have t squared. So you insist that the decay is exponential in t squared. And that's a relatively strong requirement.
And there are random variables that don't have this fast decay. So for example, I think one typical example would be if you take the Gaussian squared-- if you square the Gaussian, which becomes a-- I'm blanking on the name-- a chi-squared distribution, right? So that one doesn't have this fast decay of the tail. It's not t squared. It's t. So for these random variables, you still want to prove something about concentration. And you can still do it almost the same as for sub-Gaussian random variables, with some minor technical differences. And that's what one of the homework questions, in your homework 1, is about. So all right, cool, so any questions so far? OK, so now, let's prove this theorem about the additivity of sub-Gaussian random variables. So, proof of the theorem. So our goal is to show that the sum of the xi is sub-Gaussian. This is the goal, all right? So we just use the definition. We start with the definition. The definition is that if you want to prove it to be sub-Gaussian, you need to look at the moment generating function. OK. So you look at the moment generating function. And so here, you can see the nice thing about this, which is that you can-- because this is an exponential, it can decompose very easily. So you can write this as exponential of lambda x1 minus Ex1, times exponential of lambda x2 minus Ex2, and so on, inside the expectation. And again, because they are independent, you can switch the expectation. You can factorize. Each of the xi's are independent. So you can switch the expectation with the product to get expectation of e to the lambda x1 minus Ex1, times expectation of e to the lambda x2 minus Ex2. OK. So this is using independence. And then you just say, I know that each of these random variables is sub-Gaussian. So I just bound-- use my definition that each of the random variables is sigma i squared sub-Gaussian. So you bound it by e to the lambda squared sigma 1 squared over 2, times e to the lambda squared sigma 2 squared over 2. This is by definition. And then you get this is e to the lambda squared over 2 times the sum of sigma i squared. And you get this. That means that the sum of the xi's is sum-of-sigma-i-squared sub-Gaussian. So this is the variance proxy for the sum of the xi's. And you can see that the benefit of using this moment generating function, the exponential, is that you can factorize the exponential easily, right? So if you don't use the exponential, if you use the 4th power or the 8th power, right, you wouldn't have such a nice, simple proof. And are there questions? OK, so that's the first part of the lecture, right, which is about a sum of independent random variables. And now, I'm going to talk about a more complex function of independent random variables. So now, I'm going to talk about something like this. How do these kinds of things concentrate? And you can see that, in some sense, you want to say that this function F, when F is kind of close to a summation in some sense, in some weak sense, then you still have a very similar type of bound. That's the spirit. But what does it mean to be close to a summation? We'll see. So here is the theorem, one of the theorems, which is actually something we're going to use in a future lecture, which is called McDiarmid's inequality. So there is a bunch of conditions. So suppose you have a function f-- I guess, little f is the capital F I wrote before. So you have a function f that satisfies the so-called bounded difference condition. What does the bounded difference condition mean?
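As an aside before answering that, the two-line factorization argument from above can be recorded in one display; independence gives the first equality and the sub-Gaussian assumption gives the inequality:

\[
\mathbb{E}\Big[ e^{\lambda \sum_{i=1}^n (X_i - \mathbb{E}X_i)} \Big]
= \prod_{i=1}^n \mathbb{E}\big[ e^{\lambda (X_i - \mathbb{E}X_i)} \big]
\le \prod_{i=1}^n e^{\lambda^2 \sigma_i^2 / 2}
= e^{\lambda^2 \left(\sum_{i=1}^n \sigma_i^2\right)/2},
\]

so the sum is sub-Gaussian with variance proxy \(\sum_{i=1}^n \sigma_i^2\). Back to the bounded difference condition.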
So it's saying that for every choice of x1 up to xn and xi prime. So I guess for every i and every choice of x1 up to xn-- by the way here, these xi's are little xi's, because here, I haven't got any random variables yet. These are just the generic numbers. So for every i, for every choice of x1 up to xi, and for every xi prime, which is-- which will be used as a replacement for xi, if you look at these two qualities, one is that you apply f on x1 up to xn. And the other one is that apply f on x1 up to xn. But replace xi by xi prime. So basically replace one coordinate by something else. And you look at-- if you look at what kind of changes you can make by doing this. And you assume that the maximum changes you can make is by ci. So basically, this is saying that you are not very sensitive to-- this function is not sensitive to changing a single variable, a single input, a single coordinate of the input. And if you have this bounded difference condition, then you can say that X1 up to Xn, now they are capital X, independent random variable. And we have probability that f x1 up to xn is deviate from its expectation by t is less than this exponential thing minus 2t squared over sum of ci squared from 1. So in other words, I guess equivalently, you are basically saying that-- you are essentially saying that fx1 up to xn, this is sub-Gaussian with variance proxy something like sum of ci squared, a big O. There are some constants that you may lose by doing a limit. Your variance-- this is using the equivalence of the two definitions, right? So this is the more intuitive definition of sub-Gaussian. And if you change to the formal definition, you will lose the constant. Can I ask? You suggest that we were removing this as functions of f are kind of input sums. But would you say that those conditions-- so if it looks like a sum, but you could have-- if xi prime and xi differ by greater than ci-- Yeah, so-- yeah, that's a very good question. So I think before, I forgot to repeat a question. So from now on, I should try to repeat a question. The question was that I mentioned that you want to make some conditions on f, which make it similar to the sum. So and why this is similar to the sum? So first of all, I think a small clarification, I guess, by similar is actually a very weak sense. You'll see that in some sense, all of these conditions becomes, in some sense, not very similar. But I think they are only similar in the sense that you want to make sure that no coordinate is very strongly influencing your final outcome. So when you have a sum, so if you change one coordinate, you wouldn't influence your final outcome much. And here is the same thing. So basically, I think whether it's a sum or not, it doesn't matter. It's really about whether you have certain kind of Lipschitzness property. So maybe just briefly, also, we can verify that this condition contains the sum, at least. So that probably would be useful. So suppose you have fx1 up to xn is equal to sum of xi. And each of the xi is bounded by something like bi and I don't want to put ai. And now, suppose you change one of the xi, how much you can change the final outcome? So then you can say that you have the bounded difference condition where ci is equals to bi minus ai, because that's the biggest change you can make if you change one coordinate xi. So that's the maximum kind of range of changes for the sub. But you can see that-- you can imagine many other functions that have this property, which doesn't look like sum at all, all right? 
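For reference, the condition and the conclusion just described, written out: if X1 up to Xn are independent and, for every i and every choice of x1, ..., xn and xi',

\[
\big| f(x_1,\dots,x_i,\dots,x_n) - f(x_1,\dots,x_{i-1}, x_i', x_{i+1},\dots,x_n) \big| \;\le\; c_i ,
\]

then

\[
\Pr\Big( f(X_1,\dots,X_n) - \mathbb{E}\big[f(X_1,\dots,X_n)\big] \ge t \Big) \;\le\; \exp\!\Big( - \frac{2 t^2}{\sum_{i=1}^n c_i^2} \Big) .
\]

And for the sum example above, one can take \(c_i = b_i - a_i\).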
So indeed, more precisely, I think the kind of intuition is that you want this function f to be somewhat Lipschitz in some cases-- Lipschitz, or not super sensitive to individual things. Yeah, that's the general intuition. [Inaudible student question.] Right, so the question was, why don't you just assume that f is Lipschitz, right? So this is a very good question. And the very short answer is that we don't know how to prove that version. We don't know how to prove that if f is Lipschitz, then you have this result. And a longer version is that people have been actually trying to-- this is very-- a lot of researchers, especially mathematicians, have worked on this area. And there's a question about what's the right definition of Lipschitzness. I guess you probably will see in a moment, I'm going to show two more general versions. And they have a different definition of Lipschitzness, or of the intuition of Lipschitzness. And they are somewhat complicated. It's not as clean as you expect, mostly because there are some technical challenges in those cases. And you will also see a case where, if xi is sub-Gaussian, then you have a very clean theorem. There, literally, as you said, you just assume f is Lipschitz. We'll get to that in a moment. [Inaudible student question.] Right, so I guess your question is that here, you need this absolute bound in some sense. In some sense, to make sure you have this bounded difference condition, right, you need some things that kind of absolutely-- to be absolutely bounded. For example, in the sum case, you need the xi's to be absolutely bounded between ai and bi, right? And this is not very-- this is a little bit different from the intuition we had about sub-Gaussian. Before, we were saying that if each random variable has a fast tail, then the sum also has a fast tail. But here, you need absolute-- some kind of absolute restrictions, right? So this is actually related to the answer I had before. If you look at all the technical details, actually, it's not that easy to deal with a tail that can go to infinity. So there are some technical challenges here, which prevent us from having something super clean, I would say. So for example, if you know xi is sub-Gaussian, we will see that you have a very clean theorem. But if you don't know xi is sub-Gaussian, then it is kind of technically very complicated to deal with the tail of each of the xi. And in some sense, you can imagine, right? So maybe this is all a bit too advanced, but for example, if you have xi whose tail is sub-Gaussian-- suppose xi is just Gaussian-- and if f can square it, so suppose in the function f you square xi inside somewhere, now xi becomes xi squared. And the tail becomes slower, as I said. So when you square it, it becomes a chi-squared distribution. The tail becomes slower. And if you take the fourth power, it becomes even slower. So you have to somehow balance this, right? It's not only about the input. It's also about what f does, right? If f does something super bad-- for example, squares the Gaussian or raises the Gaussian to a higher power-- then the tail becomes slower. And your concentration becomes worse. So that's kind of the challenge. Yeah, so let me proceed with a more general version. And then I'm going to talk about the Gaussian version. And then at the end, supposing I have time, I'm going to prove this theorem. So this theorem is something we can prove ourselves, without doing a lot of hard work. But the theorem I will introduce next has kind of a very challenging proof.
So this is a more general version. I think this is Theorem 3.18 in the reference book by van Handel — if you look at the lecture notes, there is a formal reference. It's a book on probability theory. In this book, what happens is that they extend this bounded difference condition to something milder. So you start with some definitions. This is D_i minus. Let's define this to be f(x1, ..., xn) minus the inf over z of f(x1, ..., x_{i-1}, z, x_{i+1}, ..., xn). So basically, you're saying that you look at x and you change one of the coordinates, and you want to see how much you can make the function smaller. This quantity is always at least zero. So basically, you are asking how much you can make it smaller by changing one coordinate to some z, right? Inf — you can just think of inf as a minimum, right? So the difference between this and before is that before, in McDiarmid, you require the change at coordinate i to be less than ci for every x. But here, you don't insist on that. At least you have an x as an argument of this difference, right? So it defines a sensitivity at every point. You didn't assume a global sensitivity; you talk about a sensitivity at x. That's one quantity. And then you can also define the sensitivity on the other side, D_i plus, which uses a sup instead. And now, these are two functions that measure the sensitivity at every point. But they are not global sensitivities. And now you can define a global sensitivity, d plus, which is the sup over all x1 up to xn — you take a sup, but before taking the sup, what's inside the sup is the sum of these squared. So let me just write down all the definitions and then interpret them. And this is the minus one. And then maybe let me write the conclusion. So you get that the probability of f(x1, ..., xn) minus the expectation of f being larger than t is less than the exponential of minus t squared over 4 d minus. So you have a slightly different bound for the upper side and the lower side, which is probably not important for many cases. But just for the sake of completeness, let's write both of them — the other side uses 4 d plus. And x1 up to xn are independent, of course. So that's the theorem. So I guess the important thing is, what are this d plus and d minus? And how is it different from McDiarmid, right? So basically, I think the difference is that the ci in McDiarmid is: you take the sup over x1 up to xn of d_i plus f(x1, ..., xn) — you first take the sup, all right? That is ci, which is a global sensitivity for the i-th coordinate. And then the sum of ci squared, the variance proxy in McDiarmid, is: you take the sum over i from 1 to n, and inside, you take the sup over x1 up to xn of d_i plus f(x1, ..., xn) squared. So basically, you look at a global sensitivity for every coordinate, and you take the sum over them. And here, the difference is that for this d plus or d minus, you are first taking the sum of the sensitivities over all coordinates at this point x — you first take the sum, and then you take the sup. So it's probably not that easy to find a concrete example to see the difference between these two. But you can imagine the order of doing the sup and the sum does matter. So it's possible that, for example, you have a point x such that only for one coordinate you are very sensitive, and for the other coordinates you are not very sensitive. Then taking the sum first and the maximum afterwards is more advantageous.
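As best I can reconstruct the board here, the definitions and the bound read roughly as follows; the precise statement and constants are in van Handel's notes, so treat this as a paraphrase rather than the exact theorem.

```latex
% Pointwise, one-sided sensitivities at x (replace coordinate i by the best/worst z):
D_i^- f(x) = f(x_1,\dots,x_n) - \inf_{z} f(x_1,\dots,x_{i-1}, z, x_{i+1},\dots,x_n),
\qquad
D_i^+ f(x) = \sup_{z} f(x_1,\dots,x_{i-1}, z, x_{i+1},\dots,x_n) - f(x_1,\dots,x_n).

% Global quantities: sum over coordinates first, take the sup over x last:
D_-^2 = \sup_{x_1,\dots,x_n} \sum_{i=1}^{n} \bigl(D_i^- f(x)\bigr)^2,
\qquad
D_+^2 = \sup_{x_1,\dots,x_n} \sum_{i=1}^{n} \bigl(D_i^+ f(x)\bigr)^2.

% Tail bounds as stated in lecture (X_1,\dots,X_n independent):
\Pr\bigl[ f(X) - \mathbb{E} f(X) \ge t \bigr] \le e^{-t^2/(4 D_-^2)},
\qquad
\Pr\bigl[ f(X) - \mathbb{E} f(X) \le -t \bigr] \le e^{-t^2/(4 D_+^2)}.
```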
And in some sense, I think the mathematicians spend a lot of time thinking about how do you change the order. So the best thing you want to do is you take the sup at the very end, so like this one. This one, actually, there is a small sup somewhere in the middle, because in our definition of p of f, you still have this inf. So the best thing would be that you just define the sensitivity for every thing, like a gradient. And then you take sup at the very end, which is what I'm going to show for Gaussian distribution. But this is the best we can know for general distribution, right? So you look at a sensitivity at every coordinate. And you take the sum of all the sensitivity. And then you take sup of f. But the sensitivity have to be defined-- to be defined in this instance. Does it make some sense? Yeah. I'm not expecting you to understand all the nuances. I don't even understand exactly all the nuances. I need to open a book to see-- to find the cases where there is a difference. I think there are actually, indeed, quite some differences between these two inequalities. But it's not like-- you probably wouldn't be able to see the differences. OK, and now, let's answer this question about what happens if all the xi's are unbounded, right? So what happens if x1 up to xns are unbounded? If these are unbounded, like Gaussian random variable, even you take f to be your sum, you probably wouldn't satisfy the bounded difference condition. You wouldn't satisfy this condition here either in this improved case. Because here, there is an inf here. So even f is a sum and xi's are Gaussian, this one would be infinity. Because there's no bound for any individual-- there's no absolute bounds for any individual random variable. So that's the next question. How do we deal with the case when x1 up to xn are not bounded? And there are some existing results along this line. So the first result is called Poincare inequality, which is one of the very beautiful results also for other reasons, not only for the reason for concentration inequality, but also for other reasons not related to this course. So this inequality is saying the following. So if x1 up to xn are Gaussian, which means 0 and 1. And you have some function f, and you can look at the variance of this function. You didn't prove that this is sub-Gaussian, you only showed a bound on the variance, which is something necessary to have. So if you don't have a bound on the variance, you probably wouldn't be able to show it is sub-Gaussian. The variance is less than-- this is exactly as suggested before in the question. So this is less than the gradient squared. And you take expectation of the random variable x. So this is the expectation of the gradient of this random variable. So this is, in some sense, the ideal type of right hand side that you would hope for. So the concentration of this random variable f is somehow controlled by how sensitive, how Lipschitz the function is. So this is the idealistic and basically best kind of thing you can hope for. But the limitation here is that on the left hand side, you only control the variance. You didn't control the tail explicitly. So if you want to turn the variance to the a tail bound, you have to use the Chebyshev. You get 1 over t squared bound. And you can also deal with this with other kind of Gaussian variable. It doesn't have to be mean 0 and 1. That's easy. And the strongest thing here is the following. So here is the stronger theorem, which we can deal with the tail. 
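Written out, these are the two Gaussian results: the Poincaré inequality just described, and the Lipschitz tail bound the lecture states next (both in their standard form).

```latex
% Gaussian Poincare inequality: X_1,\dots,X_n i.i.d. N(0,1), f differentiable:
\operatorname{Var}\bigl( f(X_1,\dots,X_n) \bigr) \;\le\; \mathbb{E}\bigl[ \|\nabla f(X)\|_2^2 \bigr].

% Gaussian concentration for Lipschitz functions: if |f(x) - f(y)| \le L \|x - y\|_2
% for all x, y, and X \sim N(0, I_n), then for all t > 0:
\Pr\bigl[\, |f(X) - \mathbb{E} f(X)| \ge t \,\bigr] \;\le\; 2\, e^{-t^2/(2 L^2)}.
```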
So here, you suppose f is L-Lipschitz with respect to the Euclidean distance, which is saying that f(x) minus f(y) is less than L times the Euclidean norm of x minus y, for every x, y. So in some sense, this is saying that the gradient of f is uniformly bounded by L, right? So you can see that this is different from the one above, because here, you require the gradient at every point to be less than L. And above, you only require the average gradient to be something small. So here, we make a stronger assumption, that this function is globally Lipschitz. And then you can have a stronger bound on the tail. So now let x1 up to xn be i.i.d. standard Gaussian. And now you can have the tail bound on f itself: the probability that f(x1, ..., xn) deviates from its expectation by more than t is less than 2 exponential of minus t squared over 2 L squared. So basically, f(X) is sub-Gaussian with variance proxy on the order of L squared. But the L is not an expected gradient — the L is the absolute bound on the gradient. So you can kind of see the flavor of all of these concentration inequalities: it really depends on when you take the sup and when you take the expectation. For different kinds of conditions, you can have different theorems with different strengths. Any questions? [INAUDIBLE] I don't think I know the exact result off the top of my head. I think-- for higher moments-- could you get a higher moment bound from the one below? I guess, if you want to have higher moments, you have to assume something stronger. That's my hunch. So for example, this one below will give you higher moments. So I'm not sure whether you can have a higher moment bound that has weaker conditions than this. I don't know. Also, I don't know too much about PDEs, so I could miss something. I don't know everything; this is the only thing I know. But indeed, this Poincare inequality has a lot of different applications, not only here. So we have-- we have 15 minutes-- well, 10 minutes. So it's a little bit challenging for me to give the full proof of the McDiarmid inequality in 10 minutes, but I will try a little bit. If I can't give the full proof, I can give you a sketch. So that's the last thing I was planning to do. For all of the inequalities above, like this Poincare inequality and this tail bound for Lipschitz functions of Gaussians, I think they are beyond the scope of this course. We are already doing a lot of things in the technical part. So for these, probably, even if I used them, I would just invoke a theorem from a book, so you don't need to know the proofs. For the McDiarmid inequality, I don't think you need to know the proof either. But I think the proof is kind of interesting to some extent, so it's probably worth showing. So let's try that in the next 10 minutes. So we care about bounding something like this, and we have the bounded difference condition. And the high-level intuition is the following: this f of x1 up to xn could be a very complex, complicated function of x1 up to xn, but somehow you still want to reduce it to a sum in some sense. The reduction is not straightforward; the reduction is like this. So the way you do it is the following. You define a sequence of random variables. Let's define z0 to be the expectation of f(x1, ..., xn). So this has no randomness; it's just a scalar, which is a constant. And then define z1 to be the expectation of f(x1, ..., xn) conditioned on x1. So what does this mean? This is a function.
This is a function of x1. So basically, z1 is a function of x1. But you average out all the other xi's. And you can also define zi, which is the expectation of x1 up to xn conditional the first i random variable. So this is a function of x1 up to xi. So given x1 up to xi, this becomes a scalar, because all the other randomness got ever stopped. So in some sense, you can see that z0 doesn't have any randomness. z1 has a little randomness, because it's a function of random variable, x1. So it's a random variable. And zi has more and more randomness. And zn is finally what you care about, which is the fully random case. And the important thing is that you care about zn minus the z0, the f minus the expectations. And you can decompose this into a sequence of things. So like this telescoping sum. And this is what I mean by reduction to the sum. So basically, now you have a sum of random variables. And you somehow kind of think of them as independent in some sense. They're definitely not exactly independent. But you're going-- we use the proof that you use for the summation. That's what we want to see. And if you-- look at this, right. So this is a function of x1. And this is a function of x2, of x1 and x2, so on and so forth. And this is a function of x1 up to xn. This depends on all the random variables. OK? And now, let's try to see what we know about each of these z and zi minus zi minus 1. All right? So first of all, we know that for every zi, if you take expectation of zi, this is expectation of-- expectation of f x1 up to xn. So in the inside, you have a function of x1 up to xi. And then also that you averaged out all the randomness of x1 up to xi again. So this is-- so this is equals to this expectation of f by-- this is called a total law of expectation, right? You take the expectation of the conditional thing, then you get the expectation. So this is equal to this, which is equals to basically the z0. And then, this means that the expectation of zi minus zi minus 1 is equal to zero. So each of these random variables in this decomposition is mean 0. Unless you have-- so basically, the intuition is that this would define zi to be zi minus zi minus 1. What you're going to do is that you're going to have-- in some sense, you want to bound the moment generating function of each of the di. And then you say that because the final thing is a sum of the di, you can bound the moment generating function of the sum of the di. So let's work on each of the di first, right? So I guess I'm going to claim that zi minus zi minus 1 is always less than ci, where the ci is the bounded difference condition in the condition of the McDiarmid inequality. So how do I do that? I guess-- let me see whether I can simplify this proof a little bit for the sake of time, I guess it doesn't. So let's only prove it for z1 minus z0, just in the interest of time. So if look at z1, z1 is expectation of x1 up to xn condition on x1. And if you-- so I guess you can replace the first one by the sup over all the possible choices of x1, right? And after you do this, this quantity is not a function of x1 anymore. So it doesn't matter whether you condition x1 or not. So you literally just get expectation sup fz x2 up to xn. So-- let me see. And also, you know that z1 is bigger than expectation if, for the same reason, xn. So sometimes, you have some kind of upper bound lower bound for z1. I guess these two qualities are not exactly useful for the bound. What's really useful is this. 
If you look at z1 minus z0, this is the expectation of f(x1, ..., xn) conditioned on x1, minus the expectation of f(x1, ..., xn). So you can bound this from above by the expectation of the sup over z of f(z, x2, ..., xn) — using what we have done above — minus the expectation of f(x1, ..., xn). And then you can put this difference inside a single expectation. I think it's slightly confusing when you really look at the math. But intuitively, what you're saying is that the difference between z1 and z0 is only about one coordinate. And we know that if you change that one coordinate, you cannot make much of a difference, right? So that's what we know. For any x2 up to xn, if you change only x1, you wouldn't make much of a difference. That's why z1 and z0 can't differ by much, because the only thing different is x1. OK, but maybe let me do the formal proof. So on the other hand, you can also prove the same kind of thing in the other direction: you can prove this is larger than the version with the inf over z. So basically, I'm trying to say that the difference between z1 and z0 is upper and lower bounded by the extreme cases, right, where you pick your z in the best or worst case. And this means that if you define these two bounds-- maybe let's call the upper one b1 and the lower one a1. So you have an upper bound and a lower bound on z1 minus z0. And you can show what the upper bound and lower bound are. So b1 minus a1-- this will be the expectation of the sup, the extreme high case, minus the inf, all right? So this is exactly the ci's that we defined, right? If you change your inputs in the first coordinate, the maximum change you can make is c1. So this is less than c1 by the bounded difference condition. So basically, this is saying that z1 minus z0 is between a1 and b1, and b1 minus a1 is less than c1. So this is saying that the random variable z1 minus z0 is bounded in a small interval. And similarly, you can also show that zi minus zi minus 1 is bounded between something like ai and bi, and bi minus ai is also less than ci. So recall that our final goal is zn minus z0, which is the sum of the zi minus zi minus 1. And we have proved that each of these random variables is bounded in some small interval. And now we can use the moment generating function. So what you do is you take the expectation of e to the lambda times (zn minus z0). And this is the expectation of e to the lambda times the sum of the zi minus zi minus 1. So the first thing we have to do is to factorize them in some way, right? So how do we factorize them? We use conditioning-- we kind of do a chain, in some sense. So what you do is that you first condition on x1 up to xn minus 1. So then you have this expectation of e to the lambda times (zn minus zn minus 1), conditional on x1 up to xn minus 1, on the inside. And when you condition on it, you get this. And then the rest of the terms form a function of x1 up to xn minus 1. All right, so once you condition on x1 up to xn minus 1, the other terms only depend on x1 up to xn minus 1, and this last one is a function of xn. So that's why it sits inside that inner expectation. And then this term, because zn minus zn minus 1 is bounded, and it's bounded in a strong sense-- in the sense that for every possible choice of x, you have a kind of absolute bound for zn minus zn minus 1-- we know that this expectation of e to the lambda times (zn minus zn minus 1) is less than the exponential of lambda squared cn squared over 2. This is because if you have a bounded random variable, we know that it's sub-Gaussian.
So you can verify this in various ways. One way to do it is to just-- actually, this will show up in the homework. This is one of the homework questions we defined. So if you have a bounded random variable, it's sub-Gaussian, right? And you can bound the moment generating function. And then you can replace this term by-- this absolute quantity cn squared over 2 and times the sum of the other terms. I think this is n minus 1. And then you peel off the second term again and again. So you do this iteratively. I guess given that we're already running out of time. So look at this. So if you have-- you can do something like this. I guess this is actually 8 if you really do it carefully. So yeah, I guess I will just sketch this. So this means that f minus expectation f is equals to sum of zi minus zi minus 1 is sub-Gaussian with variance proxy sigma squared. I guess that's the end of the proof. But this proof is optional. It's just that we have more time. So that's why I show the proof. OK. Any question? What was the step before the equation in the blue circle-- is that-- [INAUDIBLE] At the end of that line, is that just based on the [INAUDIBLE].. You mean this one? Yeah, so from here to here? This is just-- it's just a triple step, I guess, maybe technically what I should write is maybe-- let me do this here. So if you want to do two steps, the first thing is you do this. Sorry, you do-- you just-- do the total expectation. You condition-- you first condition x1 up to xn. We do this, right? So this is the law of total expectation. And then you find that this term is a constant when you condition on x1 up to xn minus 1. So that's why you can move it outside. Yeah, there's nothing deep there. OK, sounds good. OK cool, so I guess see you next Monday.
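For reference, here is the proof just sketched, written compactly with the standard Hoeffding-lemma constant (the factor 8 mentioned at the end) filled in.

```latex
% Doob martingale: Z_i = E[ f(X_1,\dots,X_n) | X_1,\dots,X_i ],  Z_0 = E f,  Z_n = f(X).
% The differences D_i = Z_i - Z_{i-1} satisfy E[ D_i | X_1,\dots,X_{i-1} ] = 0 and, by the
% bounded difference condition, each D_i lies in an interval of length at most c_i.
% Hoeffding's lemma (a bounded, conditionally mean-zero variable is sub-Gaussian):
\mathbb{E}\bigl[ e^{\lambda D_i} \mid X_1,\dots,X_{i-1} \bigr] \;\le\; e^{\lambda^2 c_i^2 / 8}.
% Peel off the last term by conditioning on X_1,\dots,X_{n-1}, then the next, and so on:
\mathbb{E}\bigl[ e^{\lambda ( f - \mathbb{E} f )} \bigr]
  = \mathbb{E}\Bigl[ e^{\lambda \sum_{i=1}^{n} D_i} \Bigr]
  \;\le\; \exp\Bigl( \tfrac{\lambda^2}{8} \sum_{i=1}^{n} c_i^2 \Bigr),
% so f - E f is sub-Gaussian, and the Chernoff argument gives
\Pr\bigl[ f(X) - \mathbb{E} f(X) \ge t \bigr] \;\le\; \exp\Bigl( -\,\tfrac{2 t^2}{\sum_{i=1}^{n} c_i^2} \Bigr).
```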
AI_LLM_Stanford_CS229
Stanfords_FREE_data_science_book_and_course_are_the_best_yet.txt
I'm going to share with you what is one of the best books on data science that I have ever read. It's free, and it's important that you see it, because it's getting more and more difficult to find high-quality free content. I'm going to show you the book, tell you why I like it, and show you how to use it to get a job. So two questions come to mind. The first is: why go to all of the effort and time of creating such a great learning resource and then just give it away? And the other is: if it's that good, why is it so relatively unknown? The answer to the first one's easy: this was made by a group of Stanford professors, and they're not doing it for the money. Show me the money. And the second answer? Well, it doesn't have a big marketing budget. Okay, so why do I like it? Well, there are several reasons. First of all, it's concise: it doesn't go into any more detail than is necessary. It's comprehensive: it covers pretty much everything that you're going to need to know if you want to do data science. It's clear: the explanations are very well written and everything's very well explained. And the best bit for me: the Python version has just been released a few weeks ago. So at the end of each chapter there are what they call labs, where you go through all the concepts that the chapter contained but learn how to implement them using Python. And then after that there are lots of exercises which, if you work through them, will really help you understand the concepts. That's where all the learning takes place, really. And so in just a few hundred pages you have everything you need.

But is it enough? It covers data science really well, but that's not going to be much use to you if no one actually taught you how to learn. Nobody taught me how to learn at school or at university, and that's quite common, which is why I recommend this. Using the latest findings from cognitive science, it teaches you how to become an effective learner: what methods work and what methods don't. And it shows quite convincingly that most people use ineffective learning methods, and if you only knew the right ones to use, you'd supercharge your learning. I think anyone serious about learning should have a copy of this.

While we're on the subject of excellent learning materials, I would like to share a free and easy way of learning coding, math, and data science that is a perfect accompaniment to this book: brilliant.org. I really like Brilliant, and it's not just because they sponsored this video; it's because over the years their platform has really helped me to learn new topics and brush up on old ones. They teach you how to think, and they make learning a very active process. Brilliant has thousands of interactive lessons that range from basic to advanced topics, with new content added every month. After a brief quiz during sign-up, Brilliant tailors your learning path to your interests and skill level, allowing you to explore and learn at your own pace. I've been doing their new Thinking in Code course. Instead of just teaching you coding, this course gets you solving real-world problems right away; you'll be creating simple programs and understanding how coding impacts our world. If you're interested in trying out Brilliant, use my link — brilliant.org, Python Programmer — and sign up for free. You get to try out everything Brilliant has to offer for 30 days, and the first 200 of you will get 20% off Brilliant's annual premium subscription.

What else do I like about the book? Well, actually, quite a lot. It's got its own video course — I think it's on edX — but if you go to the book's website you can get a link to the video course. It's an entire free video course that accompanies the book, and you can follow the course and the book at the same time. It's made by the same people, and it's really high quality. On the website there's also a forum, and the website is where you'll find access to the free PDF of the book; the link to that is in the description.

Okay, so how would I use all this to get a job? Well, once you've worked through to the end of chapter four, which is the chapter on classification, you'll have a pretty good idea of the basics. So at that point, what you should do is start contacting non-profits and charities in sectors where you have an interest or some kind of expertise and offer to do projects for them for free. It's unlikely that they will have budgets for data scientists — certainly some of the smaller ones won't — so you could actually do some really valuable work and create some great insights for them. You'll learn so much from that: not just about how to work with data, but the sort of questions that clients want answered and the organizational limits and constraints that will affect what you can do. You'll learn how to talk to clients, and the expertise that you gain from it will really help you in your job search, because later down the line, when you're having job interviews, you'll be able to talk about the projects that you've done, and you'll have some real insight that other candidates at this level won't have. And then it's really just rinse and repeat: keep working through the book, and keep contacting charities and non-profits and offering to do work for them. Then, when you've reached the end of the book and have, say, half a dozen or so projects under your belt, start applying for jobs.
AI_LLM_Stanford_CS229
Stanford_CS330_Deep_MultiTask_Meta_Learning_OptimizationBased_MetaLearning_l_2022_I_Lecture_5.txt
For today, we're going to recap a little bit from what we talked about last week, from the meta-learning problem set up and black-box meta-learning. And then we're going to get into optimization-based meta-learning where we're actually going to be embedding an optimization process inside another optimization process. And this is part-- part of what we'll be talking about today as part of homework 2. And so it should also be useful for that. Great. And then, yeah, by the end of the lecture, you should have a sense for the basics of optimization-based meta-learning techniques and how to implement them in things like PyTorch. And also some of the trade offs between black-box meta-learning and optimization-based meta-learning. Cool. So to start to recap from last lecture or some of the previous lectures, we've talked about multi-task learning and transfer learning. And we introduced this notion of the meta-learning problem statement, which is kind of a form of transfer learning where our goal is having been given a set of training tasks, we want to, more quickly or more proficiently, solve a new task. And we looked at this kind running example meta-learning problem where we have-- we're trying to do 5-way classification. We're given one example from each of the five different classes as this really tiny training data set set. And our goal is to be able to predict the label for new examples as being among one of those five classes. And the way that we did this is we set up a set of tasks that look a lot like the kind of tasks that we'll see at Meta Test-Time. And so these were our training tasks. And we perform meta-training on these tasks because we're trying to learn how to quickly learn each of these training tasks such that when we're given a new task at Meta Test-Time, we can quickly learn that task. And this was an example, an image classification. But you can also replace this example with other machine learning problems. Great. And then one of the big topics of lecture on Wednesday last week was how we can actually solve these few shot learning problems with a black-box neural network where we basically just pass in the training data set into something like a recurrent neural network, have that neural network possibly output a set of parameters for that task. And then we make predictions for new inputs for that task by passing that through the neural network with those parameters. We also talked about a second version of this where it's not explicitly outputting the parameters of an entire neural network, it's simply just giving you, for example, another hidden state of that RNN, and then you can just, again, pass that through the last module or the last time step of the RNN. So this kind takes this-- it has this more general form where we're going to be passing in a training data set and a test input into some function. And we're going to be making a corresponding prediction. This method was great in that it's very expressive. You can represent lots of different learning algorithms with these large black-box neural networks. But they can also be somewhat challenging to optimize. And that's maybe something that you came across in homework 1, if you've got started on that. So in this lecture, we're going to be thinking about other ways to try to represent this function that goes from a training data set to a set of task parameters. And in particular, we know something a little bit about machine learning. So you can think of this RNN as trying to mimic a learning process. 
And instead of trying to just have-- through a black-box neural network at this, we could actually take a little bit about what we know about machine learning and actually apply that structure to this function and try to actually treat this function f as an optimization procedure. So that's what we'll really be focusing on today. So in particular, what we can do is we're basically going to replace this RNN on the top left with some form of optimization process. And in particular, we're going to focus on optimization-- on gradient-based optimization. So instead of putting these examples into a RNN, we can instead pass it-- we can actually run gradient descent on these examples in order to get a set of task specific parameters. Now the key idea here is that we're going to be embedding this gradient descent optimization inside a broader optimization process, the meta-training process. And then the key question is, what actually are we going to be-- what are the meta parameters? What are we going to be, kind of, meta training in this process? And there's a number of different free parameters of this process. What we'll focus on to start is a scenario where we're optimizing for the initial parameters of this neural network such that running gradient descent on a few examples gives us a good set of task specific parameters for that task and for other tasks. Now, why might this make sense? So if we go back to a couple lectures ago, we were talking about fine-tuning. And we would take a set of pre-trained parameters data and run gradient descent initialized with those pre-trained parameters. And we found that fine-tuning can actually work much better than learning from scratch. So we see that the green and orange lines are much lower. They have much lower error than the blue lines. But they also don't do great in the few-shot regime. And so essentially, you can think of this as trying to optimize for a set of pre-trained parameters such that we can actually do very well in the few-shot regime, unlike if we were just going to pre-train with something like supervised learning. So in particular, let's start to formulate what this looks like as an objective. So fine-tuning is what we want to be able to do at test time on our small training set for our new task. And so what we can do is we can take one step of gradient descent. We can also imagine more than one step, but we'll start with a simplifying case of one step of gradient descent. This will be used to get parameters for task i after fine-tuning on the data set for task i. And then once we have those parameters, we will evaluate how good those parameters are on new examples, specifically examples in the test data set for that task. And then we'll essentially optimize for the set of pre-trained parameters such that this one step of gradient descent is generalizing well to new examples. And of course, we won't just do this over one task. We'll do this over all of the tasks that we have available to us. So this is essentially what our objective is going to look like for this optimization-based meta-learning process. So the key idea here is to try to learn a parameter vector, an initial parameter vector theta that transfers very nicely with only a few examples. I mentioned that this is an example for this one gradient step. But in practice, you could also put multiple gradient steps in this inner loop here. And it will-- the equation will get a little bit longer, but the conceptual aspect of it is the same. 
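Putting the pieces together, the objective described above — with one inner gradient step for concreteness — can be written as:

```latex
% Inner loop: one step of fine-tuning on the support (training) set of task i
\phi_i \;=\; \theta \;-\; \alpha \, \nabla_\theta \, \mathcal{L}\bigl( \theta, \mathcal{D}_i^{\mathrm{tr}} \bigr)
% Outer loop: choose the initialization so that the adapted parameters generalize
% to held-out examples, summed over the training tasks
\min_\theta \;\; \sum_{\text{task } i} \mathcal{L}\Bigl( \theta - \alpha \nabla_\theta \mathcal{L}( \theta, \mathcal{D}_i^{\mathrm{tr}} ),\; \mathcal{D}_i^{\mathrm{test}} \Bigr).
```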
So from here, we can-- now that we have our objective, we can actually write out what this looks like as an algorithm. So again, we can think of the same 3-way 1-shot classification problem that we considered in the last lecture where basically different tasks will correspond to classifying digits from different alphabets. In this case, you could think of different alphabets as only having potentially three characters. And the first step again will be to sample a task. This will correspond to three different characters. Then the next step will be to sample a couple images per character. And this will allow us to basically give us a training data set and a test set for that task. And so in particular, we might have x equals a, b, and c. And y equals 0, 1, 2. And then also, this will be our training data set. And then likewise, a, b, c. And this will be the test set for this task. 0, 1, 2. And so before what we did is we just pass the training data set through a neural network. Instead what we're going to do is-- I guess there'll be a step 0 here, which is to randomly initialize our meta parameters theta. And then what the step 3 is going to do is it's going to take one gradient descent step on our training data set. So we will start with the current value of our meta parameters, then run gradient descent on our three training examples. And so I'll just write this as something like this. This will give us a set of parameters, phi i. And then once we have these parameters, we will then take our test examples, run that through our neural network with parameters phi i to get a corresponding prediction. And so for example, we'll be looking f phi i of this example here. That will give us a corresponding prediction. And then we'll compare that prediction to the corresponding label to get how well this neural network is generalizing to new examples. And from there, we'll then back propagate all the way back into the initial parameter vectors, theta, in order to update our meta parameters. And so in particular, we'll have an update on theta, which will be based off of how well phi i is doing on your held out examples D test for task i. So this will be based off of ultimately the gradient of this. Cool. Now one other thing that I'll mention here is-- I guess I should maybe-- well, actually, sorry. After step 4, then you'll go back to step 1, and repeat and continuously update theta based on this value right here. So this is the full meta-training algorithm. One thing that I'll mention here is, here we're going to be optimizing for some set of initial parameters. But you could also optimize for other parts of the inner loop learning process as well. For example, you could optimize for the learning rate here as well. And you could optimize with respect to some weights applied to the data points or something like that. Yeah. So this is the gist of it. Any questions? on the meta-training process? Yeah. I'm slightly confused about how phi i and theta are related to each other. So at step 3, you get phi i from just modifying theta. But for step 4, you were update theta using the gradient obtained through [INAUDIBLE] Yeah. --phi i. OK. So that's-- like the [INAUDIBLE].. Yeah, exactly. So you can think of step three as the inner loop of this process because it's running the learning process to get-- running the learning process on a task i. 
And then you can think of step 4 as this kind of outer loop objective where we're actually going to be differentiating through the inner loop process in order to figure out what would have been a better initialization that would have allowed me to better generalize with this small data set. The inner loop could also be more than one gradient steps, it can also be a few gradient steps. Yeah. This is similar to online/offline [INAUDIBLE] Bootstrap Your Own Latent where you have two separate networks that are meta-learning adjacently. So the question is, is it similar to things like Bootstrap Your Own Latent and offline and online processes? I'm not-- It's been a little-- I'm a little bit rusty on the details of BYOL. We will next week talk about unsupervised pre-training methods and talk a little bit about how those relate to meta-learning methods. Yeah. Could you clarify again how this is different from the black-box learning algorithms from last week? Because for example, if theta, in this case, were the parameters of an RNN, then would we end up with something pretty similar to what we had last week? Yeah, so the key difference with the black-box method-- so this is the-- black is the optimization-based approach. The key difference for the black box approach is really step 3. And so the difference with step step 3 is instead of actually obtaining phi i through gradient descent, we obtained phi i by running it through a neural network. And so there was a neural network. I can't remember if I named it f or g in the previous lecture. But there's a neural network that took this input, the training data points for the task and outputed the parameters. And now instead of just passing this through a neural network, we're going to be running gradient descent initialized at the meta-parameters theta. Yeah. Would this work when the loss functions between different tasks are different? Will this work if the loss functions between tasks are different? Yeah. So you could certainly have different loss functions for different tasks. What you would do in that case is L i-- you basically have an-- L would be kind of-- you'd have a different loss function for different tasks. And so you would just need to use that loss function here and also here, and everything would all still work out. Yeah. How do you make sure there isn't, like, some [INAUDIBLE] that's happening in theta? Like, if you can imagine, if you like loop over a bunch of different tasks and you come back to task 4, how do you make sure-- [INAUDIBLE] is there a guarantee that it'll still be as good as it was when you first started? Yeah. So the question is, how do you make sure that forgetting doesn't happen? And as you loop through these tasks, it may not be as good at the first task once after you've gone through the other tasks. And really, the key thing here is when you actually sample a task, you may also-- instead of sampling just one task, you may want to sample a mini batch of tasks. And this will actually give you a gradient that isn't just or a [INAUDIBLE] that isn't just for one task, it's for multiple. You may ultimately get a little bit of forgetting in so far, as like SGD, will kind of forget some data points, for example. But having a mini batch, having a large enough batch size can certainly help with that. Yeah. So the difference here is last time we were taking loss on the query side, but this time we'll be taking loss of the support side and train it in step 3. Am I right? 
So the fourth step is still with respect to the query set. The-- Test sets? [INAUDIBLE] Yeah. So the third step is really where the difference is. And so it is running a gradient step on the support set. Before, we are also using the support set in the black-box meta-learning approach. And we are passing that into f data. And so, yeah, one key difference is here we're actually taking-- sort of, taking a gradient step on both the support set and the query set. Yeah. But the key step is the difference between 3. If you basically just put this into three, then you get exactly the black box approach that we had last week. So one other thing that might help a little bit with some intuition here is that you can think of theta as this set of meta parameters that's being meta-learned. And if you think about phi i star being the optimal parameter vector for a task i, then you can think of meta-training as this really thick black line where when you're at this point in the meta-training process, if you take a gradient step with respect to task 3, you're quite far from the optimum for task 3. And likewise for other tasks. And as you continue the meta-training process, ultimately, at the end of the meta-training process, you want to be at a place in the landscape where if you take a gradient step with respect to task 3, you're very close to task 3. If you take a gradient step with respect to task 2, you're very close to a good parameter vector for task 2. So this may give you a little bit of a visual depiction of the meta-training process. Now one thing I will note about this diagram that can potentially be a little bit misleading is there isn't just a single optimum for any given task. And so it's actually going to be more of actually a large part of the parameter space for any one task. And your goal is to try to find a part of the landscape where if you take a gradient step you're going to get to a region that's good for that task parameter rather than just a single space. Yeah. That's the gist of it. In principle, this may-- in this diagram, it kind of looks like it's almost averaging the task parameters. In practice, you may get something like that. In practice, you may also be in scenarios where you're actually pretty far from the average, but where kind of a gradient step we'll take you fairly far. And on that note, I think it's worth mentioning that this learning rate here, alpha, in practice will be much larger than a typical learning rate that you might use because you want to be able to actually go pretty far and actually be able to traverse very far in the parameter space for different tasks. Yeah. [INAUDIBLE] Improve the training of the calculation-based adaptation? So certainly this does look a lot like a hyperparameter optimization, except where you're optimizing the full initial parameters rather than just slow learning rate. Your question was, can you also do hyperparameter optimization with this? Or-- [INAUDIBLE] or something like that. Like, is this [INAUDIBLE]? Yeah, so you could certainly consider using hyperparameter optimization techniques for learning the initialization. Unfortunately, a lot of them are designed for optimizing a very low dimensional set of hyperparameters, like one hyperparameter or five hyperparameters. And the initial parameters of a neural network may be millions of hyperparameters. And so a lot of those methods won't scale well to such high-dimensional things. 
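One common way to write the learned-step-size variant mentioned here — a sketch of the idea, with a meta-learned vector of step sizes instead of a single scalar:

```latex
% Inner step with a meta-learned, per-parameter (or per-layer) step size:
\phi_i \;=\; \theta \;-\; \boldsymbol{\alpha} \odot \nabla_\theta \, \mathcal{L}\bigl( \theta, \mathcal{D}_i^{\mathrm{tr}} \bigr),
% where \boldsymbol{\alpha} is updated in the outer loop (step 4) alongside \theta.
```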
And that's why in this case, we're actually going to just be differentiating through the learning process and using gradients. But there are-- there's a lot of ideas that are transferable between the two literatures. Yeah? So here, we are taking loss with respect to the [INAUDIBLE] tree in the third step. So what is loss here, really? Because we concatenate the major label, right, when we pass it. And then we are obviously trying to find the loss with respect to the training set levels again that's operated by the network? Yeah, great question. So in black box meta-learning, we were concatenating. We were passing in both the image and the label into the network. Here, when we run this inner loop, we're going to have a model-- we're going to have a model f theta that's kind of a neural network model. And we're going to be only passing as input the training examples into this. We're not going to be passing in the labels. And then we're going to get a corresponding prediction, and then the-- this loss function right here, Li for theta, Di train. This is going to equal the sum over examples xi, yi, in D train i of-- I guess if it's something like regression, you'll have something like y hat train i minus y train i. If it's classification you'll have something more like a cross-entropy loss. And so this is-- this is the definition of this loss function, and this is how you're going to get the gradient. And so you're not going to actually going to be passing on the labels into the network, but the labels will still come into play when you're comparing the predictions from the model to the ground truth predictions. Yeah? How does optimization-based [INAUDIBLE] typically compare to what this approach is performance-wise, and why would you choose one or the other? Yeah, we'll talk about pros and cons of black box versus optimization-based towards the end. Yeah? So as you're studying the random boundaries and doing the operation [INAUDIBLE] designing the major [INAUDIBLE] you have to model the non-linear [INAUDIBLE] of parameters, right, in the operation. Are you referring to this learning? Yeah. Because it'll also be learning right here, too. In general, yeah, it's a great question. So do we need to carefully schedule it? It is a hyperparameter, and you do need to pick it carefully. If you pick it to be too large, then it may be very difficult. Like, the updates may be just way too large. If you pick something too small, it may have trouble actually differentiating and getting to different parameter vectors for different tasks. And so it's important to pick something that-- that has a middle ground. The other thing that you can do is you can optimize it as part of-- as part of step 4 as well. And what approaches have found is that it's actually helpful to optimize a different value of the learning rate for different layers, or even for different parameters, rather than having a single learning rate for all the parameters. And this is because some parameters, especially biases, like to have a very large learning rate, and others like to have a very small learning rate. And if you try to optimize for a single learning rate for all of them, then they find a middle ground that isn't good for the weights. [INAUDIBLE] Yeah, so if you do optimize a different alpha for these different-- for the different layers, it does actually end up being quite stable. Yeah? [INAUDIBLE] across [INAUDIBLE] in step 3, can we add a term before regularization? Can we add a term for a regularization? 
So yeah, this loss function-- this loss function could also include a regularization term. One thing often-- actually one reason, or one interesting thing here to note here is, oftentimes you don't actually need a regularization on the inner loop. And this may be a little bit surprising, because the inner loop data set is often really tiny. But because you're optimizing explicitly for generalization, it's going to be optimizing for at least a part of the optimization landscape that is nicely behaved, and for which you don't get-- you don't overfit. That said, it may be helpful to add regularization to this term here if you're worried about overfitting to the tasks that you have. Yeah? In this view, the training set is used in step 3, and the test set is used in step 4. And is it important to maintain that separation between the two [INAUDIBLE]? Yeah. So the question is, is it important to maintain this kind of distinction between train set and test set in step 3 and step 4? It is really important to have some held-out data points in step 4 that are distinct from the data points that you're using for train. If you use the same exact data points, then you'll meta-train it to be able to memorize the things that you give as input and not actually learn the task that you care about. That said, when you do sample a task and when you sample the images in step 2, it's OK to kind of mix and match which images you use for train and test across these different iterations. The most important thing, though, is that within step 3 and step 4 that these are-- that you have some images in the test set that are held out. Yeah? So my question is regarding testing results. Could we do any [INAUDIBLE] during testing? You mean after the meta-training process? Yeah, after the-- Yeah, exactly. Yeah, let's go through that. It's a great transition. So at meta-test time, we are going to be given a task. We can call it task j. And then we'll also be given a training data set for that task. And after the meta-testing process, what we'll be given is we'll-- kind of at the end, I guess, the output is a set of meta-parameters theta. And so what we're going to do is we're going to run gradient descent. So we're basically just going to run fine tuning starting from meta-parameters theta on the training data that we have for our new task. So our estimate for the parameters for task j are going to correspond to theta minus alpha grad theta of the loss function for the training data set. It's a good idea to use roughly the same number of gradient steps in the inner loop in step 3 as you use at meta-test time. You want meta-training time and meta-test time to match so that you are preparing the method for what will happen at meta-test time. Of course, once you have these task-specific parameters for your test task, then you can make predictions on new data points by give it an input, pass it through your function f of by j to get a corresponding prediction. Now, this algorithm-- the algorithm where you learn these set of initial parameters is referred to as model-agnostic meta-learning. And the reason why it's called that is that nowhere in here do you see anything about a neural network architecture or model. And in that sense, the algorithm is somewhat agnostic to the particular architecture that you use. You can parameterize the model-- the f model here however you want. 
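To recap the meta-test procedure just described in one line (the same fine-tuning step, starting from the meta-learned initialization, with roughly as many steps as were used in the inner loop during meta-training):

```latex
% Given a new task j with a small training set D_j^tr and the meta-learned \theta:
\phi_j \;=\; \theta \;-\; \alpha \, \nabla_\theta \, \mathcal{L}\bigl( \theta, \mathcal{D}_j^{\mathrm{tr}} \bigr),
% then predict on new inputs for task j with  y = f_{\phi_j}(x).
```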
And as long as it's amenable to gradient descent, you can optimize for initialization of that model such that gradient descent gives you a good generalization on the kinds of tasks that you meta-train train it on. Cool. So we've already gone through the algorithm on the board. Again, the only difference from the black box approach is that instead of running the training data set through a neural network, we're going to run gradient descent on-- starting from our initial meta-parameters. The other thing that I'll note here is-- or just try to emphasize here is, like in the black box approach, when we evaluate phi i, we're not taking a gradient with respect to phi i. We're taking a gradient with respect to theta. And in this sense, we're sort of treating phi i more as activations than as weights. We're not ever running gradient descent on phi i itself. It's more of a product of the inner loop. And yeah. Something to keep in mind. Yeah? So on the same thing. Why does it do more training on phi i? The question is, why doesn't alpha not train it on phi i? So if you-- so phi i is kind of the output of step 3, and there's nothing that-- if you then updated phi i in some way, there's nothing that relies on that updated phi i in the process. And so it would sort of-- similar to how when phi i was activations, it's something that sort of gets wiped out at the next time you look at the task. Yeah? Sorry. I might be hurt here, but do you have results to show? I see that you're the first one on this paper. Yeah, I can show some results. I'm curious to know how this [INAUDIBLE].. Yeah. So I was going to go through some math, but maybe I could skip to the results a little bit first, and then go through some of the math. So this is kind of skipping ahead a little bit, but there's one-- so there's one paper that actually ran architecture search in addition to this algorithm. And so they were actually additionally optimizing for-- for an architecture that is-- for which this actually works well. The kind of basic architecture on 5-way, 5-shot mini-image that gets you 63.1% which is a lot better than what you've been doing in homework 1 if you've gotten started with homework 1. And if you also optimize for the architecture, you can do even better than that. You can get up to 74%. The state-of-the-art these days I think is in the low 80s at this point, and there's-- in-- well, we'll get into this a little bit later. But in general, these kinds of approaches are competitive with state-of-the-art. Also, the approaches that we'll talk about on Wednesday are also competitive on these image classification problems. But I actually also think that the image classification benchmarks themselves are not the most interesting benchmarks, because even just learning good features can do very well on those benchmarks. And there are a lot of other application domains where we're-- getting good features isn't quite enough. And so we've seen the applications in drug discovery as one example that can do quite well. I guess the short TLDR though, is that these kinds of approaches can do quite well on meta-learning problems. Yeah? [INAUDIBLE] Yeah, I can't remember actually the specific approach that they used in this paper, but one thing that was interesting is that instead of-- they found a fairly non-standard architecture-- one that is actually quite deep and narrow. 
And typically deep and narrow things are not-- typically, you want enough width to be able to optimize, so it was kind of interesting that it found something a little bit different from that. But you can take a look at the paper-- if you search for Kim and Auto-Meta, you'll be able to find it. Cool. Let's go back to some math though. So one thing that actually no one has brought up in the questions yet is that this is actually going to bring in some second-order derivatives. And the reason for that is that you have the inner loop here, which has a gradient with respect to theta, and you also have the outer loop, which also has a gradient with respect to theta. And so you might wonder, do we need to compute a Hessian? Hessians in deep learning are very scary, because if you have an n-dimensional parameter vector, then the Hessian is going to be n by n. And so if you have a million parameters, that's going to be a million squared values in this Hessian. Also, if you want to run more inner gradient steps, there's this question of, do you get even higher-order derivatives? So we're going to do a bit of math. I guess to preface some of the math, if some of you are scared of math, PyTorch will do all the math for you. And so don't worry too much. But I think going through the math is actually really helpful for understanding what happens under the hood and what is actually the complexity of these algorithms in practice. So the first thing that you need to know before we get into the math of the meta-learning algorithms is about Hessian-vector products. And the cool thing about Hessian-vector products is that you can actually compute them much more cheaply than trying to compute the Hessian itself. And in particular, it would be really nice if we could compute these kinds of meta-gradients without having to construct the whole Hessian. And so in particular, one intuition that you can think of for this is, say we have some function f. The gradient of that function is g. Then say that we are evaluating g, the gradient of f, at x plus delta x. Then we know that this is roughly equal to g of x plus H of x times delta x. So this is just a typical Taylor expansion. And from here, we replace delta x by r times some vector v, where r is going to be some small value, so we look at g of x plus r v. This v is ultimately going to be the vector that we care about, so our goal is going to be to compute H of x times v. If you replace delta x with r v, then you get that this equals g of x plus r H of x times v. And from this form, you can see that we can actually move things around and solve for H of x times v. And what we get is that H of x times v is roughly equal to g of x plus r v, minus g of x, divided by r. Can double-check my math. Yeah, yeah. And the cool thing about this is that it means that you could actually approximately compute this Hessian-vector product with just two gradient evaluations. And that's really awesome because that means that we don't have to compute this giant Hessian. And this is going to be more expensive than one gradient evaluation of course, but it's a lot cheaper than computing the whole Hessian. Now you might wonder, OK, we have this approximation sign here. Can we get the exact Hessian-vector product? And there are actually algorithms that have the same complexity that will also give you the exact Hessian-vector product.
I'm not going to go into those, but Pearlmutter's algorithm is one example of that. But this gives you a little bit of intuition for how we might go about computing these kinds of Hessian-vector products. Cool. Yeah, so that's the good news: if we can compute the meta-gradient as a Hessian-vector product, then we can compute it with only around two gradient evaluations. Now let's go back to meta-learning. And actually, for notational purposes, we're going to separately represent partial versus full derivatives, so I'm going to use d/dx to refer to full derivatives. And the gradient with respect to x prime, evaluated at x prime equals x — that will be kind of a partial derivative. That will help keep our notation straight a little bit. And remember that phi i is going to be equal to theta minus alpha times-- let's use the full derivative here-- d/d theta of L of theta comma D i train. Cool. So this is everything that we had before, and our goal now is going to be to compute the gradient-- or the meta-gradient-- in step 4. So the MAML objective is to minimize, with respect to theta, the loss of phi i evaluated on the test data set for that task. So this is the same as what's written up there. And for this we want to be able to compute the gradient of this with respect to theta. And so to do that-- let's see. Let's go over here. So we have d/d theta of this loss function, and for this, we can use the chain rule. So if you remember your calculus, first we're going to take the gradient of the outer objective, and then we will differentiate through the inner part. So we get the gradient, with respect to phi bar, of L of phi bar comma D i test, evaluated at phi bar equals phi i. And then once we do the chain rule on the inner loop, then we just get d phi i, d theta, because this outer loss isn't a function of theta directly. So this is the derivative of our loss function with respect to theta. Here, this first factor is a row vector, so we can refer to it as v. And the second factor is a matrix. And then let's look a little bit more at this matrix right here. So this matrix-- d phi i, d theta. If we look at the definition of phi, we see d phi i, d theta. Does anyone want to give it to me? What do I write next? [INAUDIBLE] I is a matrix. [INTERPOSING VOICES] Alpha. That's Hessian. Yeah, and then the Hessian. Awesome. Cool. I'm glad some people are paying attention. And this Hessian will be with respect to D train i. Cool. And so if we plug this expression into here, what we're going to get is that this is equal to v times, in parentheses, I minus alpha times H right here. So this is just the Hessian right here. And that's great, because we then get a Hessian-vector product right here rather than anything that has to do with the full Hessian. Can people see this? Is this too low? It's OK? OK, good. Cool. So to answer the first question, we do get second-order derivatives here, but we only get this Hessian-vector product. And so in practice, to compute the meta-gradient for this algorithm, it just requires a few extra backward passes of your neural network. It doesn't require anything more than that. Yeah? [INAUDIBLE] with multiple intermediate steps, but does this change at all? Or-- Yeah. So then the question is, what happens if you do multiple gradient steps? So if you do multiple gradient steps, then your phi i is going to be equal to theta minus alpha times d/d theta of L of theta, D i train. This is just the first step.
And then the second step-- you're going to be running a second gradient step on this parameter vector. So if we refer to this parameter vector as theta prime, then this next step is d, d theta prime of L theta prime D i train. And the key thing to note here is, this second gradient step is with respect to theta prime, and not with respect to-- well, is with respect to theta prime first. And then second, this isn't like-- when you take a second gradient step, it's not a second-order gradient step. It's just a second gradient step in sequence. And so when you actually go to compute the meta-gradient, you get the same exact form as before, except now we have a different D phi i, D theta. And so what we're going to get for the new D phi i, D theta is going to be the same as before, but now we're going to have a third term, which will look like minus alpha times this, differentiated. We need to differentiate this third term by theta, and what we get there is something like-- we do the chain rule again, so we're going to do theta bar prime L, theta bar prime D i train. This is going to be evaluated at theta bar prime equals theta prime times D theta prime, D theta. And so, basically you'll get these products of Hessians, but you don't get-- you don't get anything that has a 3 on it. And that's good because the higher this number is, the nastier it gets. Yeah? So, this tells me that the 3 and 4 are complete gradient steps. So you take one gradient step at 3, and then you do 4, and then you go back and do 3 and do 4 for an entire [INAUDIBLE] batch? That's it? Right. So if you were to take multiple gradient steps, that just means that the equation on step 3 will have-- will be different. Instead of taking one gradient step, you will have two gradient steps-- like what's written here. And so the only thing that-- if you take multiple inner-gradient steps that's changing is just that the equation in 3 will be a little bit different. How do I know theta prime at fourth step, and have already completed the multiple gradient steps at the third one? When you actually backprop through, you need to actually store all the intermediate parameter vector values. Because basically you can think of it as you're going to kind of-- the forward process of this inner loop is running gradient descent. And then to backpropagate through-- backprop through that again, you need to kind of backprop through those gradient steps. And so, the-- while we don't get third-order gradients, if you have a lot of inner loop steps, it will increase the amount of memory usage linearly, and also increase compute as well. Yeah? So is taking multiple gradient steps worth the overhead? Yeah. So the question is, is taking multiple gradient steps worth the overhead? For future learning problems, you actually don't need a very large number of gradient steps. You can actually do quite well with a pretty small number of gradient steps-- between one and five gradient steps. And so in practice, at least from what I've seen, you don't actually need more than five inner-gradient steps. And that ends up working pretty well on few-shot learning problems. And it's also not that expensive. If you wanted to run a hundred gradient steps, things get a lot more expensive. And in two weeks we're going to talk about more advanced meta-learning techniques, including ways to differentiate through, like, hundreds of gradient steps. And doing something like this with hundreds of inner-gradient steps is not a great approach. Yeah? 
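To tie the derivation together, here is a minimal PyTorch sketch of steps 3 and 4 for one task (the least-squares loss, the shapes, and all names are made up for illustration). The key detail is create_graph=True, which keeps each inner gradient in the autograd graph: backpropagating the outer loss then gives v(I - alpha H) for a single inner step, and products of Hessians, never third-order terms, when there are several, with memory growing linearly in the number of steps.

```python
import torch

def maml_meta_loss(theta, d_train, d_test, loss_fn, alpha=0.01, n_steps=1):
    """Run n_steps inner gradient steps on D_train (kept in the autograd graph),
    then return the outer loss on D_test. Differentiating the returned value
    with respect to theta gives the meta-gradient of step 4."""
    phi = theta
    for _ in range(n_steps):
        g = torch.autograd.grad(loss_fn(phi, d_train), phi, create_graph=True)[0]
        phi = phi - alpha * g      # every intermediate phi is stored for backprop
    return loss_fn(phi, d_test)

# Hypothetical usage on a tiny least-squares task:
loss_fn = lambda w, data: ((data[0] @ w - data[1]) ** 2).mean()
theta = torch.randn(4, requires_grad=True)
d_train = (torch.randn(8, 4), torch.randn(8))
d_test = (torch.randn(8, 4), torch.randn(8))
meta_loss = maml_meta_loss(theta, d_train, d_test, loss_fn, n_steps=5)
meta_grad = torch.autograd.grad(meta_loss, theta)[0]
```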
Yeah, so I'm trying to connect these Hessians vector products with the [INAUDIBLE] we have. So these are the same as alpha in this case? Like, the-- The r here? Yeah, the r here. So you can think of this as a finite difference method, and r just needs to be small here. And so it's not quite the same as alpha. The most-- the biggest thing is that v is the same. So v here is the same as v there, and r will be separately chosen. So alpha is just going to be the multiplier based off of the inner-learning rate, whereas to compute this-- if you wanted to use finite differences to compute this, you would separately select a small r to use with finite differences. And in practice, you-- things like PyTorch will not use finite differences. They'll actually give you an exact Hessian vector product. Yeah? So does [INAUDIBLE] do not have labels in [INAUDIBLE]?? So D train does have labels. So it will have both the examples and the labels. You'll only be passing the examples into your neural network. But then, to compute the gradient in your inner loop, you'll use the labels there. But in the black box [INAUDIBLE],, we do not need a label? [INAUDIBLE] which class is the most similar to one of the sample [INAUDIBLE]? So the black box [INAUDIBLE] also actually did need the labels as well, and so we were actually passing those in to the neural network as well. And the reason why it needed that is it needed to-- it needed to know what to output for this letter. And so it could tell you, OK, it's the same class as this. But then it needs to give you a number for that class. And so we passed in the labels for that reason for black box as well. [INAUDIBLE] It's supervised, yeah. Yeah? [INAUDIBLE] labeled-- changed the labels, changed [INAUDIBLE] label, but they don't want a label [INAUDIBLE]?? Yeah, it does solve phi. So the question is, the kind of-- here the label-- we assigned labels 0 to A in this task, but in another task you might assign a different label to A. Because the assignment of labels to images is somewhat arbitrary in classification problems, and so it's also useful to kind of randomize that assignment across tasks. [INAUDIBLE]? For other problems, because [INAUDIBLE]?? Yeah. So for things like regression problems, you'll typically keep the label intact, yeah. And we'll talk a little bit more about some of that stuff in two weeks in some of the advanced meta-learning topics. Yeah? Why don't-- does the Hessian first? The Hessian-- The Hessian vector product, yeah. [INAUDIBLE] And that's-- by default, that's [INAUDIBLE]?? Yeah. So here's my slide on that. [LAUGHTER] [INAUDIBLE] Any other questions? Oh, question? No. OK, cool. Good. OK, we also went through meta-test time already, but it's on the slide in case you want a reference for that. And the gist of it is just to run fine tuning. Cool. So yeah, that was really the overall basics of optimization-based meta-learning. Now let's compare optimization-based versus black box meta-learning, and then also talk about some challenges and solutions. If we have a little bit of time, we'll also look at a case study at the end. So at a conceptual level, there's a way that you can look at these two things in a way that's quite similar. So the general form of black box adaptation was to pass in the training data set and the test input into this big neural network and get a corresponding label or a corresponding prediction. 
For an algorithm like MAML, you can also view it in a very similar light as a kind of a computation graph that takes as input the training data set and the test input. And the way that it looks like that is, essentially you can view it as a computation graph that just has a gradient step inside the computation graph. So D train and x test are still inputs to that overarching computation graph. It's just that it has a gradient step inside of it where we're taking a gradient with respect to our training data points to get our parameters and make predictions based on those parameters. So from this view, you can-- they end up looking more similar than perhaps they look like on the previous slides. And also from this view, it means that you can somewhat mix and match components of your computation graph. And there are methods that, for example, have learned an initialization but replaced the gradient update with a learned network. And for example, this paper does something like that where it learns theta, and it also learns this network that kind of takes the gradient and warps it in some way to predict a different gradient than the actual gradient. And this paper actually precedes the MAML paper, and I guess I mentioned this as a-- a conceptual thing that's kind of interesting. In practice, I wouldn't recommend using this algorithm. It's a little bit more complicated and tends to not work as well as actually just using the gradient. But it's something that I think is useful to know conceptually. Cool. And then we'll also look at this again on Wednesday when we look at a third class of approaches. Now-- now let's actually look at how these algorithms perform in practice. So in general, both of them can represent a variety of learning algorithms. But one thing that's nice about the optimization-based meta-learning algorithms is that at meta-test time, you're literally just running fine tuning. And so even if you don't actually have a very good set of meta-parameters, fine-tuning should still give you something that improves on the parameter vector that you have. Whereas, if you just throw something into a recurrent neural network, you can't really-- you can't really expect it to necessarily give you something reasonable if the data set is out of distribution from the task that it solved before. And so we can explicitly test this. And so we're going to compare an optimization-based algorithm, MAML, with two black box algorithms called SNAIL and MetaNetworks. And specifically, we're going to look at omniglot image classification, and we're going to look at performance as you vary the task. And in particular, the first thing we're going to look at is all of the algorithms are just meta-trained on the original omniglot data set. But then we evaluated them on tasks that had digits that were warped. And so they were all trained on the center line, so they're going to do the best at the center. But then as you warp the-- as you warp the characters more and more, the performance will get worse because it's more out of distribution. And so what we find here is that an algorithm like MAML actually is able to generalize much better to these distribution tasks compared to something like the black box meta-learners because it has this kind of structure of running gradient descent embedded within it. And it's just running fine tuning at test time. We can also look at this on a second task. This is-- we're kind of warping the size of the digits. 
And here we see a similar trend where the optimization-based algorithm is better able to extrapolate. Yeah? [INAUDIBLE] because they come from a black [INAUDIBLE] and fine-tune them on the gradients [INAUDIBLE] we really support the [INAUDIBLE] performance? You're asking, can you-- as a baseline, can you fine-tune the black box meta-learning? Yeah, the black box parameters [INAUDIBLE],, and then fine-tune them on the-- on the [INAUDIBLE]?? Yeah. Yeah, so if you have A black box meta-learner that's actually outputting parameters, then you could also fine-tune those parameters a little bit, and you would expect that to do a little bit better than if you didn't fine-tune. Both of these approaches, SNAIL and MetaNetworks, are not actually outputting a single set of parameters, but you could sort of fine-tune the last part of the r and n. And in principle, that should improve it a little bit. I suspect that it may not improve it all the way up to the purple line, but I suspect it would improve it a little bit. Yeah? What could explain this rapid fall inaccuracy which comes in the scale and not so much in the shift? Yeah, so the question is, why is it dropping more rapidly for the scale versus the shift? I should say that the x-axes on these two plots are not very comparable. One is in scale and one of those is in radians. And so, if for example you zoomed out more on the radians plot to show a wider range, I would expect it to drop off more quickly. I also think that as you make a digit smaller and smaller, it's probably a little bit harder for the neural network to read than the larger one. And so that's, I expect, why we see more of a drop-off on the left side than on the right side. But that's somewhat speculative. Yeah? So what is the underlying model architecture chosen for both algorithms for all the-- What is the underlying model architecture? So for MAML, it's a four-layer convolutional network with, I think like 32 filters-- something like that-- per layer. And it's kind of a standard architecture that's been used in multiple works, whereas for SNAIL and MetaNetworks, the architecture is a little bit more specific to the method. Because it's kind of a non-model agnostic method, it's kind of-- yeah, and so SNAIL was the one that used the interleaved-- the interleaved convolutions and attention layers. [INAUDIBLE] convolutions [INAUDIBLE] For? The scale of the shifting part? You're asking me if the convolutional layers help-- Help with the-- understanding the gradients in the data distribution, which is the scale of the shift. Whereas the SNAIL I guess just use RNNs, right? So SNAIL-- so SNAIL was using-- I can't remember if there was-- I'm pretty sure there was a backbone that was convolution-based. And likewise for MetaNetworks. I can't remember the specifics of it, unfortunately, but I'm pretty sure it is using that. And I should mention and emphasize here that all of these were only trained on this scale, and so they were not-- they didn't see any data at different scales and different sheers of the digit. Cool. So the-- we see that we-- like, the structure of gradient descent is helping us generalize to outer distribution tasks. Now you might wonder if this structure comes at a cost, because the-- maybe for example, by embedding gradient descent, we don't have as much expressive power in terms of the algorithms that-- the learning procedures that we can represent. 
And it turns out that you can actually theoretically show that if you have a deep neural network, the MAML function of running gradient-- one step of gradient descent can actually approximate any function of the training data set and the test input. So it can sort of approximate any learning algorithm. Although it does make some non-trivial assumptions. Well, it assumes that the learning rate is non-zero, that the data points are unique, and that the loss function gradient does not lose information about the label. But really, the strongest assumption is that the network needs to be extremely deep for this sort of result to hold. And you can get more expressive power with the RNN-like approaches with a smaller neural network. Yeah? What does it mean for a loss function gradient to lose information about the label? [INAUDIBLE] Yeah, so essentially what that means is-- just very approximately, is if you have one loss function that-- that looks like this versus a loss function that looks like this, if you just look at the gradient of this function, you don't actually know where you are on the line at all. And so that doesn't tell you-- you know what your prediction is, but it doesn't tell you what the label is relative to your prediction. And so it doesn't hold for L1 loss functions, but it does hold for things like L2 and cross-entropy loss. Yeah? Why is it that the data points and the data set needs to be unique? Yeah, this is a little bit getting into details that aren't super important. But the gist is that if you have duplicates of data points and you want it to give you different phi i's for different numbers of duplicates in the data set, it's hard to-- it's hard to understand how many duplicates there are in the gradient. For example, if you think about the gradient of two data points versus the gradient of five data points that are identical, the gradient is the same because it's just averaging across them, and so it's not going to-- the learning algorithm isn't going to be able to differentiate between-- it's only able to count the number of data points you have, basically. Yeah? So when would you [INAUDIBLE]? Or is there any point in using black box optimization? Yeah. Let's go through some of the challenges and solutions, and then-- well, I'll talk about-- I'll get into full depth of that on Wednesday. But I'll also talk more about the pros and cons after we go through some of the challenges and solutions. So one reason to not use these kinds of optimization-based methods is that sometimes this sort of bi-level optimization can be somewhat unstable because you have an optimization inside another optimization. And there are ways to try to stabilize the process. One way to stabilize it is actually to give it a little bit more expressive power. Don't just learn the initialization, but also learn the learning rate. And this has been shown to, at least in practice, lead to an easier outer optimization. There's also approaches that try to only optimize a subset of the parameters in the inner loop. For example, you could choose to only optimize the last layer in the inner loop, or choose to only optimize the batch form parameters in your inner loop, or something like that. And then there's also some work that showed that if you-- that if you have a different learning rate in a different batch form per gradient setup in the inner loop, that can also help stabilize things. 
Because if they're coupled, then they-- whenever you have basically the same value for something, it can cause those values to sort of fight during the optimization process and not find a good happy medium. And then, I don't want to go into too much depth in this, but there's also approaches that try to introduce a sort of context variable that, instead of only optimizing for the parameters of this neural network, you can essentially add a variable that's kind of part of theta here as part of the network. You can think of this as kind of like the zi that we saw in multi-test learning that you're optimizing as well as the other parameters. And this can also kind of increase the expressive power of the meta-optimizer and in practice leads to better results as well. I'm more giving this just as an overview if you want to dive a little bit deeper into things that can help these algorithms work better in practice. Cool. So yeah, a range of simple tricks that can help the optimization a lot. And then the second main challenge I want to talk about is that if you have one or a few inner-gradient steps, it's very easy to differentiate through that. But if you have a lot of inner-gradient steps, it becomes very compute-intensive and very memory-intensive. And there are also a few tricks for trying to address this. The first trick is a bit of a hack, and it's actually something that I discovered because of a bug in TensorFlow. And in particular, the first time that I implemented the MAML algorithm, it turned out that TensorFlow didn't properly implement this term, and it just silently basically set this to be the identity. And it turns out that actually, if you set it to be the identity, it actually works some of the time. And so, for simple image classification tasks, you can actually-- and that's not the identity, by the way. Yeah. And so yeah, it turns out to actually work in some cases. There's another paper that explicitly proposed this sort of first-order method as well. And so if you do approximate as the identity, you actually-- you don't have any second-order terms, and it can work well on simple problems. In other problems, I found that it doesn't work well at all. Is that why we're using PyTorch and not [INAUDIBLE]?? [LAUGHTER] Not quite. I mean, TensorFlow has since fixed this issue. This was very early days in 20-- early 2017. Yeah. But since then, both-- and actually the early days of PyTorch-- they also didn't have their second-order derivatives properly implemented as well, so-- yeah? Yeah, when you make the [INAUDIBLE],, you simply assume that phi i is very similar to theta? [INAUDIBLE]? So it's-- it's basically saying that the loss landscape around phi is very similar to the loss landscape around theta. I don't think we know why it works, although my guess would be something that it has to do with the optimization landscape being well-conditioned. But yeah, we don't fully know why this works. Yeah? So [INAUDIBLE],, does that mean that the Hessian was not calculated correctly? It was 0, therefore i, or-- Yeah, so this was being set to-- to 0. Yeah? So it doesn't-- it did not know how to [INAUDIBLE]?? I don't remember the exact details of the bug. But it was something like-- I think it was something like basically the Hessian wasn't implemented or something, and it was silently returning 0 rather than throwing an error. But why does [INAUDIBLE]? We don't know. It's an observation. It's an observation, yeah. 
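For completeness, here is what the first-order trick just discussed looks like in code, in the same hypothetical setup as the earlier sketch: detaching the inner gradient (no create_graph) is exactly what treating d phi / d theta as the identity means, so no second-order terms are ever built.

```python
import torch

def first_order_meta_loss(theta, d_train, d_test, loss_fn, alpha=0.01):
    """First-order approximation: the inner gradient is detached, so the
    meta-gradient reduces to the outer-loss gradient evaluated at phi."""
    g = torch.autograd.grad(loss_fn(theta, d_train), theta)[0].detach()
    phi = theta - alpha * g      # depends on theta only through the identity term
    return loss_fn(phi, d_test)
```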
Another thing that you can do is, if you don't want to backpropagate through many inner gradient steps of the whole network, you can only optimize the last layer of weights in the inner loop, and then this can actually be very cheap. And there are a number of papers that have done this that actually get quite strong results. And in some of these cases, you can actually compute the last layer in closed form. And yeah, that has a number of benefits. This works especially well in image settings where learning features are a big part of the problem (there's a short code sketch of this idea at the end of this segment). Cool. And then there's also something called the implicit function theorem that is actually pretty cool. It allows you to compute the meta-gradient without actually differentiating through the optimization path. We might talk about this a little bit in some of the advanced meta-learning lectures, but I mention it here more as a reference in case you're interested in digging deeper into that. Yeah? In the second idea, is it only optimizing the last layer in the inner loop or the outer loop? In the inner loop. So basically, yeah, in step 3 you only optimize the last layer in the inner loop, and in step 4 you optimize the whole network-- basically you optimize for features such that optimizing the last layer in the inner loop gives you good performance. Yeah? [INAUDIBLE] layer works the best? Like, [INAUDIBLE]? Yeah, so you could also think about fine-tuning different parts of the network. You could also have something where you fine-tune the last layer for some tasks and fine-tune the first layer for other tasks. Although then you have to actually select which layer to adapt for different tasks. There's also a paper that actually optimizes everything but the last layer, and they actually found pretty good results with that. I'm not exactly sure how they came up with it. It's called BOIL if you want to look it up. But yeah, you could imagine something like that. [INAUDIBLE] Cool. And then I already showed this slide, but there's also some works that have looked at architectures for this kind of bi-level optimization and have found that actually, sometimes deeper and narrower architectures can work pretty well. And yeah, that's the gist of it. So the takeaway here is that we're constructing this bi-level optimization problem. In terms of the pros and cons, one thing that's really nice about it is it has this sort of positive inductive bias at the start of meta-learning. And what I mean by that is, before you do any meta-learning, you're already running gradient descent on your training data set in the inner loop. And that means that you already have a pretty good starting point for learning from data. In contrast, black box approaches are starting with a randomly initialized recurrent neural network. And so when you pass a data set into that randomly initialized neural network, it doesn't have any sort of inductive bias at the beginning that will actually do anything remotely like learning at the start of training. Because of this, optimization-based meta-learning tends to extrapolate a little bit better, because you're embedding gradient descent, because you're running fine-tuning. And it's also quite expressive if you have a deep enough neural network. And it's model-agnostic.
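Picking up the last-layer idea from the top of this segment, here is a minimal sketch (the body/head split, the loss, and all names are mine, purely illustrative, not the setup of any particular paper): only the linear head goes through the inner gradient step, so the differentiated inner loop is tiny, while the feature extractor is meta-learned through the outer loss.

```python
import torch
import torch.nn.functional as F

def last_layer_inner_loop(body, head, d_train, d_test, alpha=0.1):
    """Inner loop adapts only the linear head; body is any feature extractor.
    head is a (feat_dim, n_classes) tensor with requires_grad=True."""
    (x_tr, y_tr), (x_te, y_te) = d_train, d_test
    inner_loss = F.cross_entropy(body(x_tr) @ head, y_tr)
    g = torch.autograd.grad(inner_loss, head, create_graph=True)[0]
    head_adapted = head - alpha * g
    return F.cross_entropy(body(x_te) @ head_adapted, y_te)   # outer loss
```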
The downside is, it does typically require a second-order optimization, which can be computationally-- it's a little bit-- certainly a little bit more expensive than other approaches, especially the approaches we'll talk about on Wednesday. And so this leads to more compute and memory usage. Cool. We have six minutes, so I think I will go through this case study. We'll also look at some more case studies on Wednesday this week. So this was actually some work done by some folks at TUM, and also some folks at Stanford. Sherrie was actually a PhD student at Stanford. She graduated I think last year, and is now starting as a professor at MIT. And this is some work that they did on trying to do land cover classification. And the motivation is that if you have a satellite image, it's useful to be able to predict how the land is being used. This can be used for urban planning, for understanding how things are changing over time. But it's very expensive to label these satellite images, as you might imagine. And so labeling data is really expensive, and different regions look different and have different land use proportions. And so this means that if you train a model on one part of the world and then try to apply that model to other parts of the world, it may not generalize well to all parts of the world. And so what they were looking at is-- they had croplands from multiple different countries, and they framed this as a meta-learning problem where different tasks correspond to different regions of the world. And so what they wanted to be able to do is, for a new region of the world that they didn't have labels for yet, they wanted to label a small amount of data for that region and then get a good segmenter or classifier for that region of the world that can basically fill in the rest of the labels. And so you can think of this as-- this is a diagram from their paper where you try to find a good set of initial parameters such that when you fine-tune on a new region of the world, you're able to quickly get a model that can accurately segment satellite images from that part of the world. So they looked at two different data sets. One of the data sets had geographic metadata in it, and so they were able to separate out-- explicitly separate out new regions of the world. And so blue-- dark blue is meta-training, light blue is meta-validation tasks, and orange is meta-test tasks. And kind of as an example of a two-way, two-shot classification task, they're trying to classify if a square is forest or croplands. And-- and so they're given that, and then they have two examples per class of what those parts of the land look like. And then they also had a second data set where they were looking at-- where they didn't have any kind of geographic metadata, and so they used clustering to try to guess the region of the world and separate things out into meta-training, meta-validation, and meta-test. And here they are doing a segmentation task where they were basically-- the support set corresponded to one small square, and then the query set corresponded to the other squares. Cool. And so they-- they were comparing randomly initialize-- random initialization. So they were training from scratch on a small amount of data from that new region. They also compared that to pre-training on all of the data they had so far and then fine-tuning on the target task. And lastly, they compared to the MAML algorithm that we talked about today. And what we see is-- here is the results on the first data set. 
And this is the performance or the accuracy of land cover classification as you increase the number of data points on the target task. First we see that random initialization isn't able to do very well because it doesn't use any other data. Using a pre-trained network is able to do a lot better, especially with less data. And then MAML is able to do-- if you have one or more data points from the target region, is able to do even better than a pre-trained neural network. One benefit to the pre-trained neural network is it can actually already do descent classification just in-- without any additional data from the new region. And that's why the dark blue is above the orange and the light green at 0. And then likewise on the DeepGlobe data set, they looked at both a random split and a harder clustered split. Here the results are similar on the clustered split. On the random split, they actually found that the pre-trained model was able to do quite well. Which makes sense, because the pre-training data and the target data-- you're going to basically have more data points that look more similar across the kind of pre-trained data and the target data. Cool. And then if you're interested in looking at this more, I'd encourage you to take a look at the paper. Here, just kind of-- yeah, just one example of a fairly real world problem of using this kind of approach. Cool, so that's it for today. We covered the basics of optimization-based meta-learning, we talked about how it compared with black box meta-learning. On Wednesday, we're going to be talking about our last class of-- last type of meta-learning method, non-parametric few-shot learning methods. Then next week we'll talk about unsupervised pre-training, and the following week we'll talk about more advanced meta-learning topics. Yeah. And as a reminder, submit your project form and your homework on Wednesday.
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_19_Mixture_of_Gaussians_spectral_clustering.txt
OK. I guess, let's get started. Let's see. Is this working? Yes. So I guess last time, we have talked about unsupervised learning. And today, we're going to continue with unsupervised learning. And first, we're going to continue with the moment method. And here we're going to talk about higher order moments. And then, next, we're going to talk about something called clustering, or spectral clustering in more technical words. So these are different type of unsupervised learning algorithms. So I guess just to continue with what we had last time, last time, we ended up with this mixture of Gaussians. The setup was that you have some x which is sampled from a mixture of k Gaussians with mean mu i and covariance identity. And so last time, I think, at the beginning, we talked about case 2, where you have a mixture of two Gaussians. And in that special case, you can just take the second moment of the Gaussian to recover the mu i's. And then we moved down to talk about cases bigger than two. And in that case, we have argued that if you take the second mixture, if you take the second moment, then this is something like 1 over k times sum of mu i mu i transposed. And this is not enough to recover mu, mu i's, because, given this second moment, you still cannot identify the mu i's precisely because there are multiple mu i's that can have the same second moment. So that motivates us to consider this third moment. So the third moment is-- as we discussed, it is the expectation of x tensor x tensor x. So this is the third-order tensor in dimension d by d by d. And let's compute what's the third moment with the hope that the third moment will tell us enough about mu i's where we can recover mu i's from the third moment. And that's indeed the case. So what we do here is the following. So we compute the third moment. And I guess the initial step is always the same because you have a mixture of k clusters. So what you do is you have 1 over k times the sum of the moment conditioned on each of the cluster i where i is the cluster ID. And now the question becomes that if you have a Gaussian drawn from-- if you have an x drawn from Gaussian, then what's the third moment? What's the expectation of the third moment? So what is this expectation of x tensor x tensor x conditioned on i? So let's do some kind of simplification just to-- this is an abstraction in some sense so that we can make the notation simpler. So suppose z is from some Gaussian with mu-- let's call it a just to distinguish it from the-- a and covariance identity. And the question is, what is-- you have this lemma. I guess the condition is this. And then our question is, what is the expectation of z tensor z tensor z? And the claim is that this is pretty much equal to a tensor a tensor a but with some caveats. There are some other terms which are something like this. Let me write it down and explain. So this is from 1 to d, expectation of x tensor el tensor el plus the expectation el tensor expectation x. So this is a formula-- oh, sorry, my bad. This is not x. This is z. There's no x in this lemma. We already changed our notation to z. So note that expectation z is really literally a. So, basically, you already have a formula that expresses the third moment of z into a function of a. That makes sense because a decides everything. So everything at the end of a should be a function of a. So the reason why we still use ez in this formula is because we want to implicitly say that this is something that is about the first moment. 
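For the notes, written out in symbols (this is just the lemma being stated on the board, with z drawn from N(a, I) in R^d and e_l the standard basis vectors):

```latex
\mathbb{E}[z \otimes z \otimes z]
  = a \otimes a \otimes a
  + \sum_{l=1}^{d} \Bigl( \mathbb{E}[z] \otimes e_l \otimes e_l
  + e_l \otimes \mathbb{E}[z] \otimes e_l
  + e_l \otimes e_l \otimes \mathbb{E}[z] \Bigr),
  \qquad \mathbb{E}[z] = a .
```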
So maybe the more important thing is that this means that we can compute a tensor a tensor a from linear combinations of the third moment and this first moment. Why is it useful to get this? I think it will be clearer in a moment why it's useful to get a tensor a tensor a. But this lemma tells us that if you know the first moment and third moment, you can get a tensor a tensor a from them-- sorry, I'm messing up this letter here. So here is the z. OK. And let's see. Any questions so far? I guess it's not exactly clear why this lemma is useful at the current point. I guess the main point is that you can compute out what the third moment is when z is just a Gaussian. And I'm going to show the proof-- I think the proof is nothing super interesting, but it tells you how to do these kinds of derivations for the moments. And once you see it once, then all the others become kind of trivial. So how do you compute the third moment? What you do is you do it for every entry. So you say, look at the ijk entry of this thing. Then this is just expectation z i z j z k, where z i denotes the ith coordinate. And I think there's something-- sorry for the note takers-- I think I changed my notation to v here just to be consistent. Let me go back, change this to v. It's just a generic variable. It's just that, somehow, later I used v. So let's change this to v. So what we do, just to compute this moment, is we do it in, some sense, a brute-force way. So what is z i z j z k? You can write z i as v i plus some ksi i. And z j can be v j plus ksi j. And z k will be v k plus ksi k. We're using the fact that z as a vector is equal to v plus some ksi, where ksi is a spherical Gaussian. That's my definition of ksi, in some sense, because ksi is equal to z minus v, which has a spherical Gaussian distribution. And, by the way, v i is the ith coordinate, just to be clear. And then we can expand this, so there are eight terms in this product. So what are they? One of the terms is v i v j v k. That's easy, because v is deterministic. And some of the terms look like expectation v i v j ksi k. One of the terms is this. The other ones are expectation v i v k ksi j plus expectation v j v k ksi i. And these terms will be equal to 0 because the expectation of ksi is 0, and v is a deterministic quantity. So that's why they are going to be 0. And then we have the other three terms that look like expectation v i ksi j ksi k plus expectation v j ksi i ksi k plus expectation v k ksi i ksi j. These terms are a little bit different. Let me deal with them in a moment. And the last type of term is the product of the three ksi's. So how do we deal with the rest of the four terms? So the thing is that if you look at expectation ksi i ksi k, this is equal to what? This is equal to 0 if i is not equal to k, because if i is not equal to k, ksi i and ksi k are two independent random variables. And you can factorize it to get expectation of ksi i times expectation of ksi k. They're both 0, so you get 0. And this is 1 if i is equal to k because-- OK, maybe let's have more steps. So this is equal to expectation ksi i squared, which is equal to 1, if i is equal to k. So in summary, expectation of ksi i ksi k is equal to the indicator that i is equal to k. And you can also try to do this with ksi i ksi j ksi k.
And here you can still try to do the same thing, try to divide into different cases, whether i, j, k are all the same, or maybe two of i and j are the same, and k is different. There are a few cases. And actually, if you enumerate all of those cases, it turns out that it's always 0, regardless of the choice of i, j, k, but for different reasons. For example, when i, j, k are all the same, then this is the third power of ksi i. So it equals expectation ksi i cubed. And that's 0 because the third moment of a standard Gaussian is 0. And when i is equal to j but not equal to k, you can do another, different calculation. But generally, you can do all of the calculations, and they're all equal to 0. I think the fundamental reason is that if you have an odd-degree monomial of these ksi i's, it doesn't matter which one-- the expectation is always going to be 0. So these are all, in some sense, elementary calculations. And then if you use this, then you can continue here. You can get that this expectation equals v i v j v k plus v i times the indicator that j equals k, plus v j times the indicator that i equals k, plus v k times the indicator that i equals j. And this pretty much completes the proof. So then you just have to rewrite this in tensor form. I guess if you verify this equation, the target equation, entry by entry, then you see that this is actually exactly-- let's see. So this is v tensor e l tensor e l, plus the other two permutations, summed over l. All right. So this is our target equation. This is what we got for every entry. So let's just verify that these two are the same thing. It's just a reorganization. So how do you verify that? You take the i, j, k coordinates-- so the question is, what is the ijk coordinate of this guy, right? So v tensor e l tensor e l-- the ijk coordinate always has a v i there, because v i is always there. But the jk coordinate of this e l and e l-- basically, one way to write this is that, if you really do it, this is v i times the jth coordinate of e l times the kth coordinate of e l. And in what case are the jth coordinate of e l and the kth coordinate of e l both non-zero? The only case is that l is equal to j and l is equal to k. That's the only case where this can be non-zero. So that's why this is equal to v i times 1 when l equals j and l equals k. And the only way this can happen is when j is equal to k. Otherwise, it's going to be 0. So that's how you verify. I don't expect you to verify it completely on the fly. But in some sense, the exact formula doesn't matter that much either way. You only need to have a formula that relates the third moment of z to v-- basically, you just need a formula in terms of v. OK? So any questions so far? So now let's see how we use it. So how we use it is the following, and you can kind of see what kind of things we exactly need. So now you look at an x. x is a mixture of Gaussians. z was only a single Gaussian. And you use the single Gaussian as a building block to compute the moment of the mixture of Gaussians.
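Before moving on to the mixture case, here is a small NumPy sketch (the dimension and sample size are arbitrary) that sanity-checks the lemma numerically: draw z from N(a, I), form the empirical third moment, and compare it against a tensor a tensor a plus the three correction terms.

```python
import numpy as np

d, n = 3, 500_000
a = np.random.randn(d)
z = a + np.random.randn(n, d)                                   # z ~ N(a, I)

emp = np.einsum('ni,nj,nk->ijk', z, z, z, optimize=True) / n    # empirical E[z (x) z (x) z]

pred = np.einsum('i,j,k->ijk', a, a, a)
for e in np.eye(d):                                             # the three correction terms per e_l
    pred += (np.einsum('i,j,k->ijk', a, e, e)
             + np.einsum('i,j,k->ijk', e, a, e)
             + np.einsum('i,j,k->ijk', e, e, a))

print(np.abs(emp - pred).max())   # small, and shrinking as n grows
```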
And what you will do is that, OK, when condition i becomes a Gaussian and you apply the lemma, and you get 1 over k times sum over i from 1 to k, and then this is mu i tensor mu i tensor mu i because mu i is taking the place of v. And then you have the additional three terms-- mu i tensor el tensor el, and el tensor mu i tensor el plus el tensor el mu i. OK. So-- parentheses. OK, and so, basically, the third moment of x is a function of mu i's. It's still a little bit messy. So what you do is you say, I'm going to get rid of all of these terms by using the first moment, the first in the moment of x. So what you do is that you first reorganize a little bit. You get this somewhat cleanly-looking term, mu i tensor mu i tensor mu i. And then you switch the k with the two sums. For the rest of three terms, you get sum over l from 1 to d, 1 over k times sum over i from 1 to d mu i tensor el tensor el. And you have the two other terms which-- I guess in theory. You can imagine what they look like. They just are permutations. They are rotations of these terms in some changing order. And now this one becomes the first moment of x. So you get 1 over k. You will get the same-- this is i, sorry. This is i-- i plus something that depends on the first moment of x. So what does this mean? It's that you can move those three things to the left-hand side. So, basically, this means that we can compute this tensor from the third moment and the first moment. So that's basically our interface. Once you have this tensor, then the next step will be-- next step, so we'll go from here. So from this thing-- I guess let me write this. I think I should have introduced this, too, mu i's. And just for the notational purpose, so a to tensor 3 is just a shorthand for a tensor a tensor a. So, basically, what this whole computation is saying is that now you can compute this expectation-- sum of the third moment of mu i, and then you need to design an algorithm to compute the mu i's from this. And if you can do this question mark, then you're done. The whole thing is solved because you can first use the moment to complete the third tensor for mu i's. And then you can run this algorithm. There are actually some cleaner ways to deal with this. You don't have to deal with these additional terms in order to get-- there are some other ways to get this exact set of tensors as directly in a cleaner way. But that requires a lot of other machinery. So that's why I'm only using this relatively brute-force way to get a circular tensor. But the point is that you can always get something like this. So now the problem becomes this so-called tensor decomposition problem. So, abstractly speaking, this tensor decomposition problem is something like you-- so, abstractly, you have a sequence of vectors, a1 up to an. These are all in dimension-- ak-- in our dimension d. So these are unknown. And what you're given is a vector that looks like this from 1 to k. And then your goal is to reconstruct ai's. And you can also ask about this-- for different orders of tensors, you can also ask the same question. So questions-- also, for example, you can have some other rth order tensor for some r that is bigger-- possibly bigger than 3. And it turns out that you can also get the fourth-order tensor-- the fourth-order power from this moment method. You can take the fourth moment of a beta, and you can get ax to tensor 4 with some rearrangement like we have done. So, basically, this is a kind of interface. It's where you basically reduce the moment. 
You reduce the [INAUDIBLE] problem to the so-called tensor decomposition problem. And this tensor decomposition problem also has certain-- somewhat kind of like a-- let me also introduce some notions for this, so-- notations. So the rank of the tensor-- so some basic notion for the rank-- so I guess let's say a tensor b tensor c is a rank-1 tensor. This is the definition of a rank-1 tensor. And then the rank of a tensor k-- tensor T-- is the minimum k such that T can be written as a sum of rank-1, a sum of k rank-1 tensors. Sometimes, this is also called CP decomposition. So in some sense, the reason why this is called decomposition is that you observe this-- some of these rank-1 tensors, you want to decompose it into components. And each component is rank-1. And this question is also sometimes called CP decomposition because there are some other decompositions for tensors that could also be meaningful in other cases. But actually, it's also fine to just call it tensor decomposition because this is the most popular decomposition for tensors. OK. So I guess now it becomes a very modularized question. It's an algorithmic question where, how do you figure out the components from-- given a lower tensor, how do you figure out the lower components? So what I'm going to do is that I'm going to basically list some of the existing results but not really talking about details, because, actually, what happens in this area-- I think this area becomes kind of very popular around 2013, 2012. In the very beginning, I think, a few papers kind of lay out the framework for this whole thing. So how do you compute a moment? How do you convert it into a tensor decomposition problem? And then those papers provide some somewhat easy tensor decomposition problems, or they actually invoke some of the existing tensor decomposition problems in those early papers. And then this field, somewhat kind of like-- because this question becomes two parts. One part is about, how do you do the moment? How do you turn the moment into a tensor? And then the second part is, how do you decompose the tensor? So so people have-- there are a lot of papers involving some of my works as well. But there are actually a lot of works that tries to understand how do you decompose all different kinds of tensors, under what conditions you can decompose. So what I'm going to do is I'm going to list a few conditions that you can decompose these tensors computationally efficiently. And those conditions, you will turn into a condition for the upstream problem. For example, in the mixture of Gaussians problem, you're going to have some conditions. So just to set up kind of the basis, let me see where I wrote this. Somehow I didn't notice this. So maybe the number 0 is that, in the most general case, in the worst case in the more TCS language-- so we're calling it the worst case-- or in the most general case, this problem is not solvable. So finding the ai's are computationally hard. Actually, there are several layers here as well if you want to discuss the details. In the very worst case, actually, the ai's are not unique. You don't have a unique decomposition. And when the decomposition is unique, there are also cases where the decomposition is unique, but you cannot find them in a computationally efficient way. I think there's a question. So [INAUDIBLE] you can put [INAUDIBLE]?? So if you take 3, you replace 3 to be 2, then it's pretty much like symmetric. This here is symmetric, but you can also make it asymmetric. But, yes, you are right. 
It's basically linear algebraic stuff like FE. And this is a very good question. So I think, in some sense, as you will see in some of these questions below, in some aspect, the tensor decomposition is kind of closed to matrix decomposition. But there is one fundamental difference. So that fundamental difference is what enables-- that makes these kind of tools powerful but also challenging. It's powerful in the sense that it's fundamentally powerful because here, there is no rotational environment. I guess this no rotational environment, also, you have to interpret it in a careful way. So what I mean is that some of ai tensor 3 is not the same as some of the rotation of ai tensor 3. However, this is true for matrices. So if you have some of ai transposed, this is the same as some of r times ai. r is the rotation matrices. I guess it depends on how you rotate it. My bad. I think this-- how do I say this? I probably shouldn't say this on the fly without thinking about what's the best way to. I guess, technically, I should rotate on the right. So maybe-- let me not make it precise. But I think maybe one thing to realize is that if you have matrices, you have a times a transposed, something like this, which is kind of like a sum of ai ai transposed if you put all the ai's as columns of calculating So then this is equal to a times r times r a transposed if r is a rotation matrix. And you just cannot do this for the tensors that often. But what happens here is that if you permute-- if you have ai, and you permute it, permuted indices to ai prime, where ai primes are just permutations of ai, then the resulting sum, the third tensor, is still the same. So you only have the rotation symmetry, but no-- you only have permutation symmetry, but no rotation symmetry. And this actually makes it somewhat powerful because, in many cases, this is the case. For a mixture of Gaussians, you can permute all the centers, and there still is the same Gaussian. But you cannot rotate the coordinate systems to make the same-- you cannot rotate the-- at least you cannot take linear combinations of the centers to still maintain the same nature of Gaussian. And I think this also applies to neural networks. I think, for neural networks, you have the permutation symmetry where you can permute the neurons in intermediate layers, and also the associated edges. And you can still maintain the functionality of the neural network exactly the same. But you cannot do arbitrary rotations in it because you have the nonlinearity with activations. So, yeah, but I guess this part is supposed to be somewhat abstract because if you see a lot of math, then you can probably understand this a little more better. But anyway, so there are some fundamental differences between this and linear algebra. So that's why tensor decompositions becomes difficult-- especially the work is. OK. Going back to the list of questions, as I said, the starting point is in the general case, you cannot hope to do anything. But there are many cases where you can do something. So the easiest case is the orthogonal case. So orthogonal case means that if a1 up to ak are orthogonal-- and in this case, actually, this is the closest to the eigenvector case. So here you can say that then ai is actually the global-- each of these ai-- each of ai is the global minimizer. There are multiple global minimizers. 
So that's why each of them is a global minimizer-- maximizer, actually-- of this objective function where you maximize the l2 norm-- maximize this tensor picked by a rank-1 tensor. So I guess if you're not familiar with the notation, then what I really mean is that take this sum of Tijk times xi xj xk. So this is the extension of the quadratic form for matrices. So suppose you have a matrix. Then this is the quadratic form. And for tensor, this is this tensor form. So eigenvectors can be defined in this way if you change the tensor to the matrix because an eigenvector is what's maximizing the quadratic form for the matrix. So in some sense, in this sense, the components are some kind of eigenvector. And then you can find this. So this is an interesting property. So this is saying that ai is kind of like eigenvectors of T. And also, we can find it. It's not trivial to find it. But we can find ai's in polynomial time. And actually, the way to find it is that you try to solve this optimization. And it's one way to find it is that you try to solve this optimization problem back when you use that. So that's one way, one case. And another case is that-- a more general case that you can have is the independent case. So it turns out that if a1 up to ak are linearly independent, then this is also a good case. You can find this in polynomial time. I think the algorithm is called Jenrich's algo. I'm not going to describe all of this algorithm, just because it will take too much time. And then, sometimes, these are things that you can-- as long as you have some kind of basic knowledge, you can search over the literature, and there are many papers about this. But these are-- so 1 and 2 are both about so-called undercomplete case. 1, 2 are the so-called undercomplete case, which just really means that k, the number of components, is less than d. You can see that number 1 and number 2 can only happen when k is less than d because if k is bigger than d, there is no way that a1 up to an are linearly independent. It's because your number of components is bigger than dimension. So they cannot be linearly independent. But actually, you can also do this for overcomplete case. Overcomplete case are still possible-- are still possible in certain cases. So there are several different ways to deal with overcomplete case, which means k is bigger than d. So the first one is that you can look at higher-order tensors. So you can say that suppose a1 tensor 2 up to ak tensor 2 are independent. This is a much relaxed condition that a1 up to ak are linearly independent because now you have a higher dimension. So now this only requires k needs to be less than d squared to make this possible to happen. And suppose this is true. Then you can just replace ai by ai tensor 2. So you can recover ai from the sixth-order tensor. So you recover from a1 tensor power 2 to the tensor power 3, i from 1 to k, which is still the same as the sixth-order tensor. And how do you do it? You just invoke the third-order tensor on ai to the power 2. And then, after you get ai to the power 2, you can get ai by just taking the square root. So this relaxes the restriction on the k but with the cost of estimating the sixth moment, because how do you get this? This is the thing with r to the d to the sixth. So you have to somehow do something with the sixth moment. And it will be less simple and efficient. And, well, another slightly clever way to do this is that you can do fourth-order tensor with the same condition. 
So you say that-- fourth-order generic tensor. And what does generic tensor really mean? It means that you exclude-- excluding algebraic set of measure 0. So you exclude a small set of-- a measured 0 set of tensors. And except those kind of tensors, you can do this. And this is saying that when k is less than d squared, you can recover ai from the fourth tensor, right? So before, if you do a trivial reduction, you get the third-- you need to use the sixth-order tensor. But now you only have to use the fourth-order tensor. And this algorithm is called FOOBI. And you can also have a robust version of this. This algorithm by itself is not robust. You can also have robust versions of this. I guess let me not write down these references. I'll add the references later, I guess. If I could just get the initials, I think these are some references like this, where you can get a robust version of these algorithms. And if you want to be more ambitious-- so you want to say, that I want to even deal with third-order tensor, then what you can do is you can say you can have random tensors. And by random, it means that if you assume ai's are randomly generated unit vectors. I guess whether it's unit vectors is not that important. But for convenience, let's say they are all unit vectors, and they're all randomly distributed on a sphere. And then, for even third-order tensor, k can be as large as d to 1.5. So you can have kind of overcomplete case even with third-order tensor. And there are some references here, which, I guess, I'll add it to the notes eventually. OK. OK cool. So this is just a very quick list, kind of probably a little boring list of references. But I guess you see the rough idea, right? So you can, for various conditions for the component ai's, you can have various kind of algorithms and different results. So, typically, if you have more restrictions on ai's, you get stronger results, right? So the strongest one would be you assume they are random. Then you can even decompose overcomplete answers when the order is only 3. But if you don't have that strong assumption, you have to go with the fourth-order tensor or even sixth-other tensor if you don't use the right [INAUDIBLE].. So this is basically what's going on in this area. And you can see, there are many, many papers that deal with different kind of setups. So I will add some references to the lecture notes. But generally, this is something you can kind of search on internet. And they are just-- before we conclude this part, there are other latent variables that can be done-- can be done by moment method, or method of moment, using the same strategy, where you first complete a moment, you turn it into a tensor decomposition problem so you can do the so-called ICA, independent component analysis, you can do the hidden Markov models, and you can also do topic models. I think there are even more than this. And I'm just listing a few that are most prominent. So these are all viable models for unsupervised learning. And for each of these, you can try to compute certain kind of moments and rearrange your moments so that you get a tensor and then decompose the tensor to construct the true pattern. Any questions so far? What do you get if, say, for example, it's a third-order tensor? So you want to activate it based on [INAUDIBLE].. Right. I guess it would be more general [INAUDIBLE] tensor. It's more-- [INAUDIBLE] So [INAUDIBLE],, is there, say, [INAUDIBLE].. I don't [INAUDIBLE]. What is the first [INAUDIBLE]? 
I think-- let me-- maybe I didn't-- let me try to answer, and then you can clarify if I didn't answer the question. So I guess the flow is something like: you first start with the data. You compute some tensor-- maybe this, or maybe fourth-order-- maybe I said fourth-- here. And, of course, you cannot compute this exactly. You compute it approximately. You have some error in estimating this fourth moment. And you know that if you don't have any error, then this will be something like the sum of ai to the tensor power 4, i from 1 to k. And then you decompose, and you get the ai's. And I guess-- how does the dependency kind of-- so I guess one thing is whether it's overcomplete or undercomplete, right? So why does that matter? That matters because of this k-- what is k? In a mixture of Gaussians, k is the number of mixture components. So if you can handle overcomplete tensor decomposition, that means that for the original problem, you can handle more than d mixture components. The number of mixture components you can handle is more than the dimension. And if you can only do undercomplete tensors, then your number of mixture components has to be less than the dimension. That's why people care about overcomplete tensors. My question is, [INAUDIBLE] expectation [INAUDIBLE] with [INAUDIBLE] larger k [INAUDIBLE]. With larger k? The k here is something fixed. It's not about-- so I guess there is another thing, which is that k is the number of mixture components in our data. It's something fixed. So I guess maybe what you're asking is this empirical thing. So the real thing is that you work on this. And then you say this is approximately equal to the sum of ai to the tensor power 4. And then you decompose that approximate version. So you also need your algorithm, your decomposition algorithm, to be robust to some errors, because you don't know this thing, this low-rank tensor, exactly. You only know an approximate version of it. Am I answering? I'm not answering the question? Go ahead. Maybe I'm not answering the right question. [INAUDIBLE] Right. [INAUDIBLE] This is the tensor decomposition. Right. You can think of tensor decomposition as a low-rank approximation for tensors. Yes. So [INAUDIBLE]. So it's a [INAUDIBLE] best approximation [INAUDIBLE]. So all of these theorems I listed so far, they all work for the approximate version, even though I didn't really talk about the approximate version yet-- I didn't talk about approximation explicitly. So in some sense, the first-order question is: even when you don't have any approximation error-- you get exactly a low-rank tensor-- you have to be able to decompose it. Even that's nontrivial, right? So for matrices, it's trivial because you just take the SVD. But for tensors, it's not trivial. So that's why the first-order goal is to say, I get an exact low-rank tensor, I can decompose it. And then the second question is the so-called robustness, which means that you get an approximately low-rank tensor-- how do you decompose it? All of these algorithms, I think, are robust, or there are robust versions of them. And typically, if you don't care about optimal sample efficiency, then they're all robust just for trivial reasons. But if you really care about exactly how many samples you need and how robust they are, it becomes a little tricky, because you have to talk about sample efficiency, how the concentration works, and so on. [INAUDIBLE] find the largest [INAUDIBLE]. Yeah, you can kind of think of the ai's as the largest eigenvectors. Largest eigenvectors. Yes. You can roughly think of that, yeah. OK? That's good? OK, cool. OK, sounds good.
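To make the orthogonal, undercomplete case above concrete, here is a minimal sketch of tensor power iteration with deflation for a symmetric third-order tensor T = sum_i a_i (x) a_i (x) a_i with orthonormal components. This is one standard way to "find the eigenvectors" of such a tensor; it is not necessarily the exact algorithm the lecture has in mind, and all function names, dimensions, and iteration counts below are illustrative assumptions.

```python
# A minimal sketch (assuming orthonormal components, and not necessarily the
# lecture's exact algorithm): tensor power iteration with deflation for
# T = sum_i a_i (x) a_i (x) a_i.
import numpy as np

def make_orthogonal_rank_k_tensor(d, k, rng):
    """Build T = sum_i a_i (x) a_i (x) a_i with orthonormal columns a_i."""
    A = np.linalg.qr(rng.standard_normal((d, k)))[0]      # d x k, orthonormal columns
    T = np.einsum('ir,jr,kr->ijk', A, A, A)               # sum of rank-1 terms
    return T, A

def tensor_power_iteration(T, n_iters=100, rng=None):
    """Find one component by iterating x <- T(I, x, x) / ||T(I, x, x)||."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iters):
        x = np.einsum('ijk,j,k->i', T, x, x)              # contract two modes with x
        x /= np.linalg.norm(x)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)            # the "eigenvalue" T(x, x, x)
    return lam, x

def decompose(T, k, n_restarts=10, rng=None):
    """Recover all k components by repeated power iteration plus deflation."""
    if rng is None:
        rng = np.random.default_rng()
    T = T.copy()
    components = []
    for _ in range(k):
        # Several random restarts; keep the direction with the largest T(x, x, x).
        lam, a = max((tensor_power_iteration(T, rng=rng) for _ in range(n_restarts)),
                     key=lambda pair: pair[0])
        components.append(a)
        T -= lam * np.einsum('i,j,k->ijk', a, a, a)       # deflate the found component
    return np.stack(components, axis=1)

rng = np.random.default_rng(0)
T, A = make_orthogonal_rank_k_tensor(d=8, k=3, rng=rng)
A_hat = decompose(T, k=3, rng=rng)
print(np.round(np.abs(A.T @ A_hat), 2))   # close to a permutation matrix (up to sign)
```

With an approximately low-rank tensor estimated from data, the same loop is typically run with more random restarts, which is exactly the robustness question raised in the discussion above.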
So I guess-- OK, cool. So then I'm going to move on to the last subtopic in this course, I guess. It's still about unsupervised learning, but it's about a slightly different type of unsupervised learning, which is more like clustering. And you can see that we are still doing spectral methods-- we're still doing some kind of spectral decomposition-- but it's decomposing in a slightly different way. I guess you will see once I formulate the problem. Before, with the tensor method, you were building some pairwise information between the coordinates-- or three-wise information between the coordinates of the data, right? So here, from now on, I'm going to talk about a different type of approach where you build pairwise information between the data points, and then you do something on top of that. So I guess I'll specify this more clearly. So, spectral clustering. I'm going to discuss, actually, a bunch of different algorithmic setups under this broad framework. This whole spectral clustering framework, I think, was proposed by Shi and Malik around 2000, and I think also Andrew Ng, Michael Jordan, and Weiss in 2001. Maybe this is 2001 and this is 2000. I don't have the references in the lecture notes yet. So it has been around for, like, 20 years. So I'm going to discuss a bunch of classical things about this. And also, next lecture, I'm going to talk about my own work, which builds on top of this to extend it to the deep learning case. So the general idea is that suppose you have-- so we are given n data points. Let's call them x 1 up to x n. And let's say we are given-- for the moment, we are given a similarity matrix. And don't ask me how to get this. Let's just assume that we have a similarity matrix G, which is of dimension n by n. Actually, constructing the similarity matrix is going to be a problem to some extent. But for the moment, let's say we have it. In some cases, we do have this similarity matrix G, where each entry of the matrix is capturing a similarity between two data points, x i and x j. Here you can interpret this as similarity, or just generally some matrix that captures some relationship between data points. I think it's reasonable to think of the entries as similarities, and the larger they are, the more similar the points are, I'd say. But this is not that important. So I guess you can see that this is what I call pairwise information between the data points, not pairwise information between the coordinates. Actually, in certain cases, they are kind of the same. But in some other cases, they are not the same. So, for example, one example could be that the xi's are images, and then rho of xi, xj measures the semantic similarity of the two images. How do you get this? I think it's a little bit tricky, because typically you cannot just take an l2 norm to measure semantic similarity-- there could be two images that look pretty different but are semantically similar. But for the moment, let's assume we're given such a similarity matrix. Example 2, which is probably a more classical usage of these kinds of models, is where you think of the x i's as users of a social network, and rho of x i, x j is equal to 1 if they are friends, like on Facebook, say. And when they are friends, it means that they share some kind of similarity, maybe similarity in jobs or interests or some other things.
So you can think of this as a similarity measure between two users. And eventually-- in this case-- you're going to classify the users into groups. So you want to say, I can detect hidden communities among users from this unlabeled graph. And so basically, the goal is to do some kind of clustering-- just clustering the data points. I guess, in the social network example, maybe you have all of these users-- let's say you have so many users, and there's some friendship relationship between them, something like this, maybe. And then what you want to do is detect some so-called hidden communities. So, for example, you can say this is a cluster, and this is another cluster. And maybe this cluster corresponds to people at Stanford, and this cluster corresponds to people at Berkeley. And so, of course, between Stanford students you have more connections, and between Berkeley students they have more connections, and there are some connections across the groups and so forth. And in this case-- for this example 2-- you can also think of G as a graph. I think even in the general case you can view G as a graph, but it will be a weighted graph. Here, in this social network case, G is binary because Gij is binary. So you can view G as a graph, and Gij indicates an edge. And your goal is to kind of partition-- there are many ways to say what your goal is. So you can say you are clustering data points, or you can say you are partitioning the graph into different parts so that within each part you have more connections compared to across different parts. So in some sense, you can view it as partitioning the graph into components that are separated from each other to some extent. There's no way you can decompose it into completely disjoint parts, but you can somewhat decompose it-- kind of partition the graph into more or less disjoint parts. And so this is the general type of setup. I'm going to discuss probably one or two instantiations of this. So I guess the general theme is the following. I feel like this is a pretty deep observation in math. And the general way to say this is that the eigendecomposition of this graph G really relates a lot to graph partitioning. So, eigendecomposition of this adjacency matrix G-- here, by G, I mean an adjacency matrix-- relates very well to the graph partitioning problem. So you see that in all of the examples I'm going to give, the main approach is to do some eigendecomposition. And actually, sometimes it's not the eigendecomposition of G itself; it's the eigendecomposition of some transformation of G. But the key point is that eigendecomposition seems to relate so much to partitioning and clustering. And it's not that obvious, because eigendecomposition is a very linear-algebraic thing, and graph partitioning is a very combinatorial thing. And this is why it's useful: when you deal with combinatorial stuff-- I'm not really a combinatorics person, but my way to think about it is that for many combinatorial problems, once you can relate them to algebraic or linear-algebraic objects, or polynomials, then you get exposed to different types of tools, and you can sometimes do a lot more than you expected. So this is the general theme.
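Before specializing to the stochastic block model, here is a rough sketch of the generic spectral-clustering recipe implied by the discussion above: build a pairwise similarity matrix over the data points, eigendecompose a normalized version of it, and cluster the rows of the top eigenvectors. The Gaussian-kernel similarity and the use of k-means are common default choices, not something specified in the lecture, and all parameter values below are illustrative assumptions.

```python
# A rough sketch of the generic spectral-clustering recipe: pairwise similarity
# between data points, eigendecomposition, then clustering the embedded rows.
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    # Pairwise similarity between data points (not between coordinates).
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    G = np.exp(-sq_dists / (2 * sigma ** 2))

    # Degree-normalize: D^{-1/2} G D^{-1/2}.
    deg = G.sum(axis=1)
    G_norm = G / np.sqrt(np.outer(deg, deg))

    # Top-k eigenvectors; each row is a k-dimensional embedding of one point.
    eigvals, eigvecs = np.linalg.eigh(G_norm)
    embedding = eigvecs[:, -k:]

    # Cluster the embedded points.
    return KMeans(n_clusters=k, n_init=10).fit_predict(embedding)

# Tiny usage example: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels = spectral_clustering(X, k=2)
print(labels[:50].mean(), labels[50:].mean())  # near 0 and 1 (or vice versa)
```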
And we're going to see probably two or three examples to see why this is the case. So now I'm going to do something more concrete. This is called the stochastic block model. It's a very concrete setup where you can do math and you can instantiate what I mean clearly. So the stochastic block model-- I'll just abbreviate it to SBM-- assumes G is generated randomly from two-- sometimes it can be more, but I'm doing only two groups-- two hidden communities or groups. So the setting would be something like: you have n vertices or n users. And you assume that there are two hidden groups, S and S bar. And this is a partition, meaning S and S bar are disjoint. And then you assume that if two users are from the same hidden community, they are more likely to be connected via an edge. So if i and j are both from S, or i and j are both from S bar, then Gij is 1 with probability p and 0 with probability 1 minus p. And otherwise-- which means that i and j are from different communities-- Gij is 1 with probability q and 0 with probability 1 minus q. And here, importantly, p is much larger than q-- maybe let's say much larger for the moment. How much larger? We'll quantify it in a moment. But you need p to be larger than q. I guess maybe I'll just write larger, not much larger. So, basically, from the same hidden group you have a higher chance of being connected by an edge compared to from different groups. If you draw this-- I don't know how to draw a random graph, but you can think of it as: there is an S, and there is an S bar, with some edges. And then, if p is something close to 1, you're going to have something like this. Within a group, you have a high probability of connecting to each other. And across the groups, you have some sparse edges, maybe just a few edges. OK. And now the goal becomes-- so the goal is to recover S and S bar-- if you recover S, you can recover S bar-- from the graph G. All right. So this is a well-defined data generation model. And basically, we want to discover the hidden groups-- we want to do clustering. And our approach is going to be eigendecomposition. So maybe, before talking about eigendecomposition: for some extreme cases, you don't have to do eigendecomposition, right? So let's just do a somewhat trivial warmup. Suppose p is 0.5 and q is 0. Then you don't have to do any kind of-- I think, almost, you don't have to do anything, because you're going to see two disconnected parts, right? So if p is 0.5 and q is 0, you basically have some S and S bar. And you have some edges-- not complete connections, but some edges here. And then there are clearly two subgraphs. You can just basically-- for example, you can say, I start from this node. I look at all my neighbors and then put them all in S. Because if you see an edge, you know that they are from the same group, right, because if they're not from the same group, you have zero chance of seeing an edge, all right? So, basically, you just need to find all the points you can reach from this single point-- you've got all of these three points-- and then you declare that to be S. And you can do the same thing for the other side. Does that make sense? I saw some confusion. I don't know. Basically, the algorithm I'm going to use is the following.
I start with a node and then see what this node can reach. I put that into my set. And then I do this repeatedly to see what other nodes I can reach. At some point, I reach the boundary-- I reach a closure, I cannot reach any new nodes. And then I declare this to be S, and the rest of them I declare to be S bar. That would work reasonably well for p is 0.5 and q is 0. And that's just because, first of all, you don't have any false positives-- all the nodes you discover should belong to the same group. And secondly, you can also try to show that you can find all the nodes, because if somebody is in your group, they should connect to someone you know. This is the so-called small-world phenomenon, right? If this other user is from the same group, they should be connected to you by some path. So anyway, you can see that even convincing you that this algorithm works for p is 0.5 and q is 0 is not that trivial-- you have to do something right. And this is a combinatorial algorithm. What we're going to do instead is a more linear-algebraic type of algorithm. And you can see everything becomes even clearer, and it's a more powerful algorithm-- you don't need this combinatorial reasoning. So what do we do? We basically just do eigendecomposition. And as a warmup, what we're going to do is the eigendecomposition-- how do I abbreviate eigendecomposition? What's the right shorthand for eigendecomposition?-- the eigendecomposition of G bar, which is the expectation of G. So what is G bar? G bar is the expectation of G. It's a weighted matrix, and each entry is just the expectation of Gij. Now clearly, you don't have this G bar. But just for starters, let's look at this expected version. And what is the expectation? What is this G bar ij? This is going to be equal to p if i and j are from the same class, from the same community, and equal to q otherwise. So that means, basically, if you look at this G bar-- suppose these are the indices for S and these are the indices for S bar, these for S and these for S bar-- when you have both i and j from S, you will get p. So you get p, p like this. And here you get q. So this is G bar. And my claim is that, in this case-- suppose you have access to G bar-- the top eigenvector of G bar is the all-ones vector. And the second eigenvector of G bar is interesting-- it's this vector where you have 1's on the coordinates in S and minus 1's on the coordinates in S bar. So, basically, if you've got the second eigenvector of G bar, you've solved the problem, because you can just read off the community membership from this eigenvector. That's the key. OK. So it sounds a little interesting, right? The proof-- I guess, what's the intuition here? The intuition probably comes from the proof. So let's first do number 1. Number 1 is actually true in much more general cases-- it doesn't even have to be such a special G bar. So what you do is you just compute G bar times the all-ones vector. And what is G bar times the all-ones vector? Basically, you multiply G bar with the all-ones vector, something like this. And when you multiply it out, you are just looking at the row sum-- looking at the sum of the entries in each of the rows. That's what G bar times the all-ones vector is. So what's the sum of the entries in each of the rows?
The sum of the first row is something like p times n over 2 plus q times n over 2, because there are n over 2 entries with value p and n over 2 entries with value q. And every row has the same sum. So this is equal to-- simplifying-- p plus q over 2, times n, times the all-ones vector. So you can see that the all-ones vector is the top eigenvector. Actually, this holds even for more general graphs, weighted graphs-- for any matrix with constant row sums, or for any graph with so-called uniform degree. The degree of a vertex is really, literally, the row sum of the adjacency matrix-- how many edges you have connecting to each of the vertices, that's basically the row sum. So if the degrees of all the vertices are the same, that means the row sums of the adjacency matrix are constant, and that means that the all-ones vector is the top eigenvector. So this is an interesting fact. So, basically, the top eigenvector doesn't really tell you much. You have to go to the second eigenvector to see the interesting thing. So now let's look at the second eigenvector. Let's call this vector u. There are many ways to verify that u is an eigenvector. You can directly multiply it out and see what the eigenvalue is. I think probably the most intuitive way to think about it is the following. So let's look at G bar, and subtract from G bar a background term. [INAUDIBLE] Where? Which one were you talking about? Sorry. The expression [INAUDIBLE]. Oh, sure, sure. But the cardinality of S is n over 2? Oh, I guess, sorry, I didn't assume that-- my bad. I didn't assume that this is an equal partition. I should assume that. So let's also assume the size of S is n over 2 and the size of S bar is n over 2. If they are not equally sized, I think you have to do a few other things to deal with it-- not super important, but yeah. So if S and S bar are not exactly the same size, I think the all-ones vector is not an eigenvector anymore. So you have to re-weight-- you have to massage this matrix G a little bit to make it still true. But we'll get to that in the next section, I guess. So far, OK. So let's assume S and S bar are balanced. And now how do we see that the second eigenvector is this vector u that we're looking for? The way to think about it is that you subtract from G bar this background matrix, 1, 1 transposed, times q-- so, basically, you subtract q from every entry of this matrix. 1, 1 transposed, times q is really just a matrix with all entries equal to q. And then what's left is this matrix. Let's say r is equal to p minus q. You get r-- something like this. So this is S, this is S bar; this is S, and this is S bar. And here we have 0. OK? So now you can see that this matrix becomes nice, because it's a block-diagonal matrix. So for this matrix-- let's call this matrix G prime-- we can verify that G prime times u is proportional to u. And what do you get when you multiply it with u? How do you verify this? This is just because you can do this for the two blocks separately. So I'm going to apply the r block to the 1's and the other r block to the minus 1's. You do these two things separately, and basically, you get r times n over 2 for the first set of coordinates, and you get minus r times n over 2 for the second set of coordinates. So this is r times n over 2 times u itself. And also, u is orthogonal to the all-ones vector, just because half of the entries are positive and half are negative.
So you take the inner product, and it becomes 0. So that's why, even if you look at G bar times u, this is equal to G prime times u, because the background you subtract off is orthogonal to u. So that's why this is equal to r times n over 2 times u, which is p minus q over 2, times n, times u. OK. So that's why u has eigenvalue p minus q over 2 times n. So I think the main point is that after you subtract off this background term, this G prime is block diagonal. And this means that the eigenvectors align with the blocks. I think this is the fundamental thing that we are looking for. Maybe, just to generalize this and make it a bit more convincing, suppose you have a matrix A which looks like this: just 1, 1, 1 here in this block, and a lot of 1's here, and a lot of 1's here. Suppose you have three blocks now instead of two. Then because it's block diagonal, we know that every block can do its own thing, right? So then, if you look at the eigenvectors, you can see that each of these three vectors is an eigenvector, because you can treat each of the blocks separately, right? So, basically, you can say, I'm going to choose the all-ones vector on the first block and then 0 at all the other places. That's still an eigenvector. And then, if you have these three eigenvectors, look at every row here-- so this is 0, this is 0, this is 1, right? So this row gives the cluster ID of the vertex. Each row gives us the cluster ID of a vertex. So I think this is the fundamental intuition about why eigenvectors are useful for capturing the clustering structure in the graph. It's just because in the extreme case, when you have extreme clustering-- so every block, every subset-- in these three blocks, or three subsets, they have, really, just strong interconnections and no cross-group connections. In that case, the eigenvectors just strongly align with the block structure. And here, what makes things a little bit more complex is that you have some background. If you have some more things here and here-- some random entries, small entries in other places-- then it would elevate the entire matrix a little bit, right? But it wouldn't change the eigenspace fundamentally. That's pretty much the intuition. Any questions? So it seems like here you have the [INAUDIBLE] structurally. And then, as you have some permutation, [INAUDIBLE]-- Right. --you would have to [INAUDIBLE]. Right. Right. So how would you permute this? That's a great question. Actually, that's exactly why this is working, because-- so the question is, what if you permute this, right? So if you permute it, it's kind of like you're permuting-- the eigenvector will permute accordingly. So suppose you have a-- I'm not sure whether that makes sense. So, for example, suppose you declare this part and this part to be the first block, and then this part and this part will be the second block. I think your eigenvectors will permute-- the coordinates of the eigenvectors will permute accordingly. And that's why the alignment is maintained, and you can discover the hidden structure. OK. Sounds good. So I guess maybe another thing-- I'm not sure whether this is a point of confusion for you.
It could be a confusion, couldn't it? So here, the eigenvectors-- there are no negative values in this construction, right? But the reason I didn't have negative values is just that it makes things simpler-- so, for example, even this vector is also an eigenvector, because it's the sum of two eigenvectors. And all of these are going to have the same eigenvalue. So any combination of the eigenvectors is still an eigenvector. That's how you can get negative values. And there's something special about the all-ones vector because-- well, here, in this A example, there is nothing special about the all-ones vector, because there is no background noise. So what happens is that when you add some background noise to it, then the all-ones vector stands out. So here you have three eigenvectors that all have the same eigenvalue. The all-ones vector is in the subspace spanned by these three eigenvectors, right? The all-ones vector is indeed a linear combination of these three things. And when you add the background noise, the all-ones direction stands out, and then you are left with two other directions which are still the same. And those two other directions will tell you the block structure. So maybe another way to think about this: suppose you have two blocks. If you don't have any background noise, then the eigenvectors will be 1, 1, 1, 0, 0, 0 and 0, 0, 0, 1, 1, 1. These will be our two eigenvectors. But then, when you add the background on-- and you can represent these eigenvectors in two different ways; this is one valid eigensystem, and you could also write it like this, just because you have different ways to represent a two-dimensional subspace of eigenvectors. But when you add the background noise, then this one will stand out. So that's why you can only use this system to see it, but not the other one. I'm not sure whether that makes sense. No? So, basically, without the background noise, you have this direction, which is 1, 1, 1, 0, 0, 0, and this direction, 0, 0, 0, 1, 1, 1. And you can also use this direction, which is the all-ones vector, and this direction, which is 1, 1, 1, minus 1, minus 1, minus 1. So you have these two different coordinate systems for the same subspace. And when you add background noise, you're going to elevate-- you're going to increase the strength in the all-ones direction, but the subspace doesn't really change. This direction becomes the top eigenvector, and this becomes the second eigenvector. But fundamentally, nothing really changed. I hope this clarifies rather than confuses. OK. So I guess I'm running out of time. Let's see. I'll take two minutes to wrap up and give a quick overview of what we do next. So, basically, if you really want, you can verify that G bar is actually equal to p plus q over 2 times 1, 1 transposed, plus p minus q over 2 times u, u transposed. This is the eigendecomposition of this matrix-- OK, of G bar. So next, what happens is that we only have access to-- in reality, we only have G. So what do we do? The intuition is just that G is approximately equal to the expectation of G in certain respects. It's not true that every entry of G is close to every entry of the expected G, because if you take one entry, G is binary, and the expectation of G is p or q-- there is no way they are close. But this is in terms of the spectrum.
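The claim that the observed G is close to its expectation "in terms of the spectrum" can be checked numerically. The sketch below generates a balanced two-community stochastic block model and reads off the communities from the sign pattern of the second eigenvector of the observed adjacency matrix; the particular values of n, p, and q are arbitrary illustrations, not anything prescribed by the lecture.

```python
# A minimal numerical check (a sketch, not the formal argument): for a balanced
# two-community SBM, the second eigenvector of the observed G already recovers
# the hidden partition by its sign pattern.
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 400, 0.5, 0.1                      # n vertices, within / across edge probabilities

# Hidden balanced partition: first half is S (+1), second half is S-bar (-1).
labels = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])

# Sample a symmetric adjacency matrix with P(G_ij = 1) = p or q.
probs = np.where(np.outer(labels, labels) > 0, p, q)
upper = np.triu(rng.random((n, n)) < probs, k=1)
G = (upper | upper.T).astype(float)

# Eigendecomposition of the observed G (eigh sorts eigenvalues in ascending order).
eigvals, eigvecs = np.linalg.eigh(G)
second = eigvecs[:, -2]                      # eigenvector of the 2nd-largest eigenvalue

# Read off the communities from the sign pattern and compare to the truth.
pred = np.sign(second)
accuracy = max(np.mean(pred == labels), np.mean(pred == -labels))
print(f"fraction of vertices recovered correctly: {accuracy:.3f}")
```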
So, essentially-- even though we need a little trick to make this work nicely-- we want to show that the operator norm of the difference between these two is small. Then, even if you use G to do the decomposition, this means that decomposing G is similar to decomposing the expectation of G. That's pretty much it. And now you can see that the concentration inequalities we discussed in the earlier lectures of this course become useful. So concretely, what you do is the following. You write G as G minus the expectation of G, plus the expectation of G. The expectation of G is just G bar. So this is G minus the expectation of G, plus p plus q over 2, 1, 1 transposed, plus p minus q over 2, u, u transposed, right? And you also say that this part doesn't matter too much, right? It doesn't really change your eigenspectrum. To make it cleaner, what you can do is subtract this part, because u is something you want to discover, while the top eigenvector is something you already know. So we probably shouldn't care about the top eigenvector; you should just directly look for the second eigenvector. What you do is move this to the left-hand side. So you look at this matrix, G minus p plus q over 2, 1, 1 transposed. This is something you know, because G is something you know. And this matrix is equal to the noise term plus p minus q over 2 times u, u transposed. So you can view the first part as a perturbation, and the second part is the thing you are really looking for. So, basically, you start from the left-hand side and take an eigendecomposition. You do the eigendecomposition of G minus p plus q over 2, 1, 1 transposed, and the hope is that the top eigenvector of this matrix is close to u. That's basically our goal. And how do you make sure that this is true? It suffices to show that G minus EG, in terms of the spectral norm, is much, much smaller than-- that the noise is much smaller than the signal in terms of the operator norm. And so, in some sense, you need some robustness of eigendecomposition. I guess I didn't really discuss any of the existing theorems, but essentially, if you have this, you can prove that the eigenvectors of the sum of these two matrices are very similar to the eigenvectors of one of the matrices. And this is called the Davis-Kahan theorem. I won't have time to talk about all of this, but it intuitively makes sense: if the noise is small enough in terms of the spectrum, the operator norm, then you get the signal. And so how do you show this is true? I'm going to discuss that at the beginning of the next lecture. Essentially, you just have to prove some concentration inequality using some of the tools we had in lecture 3 or 4 of this course. OK, any questions? What about multiple clusters, where the noise was [INAUDIBLE] decomposition [INAUDIBLE]? So if you have more clusters, the noise will affect the entire spectrum, and it becomes a little more complex. First of all, if you have no noise, then you can still prove that the eigenvectors are enough for you to recover the blocks. But the robustness part will be a little tricky, because now you have more eigenvectors, and the noise has an influence on each of them. And you have to, again, control some noise-to-signal ratio using slightly more advanced techniques. But essentially-- yeah, I think it's just really the mathematical part that's a little bit more complicated. Fundamentally, it's doing the same thing. [INAUDIBLE] question.
Is it sufficient to-- my impression is [INAUDIBLE] of this noise, this eigenvector, you must always do [INAUDIBLE]. The new second eigenvector-- we want to show that it's close to u, which I guess is what the eigenvector of the new matrix is supposed to be. When we analyze the operator norm of G minus [INAUDIBLE] G vector, it feels like we're trying to bound how much all of the eigenvectors change. Right. Right. Do we really need to do that? Or is there a way we can go around it and just argue about how the [INAUDIBLE]? Yeah. So I think that's a great question. I guess, just to rephrase your question: do we really need to say that G minus EG is small in all directions, or is it enough to say that G minus EG doesn't mess with the direction u? I think you do have to say, to some extent, that G minus EG is small in all directions, because if G minus EG is very, very big in one direction-- even if that direction is completely orthogonal to u-- then that direction will become the new top eigenvector, right, because you're taking a max. But exactly how you measure this-- there is still some room to negotiate. You do have to, in some sense, say something about all directions of the noise. OK. OK. Thanks. I guess, see you Monday-- or Wednesday.
AI_LLM_Stanford_CS229
FeiFei_Li_Demis_Hassabis_Using_AI_to_Accelerate_Scientific_Discovery.txt
My name is Fei-Fei Li. I'm the co-director of the Stanford Institute for Human-Centered AI. So welcome to Stanford HAI's headquarters space. It's truly an honor to be here. First of all, welcome to President Marc Tessier-Lavigne for attending this. And our guest of honor doesn't really need any introduction, but nevertheless, I will get started here. We are very honored to host a talk, and then later a discussion, by Demis Hassabis, the founder and CEO of Alphabet's DeepMind. Demis started his career in video gaming, developing new games at a few studios before launching his own. So for those of you who are taking away games from your kids, think twice. Then Demis headed back to school for a PhD in cognitive neuroscience to better understand the architecture of the brain-- at MIT, right? So through various places-- MIT, Harvard, University College London-- Demis has always been this very creative thinker and leader in thinking about the relationship between machine intelligence, the human brain, and so on. And then in 2010, he co-founded DeepMind. You all know DeepMind. It is one of the first true AI companies of our era, and it was acquired by Google in 2014. DeepMind has made breakthroughs in data center energy consumption, generative-- in this case, degenerative-- eye conditions, and protein folding through the very impressive AlphaFold work that we all have heard of. It has made significant advances in the realm of deep learning and reinforcement learning. And true to Demis's gaming roots, DeepMind has also beaten a few world champions at the very ancient game of Go. Now DeepMind is looking to apply techniques from AlphaFold to nuclear fusion in hopes of halting our climate crisis. I hope we're going to hear more about that. And of course, DeepMind and Demis's teams are exploring ideas around-- well, they have pioneered the term artificial general intelligence, which Demis has called an epoch-defining technology that will change the very fabric of human lives. And before I turn the podium over to Demis, I do want to say that when Stanford HAI was founded back in 2019, Demis was part of our founding Distinguished Fellows, acknowledging his profound impact in the field of AI. And I cannot think of a better person, a better name, a better leader to be here today to talk about AI, given all that is going on in the world. AI has truly had a public awakening moment. It's no longer just a niche field that nerds like us play around in; it's impacting human life, society, and our future. And HAI is meant to be one of the forums that will host this kind of intellectual discourse about AI and its societal impact. So especially to the student audience today: not only are you here for a treat to hear what Demis has to say, I hope you engage in a dialogue with him after this talk. So without further ado, Demis, please. Thank you. [AUDIENCE APPLAUSE] Thanks so much, Fei-Fei, for that very generous introduction. It's really amazing to be here, and thanks for inviting me. As Fei-Fei mentioned, HAI was set up in, I think, 2019, and this is actually my first time being here physically, because obviously COVID and all the travel restrictions and everything got in the way. We've been meaning to do this for years. So now finally I'm here, and it's great to be here. And I hope to be here a lot more often. I have a lot of reasons to come over to the Bay Area, so I hope to pop in here every now and again.
So in my talk today I'm going to cover the real passion of mine, which is using AI to accelerate scientific discovery itself. And of course, at the end of the talk, I'll also discuss a lot about the current vogue of generative models and generative AI and large language models, and the work we're doing in that area too. And maybe that can lead into some of the Q&A and discussion that we'll have afterwards. DeepMind was founded way back in 2010-- it's almost like the medieval times of AI now. That's 12, 13 years ago now. And it's been incredible to see-- we've been in this very amazing position of being at the forefront of where the whole AI field was going. We expected things to go sort of like this, but even for us, it's been amazing to actually just live through that. Back in 2010, it was very difficult to even-- I remember trying to raise our seed round of a few hundred thousand dollars. It was almost impossible to do. And these days, it seems like $1 billion rounds are being done every other week. And that's just over a decade, a very short amount of time. And back then in 2010-- some of you, most of you, are not old enough to cast your minds back to that-- nobody was talking about AI. Very few people were talking about AI, and certainly not general AI. And certainly in academia, where Shane Legg and I-- Shane Legg's the chief scientist of DeepMind-- were studying, you'd almost get eye rolls from professors and others if you discussed this more general AI, or the original aims of the AI field to build a human-like intelligence. So it's been astounding to see what's happened, and obviously, things are changing incredibly fast, even month by month now. So we set up DeepMind in 2010 because we felt there were a lot of different techniques coming together that we could see. Deep learning had just been invented, and we've always been big proponents of reinforcement learning, and we wanted to bring those things together along with some understanding we had about the human brain. In my PhD, I worked on the hippocampus, and memory systems, and imagination, and made some interesting discoveries in that domain that I thought would also potentially carry over into ideas for AI systems and architectures. And we thought all of that was coming together, along with the advent of a lot of compute power, and specifically GPUs, which ironically, of course, were invented for games. So everything for me, as you'll see, comes back to games one way or another. So what was our mission for DeepMind? Well, we thought of it as a kind of Apollo program effort. If we really went for this with intense focus and ambition, we felt that a lot of progress could be made quite quickly. That's how it turned out. And our mission statement was to try and solve intelligence. And by solve, I mean fundamentally understand the nature of intelligence, and then try to recreate that in an artificial construct, and then use it to advance science and benefit all of humanity. And we're still very much on that same mission today. So what did we start off with? Well, I think our first big result was back in around 2013, where we finally managed to scale up what then became called deep reinforcement learning systems to something that was actually at a significant scale and quite challenging for humans to do.
And we went to the earliest game systems that were popular at all, which was the Atari systems from the 1970s-- you might recognize Space Invaders here-- and the set of around 50 games that were on the emulators for Atari systems. And what we built with Atari DQN, our first big successful system, was a system that learned how to play and maximize the score just by being given only the raw pixels on the screen. So it was very much probably the first example of that kind of end-to-end learning system working on something really challenging, perceptually challenging, like an Atari game. So that was an incredible moment for us. And I remember back in 2011, 2012, when we were struggling to even win a single point in a game like Pong, and we were just wondering, well, maybe we're just 20 years too early with these ideas of learning systems. And then suddenly, it won a point, then it won a game, and then it didn't lose any points. And then eventually, by 2013, it was playing all the Atari games. Of course, we then took that much further. And maybe still the thing we're most well known for is the program AlphaGo, that Fei-Fei mentioned, where we tried to crack the game of Go, really the Everest of game AI. And we needed to do it through self-learning systems. So all of you hopefully will know Go, this super complex game that's played in Asia. It's got more possible positions-- 10 to the 170 by one estimate-- than there are atoms in the universe. So one cannot possibly brute-force solutions to Go. And in fact, furthermore, even people who play Go, the top masters, don't really understand and can't explicitly explain what the rules or heuristics are that they're following. So unlike with Go, with chess we can distill chess players'-- grandmasters'-- knowledge into a set of rules and then program basically expert-system chess computers to play pretty well, like Stockfish. That approach was impossible with Go. So you have to use this learning approach. You have to allow the Go system to learn for itself. And then obviously, famously, in 2016 we had this massive million-dollar challenge match in Seoul. 200 million people watched the match around the world, and AlphaGo famously won that match 4-1. But more important than it winning was the creative strategies that AlphaGo came up with. Most notably move 37, which is shown here on this diagram, circled in red, in game 2-- that really blew the Go world and all the Go experts away, in terms of this novel idea that no human expert would have ever played, because it's basically on the fifth line and it's very early on in the game, and that's unthinkable to do from a space perspective. But of course, a hundred moves later, that stone, the move 37 stone, that black stone, happened to be in the perfect position to decide the match and decide the fight that was spilling out from the bottom left. So it was almost like AlphaGo presciently put it there, a hundred moves in advance, for it to be in the perfect position. So it's these creative strategies-- and I just want to unpack creativity slightly-- that I think are one of the promises of these learning systems. The point is that they may be able to actually come up with solutions to problems that we ourselves don't know how to solve. And with expert systems, of course, this is not possible, because you can only directly program a solution that you already know yourself how to produce.
So you're inherently limited with expert systems to solutions that other experts already know how to do. So I think in terms of creativity, we can think of three different levels of creativity-- at least three; maybe that's something interesting for us to discuss. I think you can think of the lowest level, the most mundane creativity, as being interpolation-- averaging things together. So imagine you show a system a million pictures of cats, and then you say, come up with an average of all of those pictures. That will be a novel cat-- it won't be exactly the same as any of those other million cats-- but it's just a simple averaging process. Then you have extrapolation, which I think AlphaGo has shown: given all of human knowledge about Go, it then plays 10 million games of Go against itself-- or at least AlphaGo Zero, the successor version of AlphaGo, did. And then it can extrapolate new strategies never seen before, not in its training corpus. But then there's still one level of creativity to go, I would say, which I'm calling invention, or out-of-the-box thinking, which is: could these systems actually invent Go? So not just a new Go strategy, but actually invent Go or invent chess. And our systems today still can't do that. But I don't think there's anything magical about that level of creativity-- maybe you can call that true creativity. I think it's still a systematic process that can probably be modeled by these systems. I just think one thing that's holding them back is that we don't really know how to express that demand to a system in a way that it could understand. If I was going to get a computer system to design a game like Go, I would say something like: invent a game that only takes five minutes to learn but many lifetimes to master, is aesthetically beautiful, and can be completed in 10 hours of play so it fits into a human day. These kinds of things are what I would give as instructions, and then I'd hope it would come up with something like Go. But there's no real way, for our current systems at least, to take on board such abstract, conceptual instructions and do something with them. But I think maybe that's within reach now. So how does the self-play system work? And I'm just going to combine together, actually, a range of systems-- AlphaGo, AlphaGo Zero, AlphaZero, and even our more recent thing, MuZero. So AlphaZero was our program that could end up playing any two-player game to better than world champion level. And it's a very simple process if you break down this self-play idea-- which I think is still something worth considering in the generative AI space, too; the analog of this, I don't think, has been used yet. But what we did here is we started with V1 of, let's call it AlphaGo or AlphaZero, and it starts off playing randomly, whatever game it is you've given it to play. It doesn't know anything about strategies or anything, so it's just playing purely randomly. And you play a hundred thousand games like that, and then you generate your V1 data set. And you use that data set to train a V2. And what the V2 is trying to do is predict what sorts of moves are likely in certain positions, and also who ends up winning the game from that position-- what's the probability of either side winning from that position. So those are the two jobs of the network: to try and model what the game space is like. So then you train up V2.
You then have a face-off match, a hundred-game match of V1 versus V2. And you have some kind of threshold-- in our case, we said a 55% win-rate threshold-- where if V2 beats V1 by above that threshold, you assume that it is significantly better. And then you replace the master system, the generator system, with that new V2 system. And you go around, of course, iterating this loop. So now you play another hundred thousand games with V2, which is slightly stronger, so that means the generated data are of slightly better quality. And then you use that to train V3, which is, of course, then a slightly better system, and then you face it off against V2 and you continue. Now, obviously, if V3 doesn't beat V2, then you continue to generate more data-- another hundred thousand games with V2. So then now you have 200,000 games to train a new V3, and eventually your V3 will beat V2. And if you do this 17 times, for AlphaGo Zero, you go from playing completely randomly to being stronger than the world champion in just 17 epochs, which can be done in a matter of hours. So this is a very powerful system. And what's really going on, if you step back, is that you're creating a model to model, for example, Go-- the dynamics of Go, and the strategies of Go, and the likely positions. And that allows you to do tree search on top, but do it in a tractable way. So instead of having to explore all possibilities, like in gray here, which would be intractable, you actually use the model to narrow down your search to just the most reasonable options. And that allows you, in a certain amount of constant thinking time, to actually find a near-optimal or very, very strong line that you want to play-- an actual move you want to play, here shown in pink. So I'm going to come back to this at the end, because I think this is a very general way of thinking about AI and the idea of coming up with a solution to your problem. So we've been very fortunate. Over the last decade, we've been part of creating many big breakthroughs in all sorts of different games, all of them landmark results: the Atari one, the AlphaGo one I mentioned, AlphaZero-- I just talked about generalizing that to every two-player game-- and then finally AlphaStar, which was our program to beat grandmaster-level players at StarCraft II, which is the most complex real-time strategy computer game. And it has extra challenges over board games, of it being partially observable; it needs things like long-term planning. So it's complex in more challenging ways than a board game. And so this was all of our work in games. Now, of course, I love games-- always have done: playing them, designing them, using them for training AI. I've used games in every way possible. But although it's been very fun to do that, it's always been a means to an end, not an end in itself. The end was never to just win at Go or win at StarCraft. It was to use games as a convenient proxy to test out our algorithmic ideas so that we could apply them to important real-world problems. So that's what has been so exciting for me in the last couple of years, I would say: that first 10 years was really about building the foundation, the building blocks, of what we see today in the modern AI industry. But as of a couple-- two or three-- years ago, I felt that the time was right, actually.
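As an aside on the self-play loop described a moment ago: its control flow-- generate games with the current champion, train a challenger, promote the challenger only if it clears a face-off threshold-- can be sketched as below. Everything game- and model-specific is stubbed out with toy stand-ins; only the loop structure and the quoted numbers (100,000 games per generation, 100-game face-offs, a 55% gate, 17 generations) follow the description in the talk, and resetting the data set after a promotion is a simplification, not something the talk specifies.

```python
# A structural sketch of the generate / train / gate self-play loop.
# The game- and model-specific pieces are toy stubs; only the control flow
# mirrors the description in the talk.
import random

GAMES_PER_GENERATION = 100_000   # from the talk
FACEOFF_GAMES = 100              # from the talk
PROMOTION_THRESHOLD = 0.55       # from the talk

def self_play(model, n_games):
    """Stub: return a dataset of (position, move, outcome) records."""
    return [("pos", "move", random.choice([+1, -1])) for _ in range(n_games)]

def train_challenger(model, dataset):
    """Stub: in reality, fit move probabilities and win probabilities."""
    return {"strength": model["strength"] + random.uniform(0.0, 0.2)}

def faceoff_win_rate(challenger, champion, n_games):
    """Stub: toy head-to-head where the stronger model wins more often."""
    p_win = min(max(0.5 + challenger["strength"] - champion["strength"], 0.0), 1.0)
    wins = sum(random.random() < p_win for _ in range(n_games))
    return wins / n_games

champion = {"strength": 0.0}     # V1: plays (effectively) at random
dataset = []
for generation in range(17):     # 17 generations, as mentioned for AlphaGo Zero
    dataset += self_play(champion, GAMES_PER_GENERATION)
    challenger = train_challenger(champion, dataset)
    if faceoff_win_rate(challenger, champion, FACEOFF_GAMES) > PROMOTION_THRESHOLD:
        champion = challenger    # promote: the challenger becomes the new generator
        dataset = []             # simplification: fresh data from the stronger model
print("final champion strength (toy number):", round(champion["strength"], 2))
```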
We had powerful enough, sophisticated enough algorithms that we could now start really tackling some very challenging problems, both in industry through our partners at Google-- and pretty much every product that you use at Google now has some form of DeepMind technology in it-- but also, obviously and excitingly, going back to the mission statement and my particular passion of using AI for scientific discovery itself. So I'm going to go through probably our biggest result, and the one I'm most proud of so far, which is our AlphaFold program. Many of you maybe will have heard about that, but for those of you who don't know what the protein-folding problem is, it's this incredibly important problem in biology of basically predicting the 3D structure of a protein directly from its amino acid sequence, its one-dimensional sequence. You can think of it as a bit like a genetic sequence for the protein. And proteins are the workhorses of biology, the workhorses of life. Every function in your body, every function in life, pretty much depends on proteins. And the function a protein performs in your body is very dependent on the 3D structure that it ends up folding into. So of course, knowing the structure can be really important for things like drug discovery and understanding disease. But unfortunately, it can take on the order of three or four years-- sometimes many years-- to even experimentally determine one structure, depending on how difficult the protein is to crystallize and so on. And in fact, my biologist friends used to always tell me the rule of thumb is that it can take one PhD student their whole PhD to basically determine the structure of one protein. Of course, Christian Anfinsen famously said in his Nobel acceptance speech in 1972 that this should be possible-- the 3D structure of a protein should be fully determined by the amino acid sequence. So this protein folding problem should be solvable in theory, but of course, he didn't say how. It's a bit like Fermat's last theorem-- I'll just write it in the margin. And then 50 years of toil later, people are still trying to figure out why this should be the case. So it's been literally a 50-year-old grand challenge in biology. I found out about it actually as an undergrad at Cambridge in the late '90s. A couple of my biologist friends were obsessed with this problem and really explained to me, first of all, what it was, but also how much additional research it would unlock if one could actually solve this problem. And that always stuck with me, and I tend to do this when I hear interesting things like that-- I file them away in a metaphorical little book of things to do. And one day, I thought this would be a very suitable problem for AI. So that's the question then: can the protein structure prediction problem be solved computationally? Why is this such a hard problem? Well, again, Levinthal, one of the contemporaries of Anfinsen, had famously conjectured this paradox of, well, there's 10 to the 300 possible shapes an average-sized protein can take. That's what he calculated back of the envelope. But somehow, obviously, nature spontaneously folds these proteins in your body in milliseconds-- trillions of these proteins. So how is this possible, that the physics is somehow solving this in a very tractable amount of time? So in a way, that should give you hope that there is some landscape to learn about.
Maybe it's a quantum one, but there is some landscape that your AI system should be able to learn about in order to solve this problem on a tractable time scale. The other reason we picked this problem, apart from its significance in terms of enabling many other things downstream, is that there's this fantastic competition called the CASP competition, which is a little bit like the Olympics of protein folding. It runs every two years and has done since 1994, so it's 30 years in now. And it's very rigorously run; it tests the best computational systems every two years. And the beauty of it is that experimentalists, when they've just resolved a structure experimentally but haven't published it yet, give it to the CASP competition to be a blind test. Then you're supposed to hand your prediction in, and by the end of the competition, they publish the real structure, and obviously you can then compare your prediction to the ground truth. But it's a double-blind experiment: you don't know who the competitors are, and you also don't know what the real structures are. So it's a really clean experiment for testing computational systems. And before we came along with this, this was basically what had been going on in the field for the decade before we got involved with AlphaFold. We started AlphaFold in 2016, actually-- the day after we got back from Seoul and the AlphaGo match. This was our next big project. And you can see here, this is a bar chart of the winning scores of the top team in each of the CASP competitions going back to 2006, so every two years. And on the y-axis, you can see the score they're rated on, which is called GDT, but you can think of it roughly as a percentage accuracy: how many of the residues, the amino acids, did you get within a tolerance of the correct ground-truth position? And that tolerance is a couple of Angstroms. And you can see the winning teams for a decade were only getting about 30% to 40% of those correct-- in the hardest category, I should say, the free modeling category, where there aren't templates or other known things that you can rely on. So they're truly unknown from their sequence. And essentially, this is useless for experimentalists. It's just nowhere near accurate enough, and many parts of the structures would be completely random. And what we were told by the organizers is that, in order for this problem to be considered solved, your prediction would have to be accurate to within the width of an atom-- so that's less than 1 Angstrom of error-- and you'd have to do it on over 90% of the residues. So you're looking for roughly a 90 GDT score for that to be of use to an experimentalist. And one way to think about that is: if you crystallized a protein twice in two different labs, and you tested it, and then you compared the two structures you got experimentally, there would be some disagreement between those two experimental structures. So there's no perfect ground truth, in a sense-- there would be experimental disagreement at about that fidelity. So if you get within that bound, then you're starting to become really useful, potentially. So we entered in 2018, after a couple of years of work, with AlphaFold 1. That won the competition by a huge margin. We improved over previous scores by almost 50%, getting close to 60 GDT. People were amazed.
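As an aside on the GDT score described above: a heavily simplified, GDT-like accuracy measure-- the fraction of residues whose predicted position lands within a fixed tolerance of the experimental one-- can be computed as in the sketch below. The real GDT_TS used by CASP averages over several distance cutoffs after optimal superposition; this toy version skips both and is only meant to convey the "percentage of residues within a tolerance" idea. All coordinates and parameter values are made up.

```python
# A toy, GDT-like accuracy measure: fraction of residues within a distance cutoff.
import numpy as np

def gdt_like_score(predicted, experimental, cutoff_angstrom=2.0):
    """Fraction of residues within `cutoff_angstrom` of the ground-truth position."""
    distances = np.linalg.norm(predicted - experimental, axis=1)
    return np.mean(distances <= cutoff_angstrom)

# Toy example: 100 residues, predictions perturbed by ~1 Angstrom of noise.
rng = np.random.default_rng(0)
true_coords = rng.uniform(0, 50, size=(100, 3))           # hypothetical structure
pred_coords = true_coords + rng.normal(0, 1.0, size=(100, 3))
print(f"GDT-like score: {100 * gdt_like_score(pred_coords, true_coords):.1f}")
```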
It was the first time I would say cutting edge ML techniques had been used as the core of a protein folding system, and it had this huge improvement straight out of the box. But the problem is obviously, we wanted to get to this 90 GDT or less than 1 Angstrom error. And we managed to do that with AlphaFold 2, but actually, we had to go through a whole rearchitecting using the ideas and experience we'd learned from AlphaFold 1. But it needed a completely different approach in the end and different architecture in order for AlphaFold 2 to work and for us to crack that, including-- one of the important innovations was actually building some physics constraints and chemical constraints into the network, into the properties of the network. So it didn't have to learn things like Van der Waals forces, and atoms shouldn't be overlapping, and things like that, and bond angles. We actually solved that in a expert system way but then without harming the learning. So I don't there's many other examples of systems that have combined prior knowledge, one could say, prior domain knowledge in that way, handcrafted with a learning system, because usually, that interferes with the learning system. So this was obviously amazing. We released this in 2020. The competition was in the summer of 2020. The results came out in the end of 2020, and then the organizers of the competition, this amazing guy called John Moult who has run the-- founded the competition, run it for 30 years, was almost in tears declaring that the problem had been solved. And obviously looking back at 2016, and I was talking to him about this, given the trajectory, he was despairing of the fact that it would ever be solved in his lifetime. So that was fantastic. Here's an example of a really-- proteins are super beautiful once you actually get into them. They're little bio nanomachines. They're incredible. Once I started really looking into this back in 2016, I was very integrally involved in this project. They're exquisite biological nanoscale machines. And it's amazing that this complex dance can work in our bodies and gives birth to life, really. And so here's one really complex protein, and you can see AlphaFold actually iteratively gets some recycling processes, and it goes through the network several times and then eventually gets to the final shape which you can compare is very similar. The green and blue, the greens, the ground truth, and the blue is the prediction. And they're almost overlapping for something as complex as this. So what did we go on and do then? We decided the best way to get the maximum impact from AlphaFold in the world and the maximum benefit for humanity was to publish the methods, open source, the code. And then furthermore, we realized-- this is the Christmas of 2020 now. We had a pretty productive 2020 and lockdown-- was that we decided actually, how would we give this to the world? And the normal way you normally do this is you normally provide a service on a server, and people give you their amino acid sequences, and then a few days later, you send back to them. You email back to them the predictor structure. And that's the normal way things were done. But because AlphaFold was so fast and also so accurate, we just decided we would just generate all the proteins known to science. So actually over that Christmas, we did the whole human proteome over the Christmas holidays, and then we did 20 model organisms, the most important ones for research and things like plant science, and so on, important crops. 
And then we did every protein known to science, so all 200 million. And this is really important because in humans, the coverage was around-- I think around 17% or 18% had been found experimentally, and we more than doubled that with high-accuracy predictions. But for other organisms, let's take plants like wheat or rice, very important crop species, less than 1% was known, partly because their genomes are much bigger for evolutionary reasons, but also, there's just a lot less funding and a lot less experimental work done on those types of organisms. So it's actually even differentially useful to people like plant scientists. So that's been amazing to see. And then once we'd done that, obviously, we wanted to put that on some kind of database. We thought about doing our own one, but actually, we realized it would be much better to just plug into the main vein of what biologists use every day. And there's all sorts of amazing databases biologists have created, one called UniProt. Obviously, there's the PDB itself that has all the crystal structures in it that we learned from, 150,000 structures that have been accumulated over the last 40, 50 years. And we teamed up with EMBL-EBI, the European Bioinformatics Institute just up the road from us in Cambridge in the UK, and they already host some of the world's best databases. And we very quickly had an amazing collaboration with them over two or three months to create a new database, and host it, and put all the AlphaFold predictions onto that database, and also plug it into all the other databases-- genetics databases and other things-- other ways where you might come to find the structure that you're looking for. So that's been amazing. And then on the safety and ethics side, of course, we take this very seriously. We always have done. I'm going to talk a little bit about that towards the end. We actually consulted over 30 experts in the area, from obviously, biologists, but also from pharma, and biosecurity, and even human rights to just make sure that what we were releasing wasn't dangerous in some way and the benefits would certainly outweigh any of the risks involved. And they all came back unanimously saying that they thought the benefits far outweighed any potential risks. And then what has AlphaFold been used for over the last 18 months, nearly two years now? An incredible plethora of things. If you're interested, I'd encourage you to go to our unfolded.deepmind.com, a website where we're collecting use cases from fantastic scientists around the world who've used it and contacted us and told us how transformative it has been for them. And if any of you are biologists in the audience-- I know some of you are. And if you've been using it, we'd love to hear from you, what you've been using it for. But here's just some examples. John McGeehan and team from Portsmouth are using it to design plastic-eating enzymes to deal with plastic pollution in the oceans. People are using it for antibiotic resistance. It was used to help determine the nuclear pore complex, one of the biggest protein complexes in the body that governs a gateway to let nutrients in and out of the cell nucleus. That was a combination of experimental data and AlphaFold predictions. Jennifer Doudna and her institute have been using it for crop sustainability, improving crop sustainability in the face of climate change, and for malaria vaccines.
And then most recently, I was pleased to see Feng Zhang at the Broad Institute using it to create a molecular syringe as a new drug delivery mechanism, which is fantastic in his recent Nature paper. So it's been really amazing to see and actually more than we could have dreamed of. And just 18 months in, we've had a million researchers use AlphaFold and the predictions AlphaFold has made. We think that's pretty much almost every biologist in the world now, almost every country, and it's already been cited 10,000 times already in just 18 months, the paper, the methods paper, and obviously got some nice accolades from science and from Nature, as well. So what's my-- all these things are happening, and every year something amazing is happening. And I'm trying to make sense of this all while being in the middle of it, and if anything, accelerating the pace of progress. And it's interesting. There's a few things, takeaways, that I maybe want to leave you with, which is, so first of all, there's this notion I'm been saying and thinking about, of science at digital speed. And what do I mean by that? I mean science at this technology speed, so being able to do science at what we normally think of as digital technology speed. And I think AlphaFold is maybe the first example of that but I think it's just going to be the first of many. And I think AlphaFold is a science digital speed in two ways. One is that obviously, it can fold the proteins in milliseconds instead of taking years of experimental work. So 200 million proteins, this is obviously just a funny background envelope rule of thumb, but you times that by a PhD time of five years, that's a billion years of PhD time, by some measure, that has been done in a year. Of course, they're not all perfectly accurate and et cetera, so we still need both experiments, as well, but that has to be helpful. And then the second way to set digital speed is the speed at which it can get disseminated, the solution can be used by actual biologists, drug discovery people, all of pharma, all of that. Now, normally, as I understand it, you may invent some incredible new technique, biological technique. It's a huge breakthrough. But it still might take a decade for that to flow through into the lab. Everyone has to be trained on it. Maybe new equipment has to be built. People have to think about how to use it. Here, it's just a database. You literally do a keyword lookup and it's there, so for everyone in any country, just like a Google search. So because it's digital in nature, the solution, it can also be disseminated in the way we normally think about apps and other digital devices and services. So I think that's an interesting concept, and maybe other people can think of other examples, but I can't. And I feel like this is going to be more-- I'm really excited about the next few years and seeing this happen more and more in many other branches of science. Another observation I have is I think we're maybe at the dawn of a new era of what I like to call digital biology. So my thinking is and why we even embarked on something like AlphaFold is that I feel like biology, at its most fundamental level, can be thought of as an information processing system in some sense. Obviously, it's a phenomenally complex one and an emergent one, so that brings huge complexity. And I think that's why mathematics, humans doing mathematical models of these things, whilst of course, can be very useful, I think they're going to be difficult for maths to fully describe a cell. 
I'm not sure we're going to ever discover Newton's laws of motion for a cell. It's just too emergent, too complex. And I think that's why AI might be the perfect solution to that type of regime. So I think of maths and how well it describes physics, physics phenomena, I think AI could be the right description language for biology. And I hope AlphaFold is a proof of concept, and I hope that when we look back on it, it's actually not just AlphaFold itself being useful, but it heralds the dawn of a new era of digital biology. And we've actually spun out a new Alphabet company called Isomorphic Labs, a sister company to DeepMind, to try and take these technologies further in biochemistry space and improve AlphaFold further and other things to reimagine the drug discovery process from first principles using AI and computational techniques. Can you do the search mostly in silico and then leave it, leave the wet lab work, to the validation step? That would be the dream. So then, basically building on that and stepping even further back then, what have we done? When I look at the body of work that we've done, what is it that these things have in common? And I think the essence of what our systems are doing is we're effectively trying to find an optimal solution in some kind of enormous combinatorial space. I think that's a very general description of problems, scientific problems but other problems, too. And then what we've done from AlphaGo onwards to AlphaFold is effectively learn a model of that environment, the dynamics of that environment, the manifold of that environment, basically, and the context. And we've learned that from data or from simulated data. Ideally, you have both in many cases. So in games, obviously, we have less-- it's effectively simulated data. And then what you do is you take that model, and hopefully-- it doesn't have to be perfect, that model, but it has to be better than chance, of course. And then you use that model to guide a search process according to some objective function. And if it's a game, then you're trying to win or maximize the score. But if it's something like a biological problem or chemistry problem, maybe you're trying to minimize the free energy, something like that. As long as you can express that objective function, that should work. And I think this basically turns out to be a very general solution. I think this is a general way to think about a lot of problems. I'm not saying every problem can fit into that. Maybe, but a lot of problems. And when you look at things with this lens, you start realizing that that problem can actually be made to fit to this type of solution. And I'll give you an example from drug discovery, which is what we're trying to do at Isomorphic, is-- so this is the tree I showed you earlier, finding the best Go move. So each node here is a go position, and then each edge is a go move. And you're trying to-- intractable space, 10 to the 170, and you're trying to find a near optimal or close to optimal Go move and Go strategy. Well, what happens if we just change those nodes to chemical compounds? And now we're in chemistry space. And you start with some fragment or something at the top, and then you got to decide what you're going to add to that in some sort of generative process. And now you're basically exploring through chemistry space. But obviously you need an underlying model of biochemistry, which is something we're building. 
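The general recipe he is describing-- learn an approximate model of the space, then use that model to guide a search toward some objective-- can be caricatured in a few lines. This is purely a schematic sketch: `candidate_moves` and `learned_score` are hypothetical stand-ins for a real generative step and a learned objective (a policy/value network for Go, an energy- or property-based score for chemistry), and none of this is DeepMind's actual implementation.

```python
def guided_search(start_state, candidate_moves, learned_score, beam_width=8, depth=10):
    """Schematic beam search over a huge combinatorial space.
    candidate_moves(state) -> iterable of next states (the generative step);
    learned_score(state)   -> float, higher is better (the learned objective).
    The learned model prunes the search so only promising branches are expanded."""
    beam = [start_state]
    best_score, best_state = learned_score(start_state), start_state
    for _ in range(depth):
        scored = []
        for state in beam:
            for nxt in candidate_moves(state):
                scored.append((learned_score(nxt), nxt))
        if not scored:
            break
        scored.sort(key=lambda pair: pair[0], reverse=True)  # most promising first
        beam = [state for _, state in scored[:beam_width]]
        if scored[0][0] > best_score:
            best_score, best_state = scored[0]
    return best_state

# Toy usage: "states" are integers, the objective prefers values near 20,
# and the moves are add-one, subtract-one, or double.
print(guided_search(
    start_state=0,
    candidate_moves=lambda x: (x + 1, x - 1, 2 * x),
    learned_score=lambda x: -abs(20 - x),
))  # homes in on 20
```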
And then in theory, you could search for optimal compounds with the optimal properties that you're looking for, like no side effects, solubility, all of these properties-- ADMET properties, they're called in drug discovery-- and do it in the same way, in the same tractable way we did it with Go. And obviously, if this is true, if this works even to some degree, it would be revolutionary, I think, for the drug discovery process. So I'm going to come to the end now. Well, we've had a golden couple of years in some sense for AI, for science. We've been lucky enough to have many Nature and Science papers published in all sorts of domains, so from quantum chemistry, better DFT functionals to approximate solutions to Schrodinger's equation. In pure mathematics, we've solved some important conjectures in topology, collaborating with some brilliant mathematicians. We've been working on fusion with EPFL on their test fusion reactor, controlling the plasma in real time and being able to hold the plasma safely in place for arbitrary amounts of time, being able to predict rainfall many hours ahead, more accurately than current meteorological models. So I just showed-- I could have had several more slides of this, but this is what should happen if you're building a general algorithmic solution to things. You should be able to apply it to a mind-blowing number of different spaces. And in general-- we have a whole approach of how to do this, too. How do we find a problem that's interesting? How do we find a problem that fits what I just mentioned? And then we usually try and find one of the world's best experts in this area to collaborate with and really make sure we're actually answering a problem that they actually care about. It's actually quite difficult to articulate what the right problem is unless you get really deep with a domain expert on that. And so we've been lucky to collaborate with many, many fantastic scientists and mathematicians on all these problems. And then in applications, there's a ton of those too. I mentioned some of those earlier. One of the early things we did at Google was saving about 30% of the cooling energy used to cool the massive data centers at Google. So there's a huge energy saving, and we're starting to explore doing that across actual whole power grids. WaveNet, our realistic text-to-speech system-- everyone uses a variation of that-- was one of the first autoregressive models that we did. Aäron van den Oord led that work at DeepMind way back, and it's now the basis of all modern text-to-speech systems. So pretty much any system that you're talking to, with the very realistic voices that come back, is based on this work. We compressed YouTube videos down by about 4% and improved recommendation systems across the board, really, a huge amount of impact there on applications too. Then of course, there's large models, all the rage right now. We've done a ton of work in this area. Just to pick a few things out, there's Chinchilla, one of our famous results, looking at the scaling laws of large language models and finding-- the important finding-- that they were under-trained, or at least they were under-trained back a couple of years ago. AlphaCode, our system that can compete in competitive programming competitions, programming at the median human competitor level, and that's getting better all the time. Flamingo, our system to describe what's happening in visual images.
And then Gato, probably one of our most general agents out there so far-- you can think of it as an arbitrary token-to-token translator. And it doesn't care what those tokens are. They don't have to be words. They don't have to be images. They can be anything. They could also be actions, for example, like to control a robot. So that same system, Gato, can do pretty much all the things I've shown you earlier on in this talk and more besides, things like controlling robot arms and things like that, all with the same system. We pioneered a lot of stuff in large language models ourselves. We've been testing them for years now. Our biggest systems are called Gopher and Chinchilla, and there are some newer ones. But we've been using them to do a lot of research into the safety of these systems. How can we make sure they do rule adherence? We pioneered areas like RLHF that are now becoming huge. We published the first paper in that back in 2017. Attention, retrieval, memory-- all of these things are important advances that need to be included in our large models, and many people are now experimenting with these things. And my view is currently that scale is incredibly important. Obviously we have to continue on those scaling laws, and adding more compute and more data is clearly helping. Maybe we're starting to see some diminishing returns now on that-- it depends on how you look at it. But my view is that these large pretrained models are probably a necessary, but not sufficient in themselves, component of AGI. That's my current position. But I'm not highly confident on that. It is definitely plausible that they might be sufficient. But my guess, my best guess, would be that they're clearly necessary but not sufficient on their own. And I think some of the areas we need more innovations on are areas like grounding and factuality. Of course, we all know about the hallucinations of these systems, and there's many ways to try and address that-- planning, memory, and reasoning, and quite a few other things besides, I think, and we're working on all those topics very hard now. And our best chatbot is called Sparrow internally. We published a paper on that just over a year ago. We've been improving it since. And really for us, this is a deep exploration into how we can build a dialogue agent to be helpful more of the time, correct more of the time, and harmless. So it's designed to answer questions with responses that are useful and evidence-based, and it uses search and retrieval to partly fact-check and to ground it. And it's pretty cool. The results we're getting on that, which we'll be talking about more soon, are maybe an order of magnitude more accurate and grounded than other systems out there, less than 1% errors even under adversarial testing. So it's pretty cool, but we want-- but at the moment, there's a little bit of a trade-off in the sense that, if you want things to be more accurate and safe, they tend to be a little bit less fun and engaging to interact with. But ideally you want the best of both worlds, and that's what we're working on right now. But it's actually a little bit difficult. It's a bit of a Pareto frontier at the moment. If you make it more rule adherent, more strict, and more polite, and those things, and refuse to answer difficult questions, and stuff like that, obviously it impacts how engaging that's going to be. So that's an interesting dichotomy that we're seeing at the moment and we're trying to solve.
So talking about ethics and responsibility, I think AI has incredible potential to help with humanity's greatest challenges. That's why I've worked on AI my entire career, my entire life. Obviously I think that's going to come about in a big way through advancing science. But of course, these are dual-use technologies. We've always known that from the beginning of DeepMind. We've had an ethics charter from the very start of DeepMind. That's now part of Google's AI principles. We helped to draft those, and those are publicly available. And so I think AI has to be deployed, and used, and built responsibly and safely, and to ensure that it's used for the benefit of everyone. That's obviously, I know, a big part of what HAI and the people here are thinking about, and it's been central and core to us from the start. And people like Fei-Fei will know about that. We've been discussing these things for many, many years. And we continue to try and be a thought leader and to provide thought leadership on these topics of AI strategy, coordination, safety work, risks, ethics, and also engaging with the wider community to get them up to speed with what is sometimes-- definitely is-- a bewilderingly fast pace of innovations and advances. So I'll just finish, then, by saying, on the approach to AGI, my view is that we are approaching an absolutely critical moment in human history. That might sound a bit grand, but I really don't think that is overstating where we are. I think it could be an incredible moment, but it's also a risky moment in human history. And my advice would be, and I get asked about this all the time, that I think we should not move fast and break things. The typical Valley mantra of move fast and break things-- it's obviously extraordinarily effective. It's created the modern Silicon Valley, that type of thinking. And there are other variations on this same sentiment: break things and then ask for forgiveness later. I don't think that's appropriate for something this powerful. Specifically, it's not that you don't move fast. I think you move at the right speed. I'm not saying we move slow, but we definitely don't want to break things that we could have foreseen ahead of time. And also, depending on how powerful the technology is, it may not be possible to fix that afterwards, so these unintended consequences. And I think we're seeing that with social media, more broadly construed. Now we're starting to understand, at the scale that it's at now, there are these unintended consequences that can be pretty bad. And I think we should learn from that and do it differently this time with AI, which I think will be far more impactful than social media has been, which has already been obviously hugely impactful. So what should we do then? Well, I would advocate-- well, fortunately, we already have a way of doing this. It's the scientific method. And I think we should invoke that here on an organizational level, and as a community, and as an industry. And that involves these typical aspects of the scientific method: thoughtful deliberation and foresight where possible, hypothesis generation with rigorous testing-- not out in the wild necessarily, but in carefully controlled conditions with control conditions-- detailed analysis. I don't think there are enough analysis techniques out there. I think that's something we need to do a lot more research on as an industry and community.
Update based on empirical data, ideally with external review, and the aim of all of this is to gain a better understanding of your systems before you deploy them at scale and then find out something isn't right. And of course, we're never going to-- this is not a question about getting things perfect. It's not possible. You do have to do some empirical testing in the wild, but we should be thoughtful about that. We can do better than just dumping it out into the world, and hoping for the best, and then seeing what happens. I think if we use the scientific method, we can get a lot of the way, maybe 90% of the way, to understanding something, and then this just narrows the things that could maybe go wrong or the unforeseen things. So I would say, as we approach AGI-- maybe we're getting pretty close to it now-- we need to treat it with the respect and the precautionary principle that a technology of this potential demands. So my view is we need to be bold and responsible, so both of those things together. So I'll just end by saying, I think if we get this right, and this has always been my hope, I think AGI, from the point of view of a scientist, could be the ultimate general-purpose tool to help us understand the universe, much like maybe the Hubble telescope has helped us understand the cosmos. Thank you. Thank you, Demis. Now, we get to see each other quite often, but I still love, love listening to your talks, both online and offline. So I think especially with lots of students here, I think we have a lot of questions. So we do have microphones, I think, on the side, but I'll start with a couple of questions, and we'll open it to the audience. So you wear many hats. You're founder and CEO of DeepMind. You're really a world thought leader in AI. You've been a gamer all your life. I think in your heart of hearts, you're a scientist. I say this partially because I am, too. So I'm going to ask you a question just to begin with, scientist to scientist. Really now thinking about all your slides, what excites you the most in terms of a scientific question today? I think you're right that, at heart, I'm a scientist, and then there are these other things I do to enable that science. I'm an entrepreneur because I thought that was the fastest way to get the scientific mission going. Even teaming up with Google was to get the compute power so I could push the mission faster. Scientifically, the thing I'm most excited about in applications is things like AlphaFold and more things like that, the drug discovery. I want to make that happen, and I think we can revolutionize human health, and disease, and things like sustainability. All of the projects I've picked are in these key areas. So that's one thing, but maybe in an even more scientific way, what I'm excited about is I've always thought this journey that we're currently on, and making a lot of progress on now with AI, would reveal a lot about the human condition. Because I've always been fascinated-- and one reason I did a neuroscience PhD is I'm fascinated by the brain, our brains. How are we coming up-- and I've been fascinated about that through games because of playing chess when I was a kid, and for the England junior teams, and so on. And I was trying to improve my own thought processes. So that's what got me started thinking about thinking: how are we coming up with these ideas and why is my brain making this mistake? Why can't I stop it from doing that?
And then, of course, you read philosophy, and maybe I read too many science fiction books when I was a kid. And you start thinking about, what are all the big questions, really? Consciousness, creativity, dreaming-- what are these things? And neuroscience can go some way to explain that, and I've taken that quite far, and probably, there are many neuroscientists in the room. But I think it would probably take us another 5,000 years to get to the bottom of all of that. And in some cases, maybe not, because what do we have as a comparator? How can you dissociate intelligence from consciousness and things like that? But actually, I always thought attempting to build AI in this way, a general learning system, where the learning was general, and then seeing-- of course, we can-- it's an engineering science, AI. You build the artifact, and then you can use analysis techniques to pull it apart like we do in natural sciences. And then comparing that to the human mind, I think that will actually ultimately give us these answers to the deepest questions we've wondered about since the classical Greeks or before. And I think it's a tool in many ways as a comparator, but also, it could help us with neuroscience too. Of course, neuroscience is one of the sciences I'm thinking that AI could help with, and a lot of people are using AI for analysis, but we could also do it for models and other things. So that maybe is the thing I'm most excited about: answering these deep questions-- ultimately, the nature of reality-- with our AI systems. So after all these years, going from energy, to drugs, to games, it is still the science, the brain, the intelligence, that's still-- The brain, intelligence, and maybe physics, the nature of reality. What's going on here? I just find it so fascinating, always have. And I think for me-- I think most people-- physics was my favorite subject at school. And I read Steven Weinberg's book, Dreams of a Final Theory. Feynman was my favorite scientist, recent scientist, also a physicist-- Roger Penrose. Roger Penrose, I've had many chats with him about consciousness. And we disagree on most things, actually, but it's always fun talking to him. But then I just thought, actually, it would be better to build a tool that could help us maybe understand all of these big questions, physics questions, the nature of reality, ultimately, and on the way have this incredible engineering project, as well. So to me it was just the most fascinating project one could ever do. We almost became labmates. As a physics undergrad, I got into Poggio's lab, but I went to his student's lab at Caltech. So almost-- Oh, we really overlap there, OK. We almost became labmates, so physics, neuroscience, we share. Actually, do you think what you're building will eventually be able to generate the set of Newtonian laws of intelligence for us? I think to the extent that there might be Newtonian laws, I don't see why not. But I actually think we'll be able to use it to understand a new set of Newtonian laws for physics, like, what is going on with quantum mechanics, and quantum behavior, and all of these things? I don't see why that wouldn't be tractable at some point with our system. I want to follow up on that, because I was listening to your talk and this incredible engineering feat, as well as mathematical feat, in terms of AI, in terms of scientific discovery. And you are sitting here in the headquarters of human-centered AI, so I have to ask you, what is the human role in scientific discovery going forward?
Yeah, well, this next period, I think, is very exciting. Next period meaning next decade or two, let's say. Hard to say exactly where. I think the AI systems are going to do a lot of the drudgery, pattern matching, searching literature, a lot of this sort of stuff. You could imagine a science assistant language model. We're thinking about and working on these kinds of things, but they have to be a lot more accurate than they are now. Right now they hallucinate-- I'm sure you've all tried it. They just hallucinate plausible-looking paper titles and papers that don't really exist. They sounded convincing. Yeah, they sound very convincing. But once you fix all those problems, it could be pretty amazing. And I think that-- Are you saying that the human roles are like prompt engineers? No, I think coming up with-- it's always the classical thing of actually defining the right question to ask. That's the hardest thing in science. These systems can't do that. That's the invention bit of AlphaGo I was talking about, inventing Go. OK, how are you going to do-- you can come up with a Go move, but it's not-- we've not even got line of sight of how it's going to invent Go, or something as amazing as Go, or as amazing as chess. Now, I don't think it's impossible. So I'm not in the camp of, we need some quantum thing to do it or whatever, as maybe someone like Penrose would argue-- given we're talking about Penrose. So I think a Turing machine can do it. And I often see my role in this way. You've got Penrose in my head now because I've had many interesting debates with him. I see myself as Turing's champion, because I think that Turing machines and classical computers can do a lot more than we thought they could. And I think that is what AlphaFold and AlphaGo are showing. These are massive combinatorial spaces. Proteins are quantum systems right down at that level. Some people claim that it's NP-hard, all of these things. And somehow we have, with AlphaFold, mimicked effectively a quantum system. We've modeled it, and it's tractable. It's in P time-- polynomial time. You can get the answer back in milliseconds. So that's something pretty important there, I think. I'm thinking a lot about, and talking to my theoretical computer science friends about, what does this mean for P versus NP and these kinds of things? And so I think, yeah, there's just a lot of complex questions here that we've got to try and tackle. Well, let's get Penrose out of your head. Yeah. At least Einstein said what you just said. I think in the Nobel museum, there's a quote by Einstein about, much of science is asking the right questions. Yeah, yeah, so I think this has been-- but roughly, my point isn't just the top-level thing. It's deciding what these systems should investigate, how we should investigate them, what kinds of experiments we should do, hypothesis generation. None of that is possible at the moment. But maybe in a number of years' time, we will have more advanced systems that can do that. Then we're in a different stage. But right now, I think that this should be incredibly useful for human scientists, I would think. So a little bit of a switch of topic, because you talked about DeepMind's effort in LLMs. At Stanford, we call them foundation models. It doesn't matter. It's all good. We have a center for research on foundation models that you're aware of. Our NLP faculty like Percy Liang and so on are leading that. One of the missions at Stanford CRFM is pushing for norms.
And you're one of-- I remember back in-- it feels like ages ago, you were talking about all these safety ethics and norms. What do you think the norms should be right now in, as you call it, dangerous moment, and LLMs, and foundation models are leading in terms of that? Yeah, I would say it's a risky moment, and we have to-- it's important, what we do from here, I think, as a community, as an industry, and as a society, I feel like we need-- there's several things. There's much more rigorous red teaming of these systems. We do a lot of that ourselves, but it'd be great to have maybe a nonprofit body that does that or governmental body that does that. HAI or not-- Could be, yeah. Could very well be, exactly. I think we need to understand the systems better, so better analysis tools. Interpretability needs to be advanced. Some of the slide things I mentioned for Sparrow, I think they're all safety features, like rules adherence. So one question is, can you get it to obey the rules that you want? The next question is, even if you can do that, what rules should you put in? But there are a lot of obvious things that should go in that we have, but there can be unintended consequences of that, as well. So yeah, I think there's a lot of-- there needs to be analysis of data curation and the type of data you're putting in. I don't think we've done enough on that. We try and fix it at the output with RLHF, but I think a lot of it, it would be good also if we filtered incorrect things and inappropriate things from the inputs, from the training data. So I think there's so many things to be explored and done. And I think the problem is the capabilities are moving very fast, and some of this other work, there needs to be more focus on some of this other work, safety, and analysis, and those types of things. Do partnerships play a role? Yeah, I think so. I think we should be discussing that. We've offered our latest models to people like the Turing Institute in the UK. And we should talk about-- I think we've also reached out to people here. We've got to figure out the right way to interact like that where maybe access to models can be given to external partners or collaborators, and they can independently investigate these things before they're put out into the wild. It's quite hard to do that, because the problem is that at the moment, a lot of this work is very manual, in terms of testing out all the edge cases of effectively an emergent system. It's quite difficult to get that unless you have millions of people using it, but then it's already out there. So maybe we need to also come up with better automated testing for these things, too. Right. I started my question asking you to wear your scientist hat. I'm going to end my part of the question asking you to wear the hat of a world-leading AI thought leader. You really talked about this risky moment a couple of times, one or a couple of times. Yeah. Going forward, the next, just say 10 years-ish-- it's hard to imagine the next week-- pace of AI right now, what is a couple of most important thing you think, as a society in this AI revolution, we collectively should be working on that, even the mighty DeepMind cannot work on yourself? Mm-hmm, well, look, I think there are-- there's many complicated things that I think society needs to debate and discuss. For example, the values these system should have, the rules you want-- as I discussed earlier, what rules do you want them to follow? What things should they be allowed to be used for? 
What do we want to deploy them for? Some of them are technical questions, but they're also societal questions. And I think there are quite reasonable but different answers to those things. Different cultures, different countries would give different answers to those things. I think there are issues to do with-- it's mostly in the social media realm at the moment, but I think it's going to impact this too: if you ask a question about something and there's a political answer, left or right, what is the right answer that you should give there? Should you give the answer that the user wants to hear, even if maybe it's not accurate? But then, what is accuracy? So these are debates that go beyond AI but I think will be maybe amplified by AI. So I think it's even more urgent that we try and find the right answers to that as a society, a global society. And that seems quite difficult because it's intermixed with the geopolitical situation, and political situations, and that's hot. Maybe that's part of the reason we're in this situation. So there's a lot we can talk about, but I do want to give the Q&A time for-- especially students, how about we start with a student? And then there are microphones on the side. Just go up to-- walk up to the microphone. I would really love to start with a student question. Honor system-- you look young, like a student. Hi, thank you for the talk and the insightful discussions. I'm a PhD student who works on the intersection of robotics and AI. I'm aware that DeepMind also has some efforts that push the frontier of robotics. So I'm curious about your view of the future of robotics and what people at DeepMind are doing to push us towards that future, thanks. Yeah, great question. My view-- and this isn't the view of everyone at DeepMind. There are different views on robotics. Some people think embodiment is going to be critical to AGI itself, so one has to do that. I'm of the view that it's going to be an unbelievably important application of the general AI systems. I'm not sure the fastest path to get to AGI is to insist that it's embodied. So for us, we see it as very important for two reasons. One is, I think it's going to be a really important industrial application area with huge implications and impact. It's also used in our fundamental research as a grounding problem. So we're usually off in some simulation, games, or science, or whatever it is. And sometimes it's good to just ground yourself on a real robot arm or something like that to see if your simulation work really holds. And it also pushes on really interesting things like low-data regimes and transfer learning, things like that-- I think it's the best setup for that. So we use it as a challenge task quite a lot. And the kinds of work we're doing-- we're doing a lot of different types of work, from locomotion, to robot arms, to soccer teams trying to learn coordination. So it's quite a wide range of work that we do. Thank you. Let's do this. We alternate on both sides, so you and then you, so we'll go that way. And you've got a lot of questions? Yes. This is your time. OK, I'll try and answer faster. Yeah, thank you, Demis, for your talk. As a student, a PhD student here, and a scientist, I'm concerned about a technical monoculture where increasingly large resources are devoted to an increasingly small number of future directions. At least that's the perception.
What are your thoughts on our roles as scientists, and your role as a scientist, and a funder of scientists in this problem? Yeah, great question, as well. Look, I agree with that in the sense that, that is the direction things are going in, and that's where the money-- because it's been so successful. It's monopolizing a lot of the, I guess, scientists, and money, and talent. DeepMind has always been an incredibly broad church of research. I think we've historically been-- we're quite large, but we've also been the broadest in terms of, from neuroscience ideas to obviously deep learning, reinforcement learning, all of these things. So we have many, and we have theorists, as well. And then on top of that, we have a whole science team. So we've always had multidisciplinary-- it's ver multidisciplinary, what we do. But even there, we've had to slightly change our emphasis to put more emphasis on the large models and the pretrained models, and especially in terms of compute. If you look at the compute resources, it's just the way it is. And as I said, I'm of the opinion that they are necessary. So I think one has to do that work. I think that's clear now after several generations. They're not just going to simply asymptote. I think if you saw GPT-1, or 2, or one of our early language models, maybe you think, oh maybe it's still just memorizing or something. But I think we've definitely gone past that point. So one has to adapt-- I think the only scientific thing to do is to explore that to its limit. But while still, at least the DeepMind, the resources I'm in control of, exploring-- maybe more than half the organization is still exploring other innovations that might be needed, whether that's memory, planning, some of the other things I mentioned. And I'm 50/50 whether that's going to be needed, and whether the large will have enough on their own. And by 50/50, that's not a precise number. It just means I don't know. You really don't know. And so when you're in that much uncertainty, my view is you've got to press hard on both those sides, exploiting what you know and exploring the new things. But I think as an industry, we're probably over in-- probably, that isn't going on everywhere else. I think mostly, they're 90% or maybe even 100% all in on the current techniques. But we're more like 50/50. Do you know what you're going to do in PhD now after his answer? Just kidding. OK. Hi, there, hi. First of all, thank you so much for coming to give the talk, definitely the highlight of my day so far. And second of all, I wanted to pose a more meta-level question. And so you mentioned during the talk that a lot of research is learning how to ask the right question. And so what I wanted to do is reflect that back to you and ask you, considering that DeepMind is one of the leading places where research is done on AI, how do you approach asking the right question? What is your general approach to research to get to effective conclusions, to do the groundbreaking work that you've done? Yeah, look, I spent many, many years, I guess, intuitively trying to hone that ability. And then more recently, I think I hinted at it in the talk, which is, now we've done this several times. We can-- I've really boiled down as to what it is that we're good at, what is it that we've actually done. 
And then I've trained myself to be quite a broad scientist, to be able to dive deep, dip into something, and then try and understand, maybe from an expert in that domain, what their problem is, and then remap that to problems that I know. It won't be an exact mapping, but it's close enough. And then the difference is you can make an assessment about how likely that is to work. And that's served us very well. Picking a problem in the sweet spot-- I often call it the s-curve problem. And games are great for this, because you want to pick a problem that's not too easy, because then it's going to be trivial and not stretch your general algorithms. But if it's too hard, you won't be able to perceive if you're making any progress at all. So you want to always be in that sweet spot of the s-curve, and you can do that with something like games but also science, by keeping on picking harder and harder things as your understanding of what your systems are capable of grows. So that's one of the big things I've used. Another thing-- I have many tricks, but maybe I should write an op-ed or something like that at some point. But another thing I've used in the past that's been good is if you surround yourself with lots of smart people and multidisciplinary people, and then you present a problem to that group, often the speed and quality of the ideas that they brainstorm-- I've found that quite a good measure of whether it's the right time to tackle that problem. So if it's flowing easily, and there's a lot-- it doesn't matter if the ideas are actually good or not in the end, but if they're plausibly good, plausibly interesting ones, and they're pretty easy to come up with, and there are lots from lots of people, that usually suggests to me that there's a lot of progress potential there. I found that to be quite a good measure. Maybe other people already know that. I don't know, but that's what I've found that's quite useful. Thank you so much. Hello, yeah, thank you so much for your talk. I'm also a PhD student here. I think when I see DeepMind's work, it's really fascinating to see how you're really pioneering the whole AI for science field, and you spoke of many different applications. It also seems like, when you spoke about the common motif across all of scientific research-- this search problem-- it feels like it's really pointing at a space for a base foundation model for scientific research in general, and it seems like your work at DeepMind really motivates that. So is this something that your group has thought about? Yeah. That looks like-- and I think you mentioned an AI research assistant, but I think maybe it's a bit more than just literature search, yeah. I agree. I agree. It would be more than just literature search, to have the ultimate one. I think that is definitely a very promising route. And I think our work-- I think you've correctly identified our work does point in that direction, and we're exploring that. We call our models base models. Basically, they're foundation models. And I think there is potential for building a general model that would be good across scientific space, not just do literature but maybe do some experimental things, possibly suggest experiments, this kind of thing. I think that is very likely to work. So yeah, I would encourage that direction. Hey, I'm also a PhD student, and this question is related to my own experiences. But how do you know when it's time to double down or give up? So you put out AlphaFold 1.
You get 60%, and the next competition is in two years from now. You have a decision to make, and you guys really doubled down on that decision. You rewrote everything. It worked out great. Yeah. But are there ever times when it doesn't work out as well? And what is your mental framework around assessing that decision? Yes, that's a great question, and we do have a lot of heuristics for that. So with AlphaFold, it is case by case, obviously. You have to see why you've hit a brick wall. And one of the things I use for that is what I mentioned earlier. You get a different set of people in the room to look at that problem and look at what you've just done. How many new ideas is it, and what's the flow of that? It's not just how many. It's the quality, and how hard was it to do that? That is a good-- so there was a lot of ideas around an AlphaFold 2 in that particular case from a different set of people. So we actually had to change the team quite a lot to get a different approach and a different technique. But there are other things too. We've been many times through this dark valley of-- most of research is like that. You do something super hard, it's going well, and then it's some big problem. And this is pretty normal, and once you've been through it a few times, you-- first of all, not to panic. Secondly, to look for orthogonal-- this is where I think multidisciplinary can help. You look for orthogonal signs that you might be on the right track. So one thing I did with AlphaFold is one reason we picked AlphaFold and I started there after AlphaGo, was that-- do you remember, there was this Foldit game? So I don't know if some of you know, in protein folding, it's like a citizen science game where they-- I think it's the Baker lab. They turned protein folding into a puzzle game. So it was literally a 3D puzzle game when you bent a protein's backbone and things like that. I came across it when I was studying at MIT in about 2009, 2010. I think it'd just come out then. And amazingly, they got-- wasn't that fun a game, but there's quite a few gamers who wanted to do science with their gaming skills. And maybe 10,000 people played it, and some of them got really good. And they actually discovered two or three real protein structures, and they were published in Nature, I think. And I saw that, and that made me-- I was thinking, well, that's incredible because someone who's not an expert biologist has used their intuition to make some, sounds counterintuitive moves that look energetically wrong but that actually would end up being right. And what I thought was what gave me confidence with AlphaFold 2 specifically is we'd already mimicked the intuition in AlphaGo of, in my view, some of the world's top pattern matches. Lee Sedol, he's played Go since he was two. He didn't go to normal school. He went to Go school. And he's played-- his whole mind is Go. He's an incredible guy, and it's his whole universe. And yet we were able to mimic his level of intuition, let's say, somehow with our AlphaGo system. So I thought if we could do that and then amateur gamers were able to solve using this tool with their presumably pattern matching, presumably what was going on proteins, then that must be tractable too. So although that's not a definitive reason to continue, but that was one reason. And then the other reason was that somehow, physics does solve this seemingly intractable problem. We live. We exist. The proteins get folded in our bodies all the time. 
Sometimes they get misfolded in disease, but they're folded. So in theory, that should then be tractable, if you believe a bunch of things about classical systems and quantum systems, but that's out of scope. So I used these heuristics to try and give you confidence to double down. Thank you very much. Great, we have entered the fast question and fast answer phase. Yes, sorry, I'm really good at fast answering. Great questions, though. Hello, I'm a graduate student, and I'm also involved in AI research. And my question is, should technologists and scientists-- those investigating and researching these challenging problems-- bring in the perspectives and ideas of people from marginalized communities, particularly those who do not come from a scientific background? What infrastructure do you imagine being created to accomplish this? Yeah, I think it's a really important question, and we think about this all the time. And when I say multidisciplinary, I don't just mean scientists. We have, actually, ethicists, social scientists, philosophers at DeepMind, and also, we work externally with those types of people. So we try to bring many perspectives into our work when we design things and deploy things. And we pay a lot of attention to that. And I think the community itself could do better, and we could also still improve. But it's very much at the forefront of our minds to get as many inputs as possible into the designs of these systems, and how they get deployed, and especially from the types of people that it might affect. I think that's a very key thing that we should be doing as part of the thoughtful deliberation that I was mentioning in the scientific method. And the understanding part is to understand the consequences of what you're doing-- butterfly effects, if you like-- not just the intent that you had but also the unintended consequences, and to do better there, too. I think that's all part of making sure it's good for everyone in society. I do also want to add that here in America, part of the effort is democratizing resources for AI research. So Stanford and HAI have been the leading lobbyists for this bill called the National AI Research Resource. It's an AI, compute, and data platform that we're asking the federal government to set up. And one of the key clauses, or asks, in that is reaching out to traditionally underrepresented communities. So it's a major policy effort that Stanford is leading. Again, we have an increasing number of people asking questions, so you do need to keep your questions brief. Hi, Demis, I'm an undergrad here. A big takeaway I learned from your talk is the long-term perspective that DeepMind takes to its research. And along that line, what were some of the core founding philosophies that you helped cultivate at DeepMind in its early days to enable it to continuously innovate at the frontier for as long as it has? Yeah, there are so many. It would take a whole talk to go through that. But I think there were some techniques we bet on early, and we were very rigorous about that. So learning systems, not expert systems-- we made that decision early. That wasn't obvious in 2010. And that led us to reinforcement learning and deep learning as being core to what we do. Using inspiration from neuroscience and things like memory replay, episodic memory, these kinds of things-- that was also core. Understanding that compute would become a huge factor, and we knew about GPUs and all those sorts of things from gaming. So this was all different threads that we backed early.
And then the idea of using simulations and games as a test platform rather than-- we could have chosen robotics or something like that, which I think would have made our progress much slower. And I think a lot of people who were doing that kind of work back then, even at MIT, were working on robotics platforms. And I spent some time with them. What I realized was that they were spending all their time fixing the servo motors and the hardware and almost had no time left over to work on the software, which I thought was the problem for AI. So that was another reason to choose games. And obviously, also, I knew lots of games engineers and brought them on board early. So that was-- and I knew which games to choose and all of those things. So that was easy for me. But it also made strategic sense, I would say. And so those are all things that we stuck true to. Also, to be honest with you, starting in London and being in the UK-- that's where I'm based and where I did most of my studying-- I just knew there was a lot of untapped talent in Europe, in the UK. And I also thought-- and I think this goes to geographical diversity-- although we're part of Alphabet, we're still very much a UK-based company and European in our outlook. And I think it's important that in the global debate about AI, of course, there's China, and there's the US, but there's also-- I think we represent Europe at the forefront of this technology and at the top table of what's going on. So I feel that responsibility too, and that was something I was recommended not to do. You've got to move to Silicon Valley-- everyone was saying that, all our early funders and others-- but I resisted for various reasons. But I think that was one of the reasons, and I'm pleased that we bring that perspective, as well. Thank you. Really, really, really inspiring and great to see what you're contributing to the world of science and the world of AI and students. Great questions today. Thank you. Yeah, really good questions. Thanks, everyone.
AI_LLM_Stanford_CS229
Stanford_CS229M_Lecture_9_Covering_number_approach_Dudley_Theorem.txt
OK, I guess let's get started. This is working, right? Yeah. So I guess where we ended up last time was-- you view the function class F in some sense as equivalent to a set Q, right? So if you have a function class F, you can define this Q to be the set of vectors of this form, basically the output vector, which is a vector in Rn. And here f is ranging over the class F. So in some sense, from the Rademacher complexity perspective, these two objects are not very different. So the empirical Rademacher complexity of F only depends on Q. And also, we have talked about the case where you have a finite Q, a finite F. Sometimes, actually, even if you have an infinite F, you can have a finite Q in some cases, but that's not very typical. But in this case, what you can show is that you can have a Rademacher complexity bound. This is the so-called Massart Lemma. So you're saying that if your Q satisfies that-- this is at the end of the last lecture. So suppose for every vector in Q we have that the norm of the vector, normalized by 1 over square root of n, is less than M. Then we know that this quantity, which is essentially the Rademacher complexity of F, is bounded by the square root of 2 times M squared times log of the size of Q, over n. And if you translate this back to the function class, then you know that if F satisfies that for every f in F, this f is bounded on average by M, right? You can view this as the average size of f, but it's the quadratic mean of f rather than the plain mean. And then you have that the Rademacher complexity of this function class F is bounded by the square root of 2M squared log of the size of F, over n. So this time, we want to deal with the case where you don't have a finite hypothesis class, all right? So if you have an infinite hypothesis class, an infinite Q or F, then what do you do? And what we're going to do is we're going to do a discretization, but now we are discretizing in the Q space, or the output space of F. So before, I think, in one of the previous lectures, we discretized in the parameter space. And now, we are going to discretize in this more fundamental space, the output space. Because, as we kind of argued, the output space is what's really fundamentally important. The parameterization is just something that influences the output space, but if you have the same output space with different parameterizations, actually, the function classes are not different. So the parameterization is not the most fundamental thing here. So what we're going to do is we're going to discretize the output space. And we still have this idea of epsilon. This concept of epsilon cover. So now, we are going to cover the output space Q, the output space of F, by the so-called epsilon cover. Let's recall the definition of epsilon cover-- so recall that the definition was that C is an epsilon cover of Q. Now, I'm talking about an epsilon cover of Q, but I just changed the variable. I think before, we called it an epsilon cover of some other set. So with respect to some metric, rho, for any vector in Q, there exists a vector in C that covers it. And by covers it, it means that-- such that the distance between these vectors is less than epsilon. And let me also define the so-called covering number, which is the quantity we are going to use very frequently. So the covering number of-- OK, there are several arguments. One is the target radius epsilon, and also the set Q, and the metric rho. This is defined to be the minimum size of an epsilon cover of Q with respect to rho.
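For reference, here are the two facts just recalled, written compactly; this is only a transcription of the statements above, with M the bound on the normalized norms.

\[
\hat{R}_S(F) \;=\; \mathbb{E}_{\sigma}\Big[\sup_{v\in Q}\tfrac{1}{n}\langle v,\sigma\rangle\Big] \;\le\; \sqrt{\frac{2M^2\log|Q|}{n}} \qquad \text{if } \tfrac{1}{\sqrt{n}}\|v\|_2 \le M \text{ for all } v\in Q,
\]
\[
N(\epsilon, Q, \rho) \;=\; \min\big\{\,|C| \;:\; C \text{ is an } \epsilon\text{-cover of } Q \text{ with respect to } \rho \,\big\}.
\]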
Right, so this is the minimum possible size of the covering. Sorry. There's a-- in some sense, you can use this covering number in actually two ways. One way is you talk about the covering of Q and the other way, you can talk about the covering of F, right? So even though I think the fundamental thing is about the Q, I think in the literature, if you read the paper, then, in most cases, people talk about covering of the function F, at least in many papers. So we're going to use that language, but they are essentially the same. So basically, let's first clarify. So if you do this for the covering of F, then it's the same thing. So if you have epsilon cover of the function class F, you just view F as a function class. So then it's saying that it satisfies that for every f in capital F, there exist f prime, such that rho f f prime is less than absolute. So it's just literally the same thing. And also, we're going to choose the rho to be the same for Q and F. So basically, what we're going to do is that we're going to choose rho between two vectors, wherein in the Q perspective, you choose this to be 1 over square root n times the L2 distance. Recall that both v and v prime are dimension-- eigenvector in space Rn. So this is basically our-- sorry. There's no square. So basically, this is a normalized version of the L2 distance. The reason we normalized 1 over square root n is just because this is more consistent. The normalization fundamentally doesn't matter, first of all. So whatever normalization you choose, it doesn't change the essence. And the reason why we choose a normalization here is just simply for consistency with the function space view, where you have our two functions. We would define our rho to be-- suppose you have two functions, f and f prime, and what's the distance between them? Recall that we only restrict our function on the finite set of points, z1 up to z10. So the typical definition of the distance would just be the L2 distance on the set of points. So it's just something like you look at the average difference between these two functions on zi's and then you take the quadratic average, and then you take basically the quadratic average of the difference between f and f prime on a set of zi's. And you can see that these are exactly the same rho, just a view-- you can view them in either the function space or you can view it in the vector space. And typically, people write this rho as rho2 Pn. So the reason-- I guess for those of you who are not familiar with, just to think of it an arbitrary kind of symbol to indicate this. But for those of you who are a little bit more familiar with some of these functionalities-- so I think the idea is that Pn-- this is the empirical distribution. Basically, a uniform over 1z up to zn. And L2 of Pn means that you have a L2 metric defined on this empirical distribution-- this uniform distribution on the sphere. But if you don't know where this come from, like no-- this is just a-- let's just treat it as an abstract symbol just because-- I'm going to use this symbol several times just for formality, but it really just means this. OK, so with this view, basically, as we have said, right, so you have F corresponds to Q and a function f corresponds to this vector fz1 up to fzn in Q. And is a one to one correspondence. Also, the rho corresponds to each other. So you can, in some sense, write this trivial kind of correspondence. If you look at the function space view with the metric rho, then the cover number is the same as you would output. 
In the output space, the vector space, and you use the metric, normalize L2 norm. And one of the reason why we normalize n by something that depends on n just because you have n dimension. And n is something that's changing. So in some sense, it makes sense to normalize by that. Because if you have a changing vector with changing dimension, sometimes it's hard to compare different cases. So that's why you want to have a norm that doesn't depend on dimensionality. And from now on, we're going to write the function space view that notation. We are going to write in the F notation, but in my mind, I'm always thinking about output space because that's just a vector space, which is much easier to think about. OK, and also, the formal kind of theorem will be stated in the function space, but-- by the way I proved it, I'm going to change to the Q just to make it more kind of explicit. And here, the theorem that kind of deal with-- in some sense, this is a kind of like trivial discretization. What we're going to do is that we're going to first discuss this and then have a more advanced discretization, which is called chaining. So the trivial version is the following, which is, in some sense, basically the same as-- like in similar, the same as what we have done in Lecture 3, but here, we are doing the function space. So let F be a family of functions from some space z to minus 1 and 1. So we assume these functions to be bounded between minus 1 and 1. And then for every epsilon larger than zero, you can show the following. So the Rademacher complexity is less than epsilon plus-- let me write it down and then interpret it. --log of the cover number with the radius epsilon over n. And we're going to show how to prove this, and when you show how to prove-- when we prove it, you will see that this is, in some sense, the discretization error. And this is, in some sense, from the Rademacher complexity of the finite epsilon cover. So we'll see this more clearly in the proof. So in some sense, the general idea is that you approximate. So the proof, the general idea is that you approximate F by an epsilon cover and-- maybe that's-- let's call it C, and then-- maybe let's not give it a name-- by epsilon cover. And then when you have the epsilon cover-- for the epsilon cover, you have a Rademacher complexity bound, and then you pay something because of the discretization or the approximation. OK. And when we prove it, as I said, I tend to kind of change it to the vector space view just because then you don't need all of those kind of function or jargon about function space. Let C be an epsilon cover of Q. Q is the output space. Well, Q is the same thing, right? So then-- let's say this is the size which is equal to the minimum covering number, right, which is just the same as we claimed of the function class. And now-- OK, now if you look at the Rademacher complexity of the function, as we claim that this is, in some sense, the same as the complexity of the output set-- and now, what you do is you say, I'm going to approximate v by the nearby point in the cover, right? So suppose you have the set Q and I have a vector v and I know that v is covered by something, right? You have an epsilon cover like this. You know that this point v is covered by some of this point, v prime, in the set C, right? Every point C-- recall that every point C cover a certain family of points, right? They can cover its neighbors in some radius. And you know that every point can be covered by some vector in C. 
And the vector v can be covered by v prime, let's say. So then you know that v and v prime, the distance is less than epsilon, and then you can approximate. So for every v and v prime in C-- and you know that distance is less than epsilon. And also, you can write v sigma, in some sense, just trivially as v prime sigma plus v minus v prime sigma, right? Maybe let's call this the z. So it's v prime sigma plus z times sigma, right? And what you know is that z is small because the distance-- well, so you know z and this distance. Recall that we are using a scaled L2 norm. So this is less than epsilon. This is what we know. Then what we know that z times sigma, you can use the-- I think this is one which-- this is Cauchy-Schwarz right, so the inner part of two vectors is less than the norm of the two vectors, the 2 norm of the two vectors. So this is less than square root n times epsilon times the norm of the sigma, which is n times epsilon, right? So basically, we know that this error term is less than epsilon by doing this. And then-- so now, we can go back to the Rademacher complexity. First, use this-- so this is just the less than expectation using a few things, right? So that's the epsilon, right? Because z, in a product with sigma, is less than epsilon. And this epsilon can go outside of all of those things because epsilon is a constant. So then you get plus epsilon. And here, what's the range of v prime? So v prime always has to be in C, right? There's no way we can-- this is our definition of v prime. v prime is the cover in C. So then I guess this is equality. And then this one, you can use the Massart Lemma. This is the complexity of the set C, the cover set C. Using Massart Lemma, you get square root 2log C over n plus epsilon. And we are done, right? C has this size. So this is just the square root 2log N, epsilon, F, L2Pn, over n plus epsilon. OK? So pretty simple. And any questions so far? OK. So now, let's talk about stronger theorem. And this is, in my opinion, a pretty deep theorem because, at least-- for me, I don't have much intuition about it, but, hopefully, after I show the proof, it's intuitive, but it's something non-trivial. And generally, this type of technique is called chaining, and there could be multiple ways to do this kind of chaining in different situations. So here, I'm-- here, the particular theorem is called Dudley theorem. Dudley theorem. So the theorem is saying that-- so let F be a family of functions from Z to R. So here, actually, I relaxed this event because this theorem is more general. It can work for even in functions that are not bounded. So the Rademacher complexity is bound by the following. Let me write it down. It doesn't look very intuitive in the beginning, but I will explain. So it's an integral. So the variable is epsilon. So you are integrating a function of epsilon from zero to infinity and you look at the covering number for different epsilons and you divided by n. So the integrand is square root of the log of the cover number over square root n. And the first time, it's not even clear whether this is a stronger theorem than before because it's not trivial to compare with the previous one. But actually, you can compare it if you do some work. So probably-- what I'm going to do is I'm going to show the proof and then I'm going to interpret this. Because I think from the proof, it's pretty obvious that you have a kind of a stronger statement. But if you just compare the form, it's not that trivial to compare. 
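Since the two bounds are said to be hard to compare at a glance, here they are side by side, written out from the statements above; the constant 12 in Dudley's bound is the one that comes out of the proof given later.

\[
\text{(simple discretization)}\qquad \hat{R}_S(F) \;\le\; \epsilon \;+\; \sqrt{\frac{2\,\log N(\epsilon, F, L_2(P_n))}{n}} \qquad \text{for every } \epsilon > 0,
\]
\[
\text{(Dudley)}\qquad \hat{R}_S(F) \;\le\; 12 \int_0^{\infty} \sqrt{\frac{\log N(\epsilon, F, L_2(P_n))}{n}}\; d\epsilon .
\]

Roughly, Dudley replaces the single choice of epsilon, and the additive epsilon error, by an accumulation over all scales, which is the sense in which it refines the first bound.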
But from the proof, you can see that this is-- the proof technique is the extension of the previous proof technique, and you should kind of like-- it's pretty obvious that you should expect a stronger theorem. And then later, I'm going to compare them and also interpret this because this form by itself is still somewhat kind of hard to use, right? How do we know whether I can integrate something good out of this, right? So I'm going to give you several cases where you can integrate a good number out of this integration. So that's the fun. All right. So now, let's dive into the proof. So how do we prove this and what's intuition? So let's start with the intuition. The intuition is that-- this is actually probably one of the pretty technical proof in this course. So intuition is that you have this-- I'm thinking about whether I should draw a single figure. I've drawn a lot of figures on my lecture notes, but I think it's going to be challenging for the scribe notetakers to produce all of them in the notes. So I'm thinking if I should draw one. Yeah, maybe I'll draw multiple and let the scribe notetakers to figure out how to merge them if they want. So the intuition is-- let me draw this again. So you have this set Q, and what we have done was that you create a cover, an epsilon cover, right? It covers this, and every center is one point in C, and you want all of these balls to cover your set, right? And what we have done was that you have a vector v here and you say that I'm going to approximate v by v prime plus the distance. So basically, you approximate v by v prime plus the difference z. So this is all fine. The problem is that how do you-- so you have this formula. Let me just write again. So the tricky thing is that, how do you deal with this error, v times sigma, right? So what we did before was that we have a very brute force inequality saying that this is less than 2 norm of z times 2 norm of sigma over-- and when this can happen, this can happen only if z is perfectly correlated with sigma, which just cannot happen always, right? Because z is a vector, which is a difference between v and v prime. It could be correlated with sigma if your ball is-- so by the way, this ball is like-- I draw it like a ball, but this could be of different shape, right? Because if every-- everything is not really a ball, right? So suppose this is really just a-- including ball, then everything will become too trivial for us. So Q is a set and there is some metric defined on it, and this metric is potentially somewhat complicated, which we don't really know. The metric is-- sorry. Sorry, the metric is trivial, but the set itself could be complicated because you don't really know what a set looks like, right? It's the image of a function on some set of vectors, on a set of points, right? So this set is-- these are all balls, but the set itself could be kind of like weirdly shaped. So that's why this z may not always be correlated with sigma. So in the worst case, it can, but not always possible. So basically, the question is that, can we strengthen this inequality here? Why this has to be worst case? So if you think about this, right, what is the-- if you think about it, what is the sup expectation of the-- so basically, what you really care about. What you care about is the following. So you can take this-- so let me just write it down. Let me do it a little bit slowly so that-- so you care-- So you do this inequality. 
You first say that this is less than the expectation of the sup of the first term plus the expectation of the sup of the second term. This is because-- I guess we have claimed that expectation of sup A plus B is always less than expectation of sup of A plus expectation of sup of B. All right. So the first thing you can do is as follow. And then you care about this. And before-- as I said, we have a very worst case inequality for the inner product, but, actually, this point itself may not be that worst case, right? Because here, z is, in some sense, in this ball around v prime, right? So we have this ball v prime here, which is the ball, and the z is in this ball. So if this ball is not like a-- and sometimes this z is in the-- and you can create this-- you can make this cover of a certain shape so that it is in this ball. And sometimes this is the ball intersect with Q. If it's really a ball, I think you-- the worst case inequality is tight. But actually, you are intersecting this ball with the Q, and Q could be weirdly shaped. So if you look at this, then this one could still be possibly small because if this ball intersect with this Q, it's of a small complexity, right? So basically, the idea is that what you do is that-- for the first inner thing, you just do the log of the covering number. But for the second thing, you do another round of discretization. Because you don't want to say that z can be worst case. I want to say that z probably cannot be worst case. z have some structure. So I'm going to discretize it again. Sorry. I need-- how do I turn this off? OK. Wait, why am I-- why I'm having this? Sorry. And that. I'm not using my address, right? So everyone select-- everyone in Zoom meeting can hear me right? Could hear me right. Sorry, I forgot to take off the headphones. My bad. Class, can you hear me? OK, I hope you can hear me. OK, thank you. Thanks. OK, cool. Sorry. My bad. I forgot. Take off it. OK, so basically, the kind of idea is that this is still a Rademacher complexity of v v prime intersect with Q. And you can do another round of discretization for this set so that you get an even tighter inequality. So that's kind of the rough idea. So basically, you have nested layers of discretization to make it stronger and stronger. So that's the basic idea. And now, let's do a li-- let's make it a little more formal so that I can define something and explaining some more. So let's say we have-- so I guess maybe just to briefly draw this, another vessel. So what you do is you do another discretization of this yellow ball, and then you say that this z cannot be worst case. It has to be something like z can be approximated by this plus this. I'm not sure whether this is 2. I would draw a bigger figure, but basically, this point z is not-- you approximated z by its nearest neighbor, again, and then you look at the difference. And then you approximate difference by something else. I will draw this more formally in a moment. To do that, let's define epsilon0 as the sup over fsup over i. Max over i FCi. So this is just the maximum possible value that you can output. And you can see that this is just some preparation, which is almost trivial. So you can see that this is always bigger than this because each entry epsilon is bigger than each of the FCis, and this is equal to square root 1 over n the 2 norm of v square for every v and Q. So basically, epsilon0 is an upper bound of the entire set. 
You don't have to talk about any epsilon bigger than this because everything is in this ball of epsilon0. And now, I'm going to create this nested-- or this-- technically, it's not nested, but I think I've always thought about it as a nested family of discretizations, but technically, you don't really need a nested part. So let me draw this. OK, let me define things first. So I'm going to consider epsilon1 to be half times epsilon0. Epsilon2 is a quarter of epsilon0. So in general, epsilon j is 2 to the minus j, epsilon0. So these are the kind of like the radius from the epsilon cover. And let Cj be an epsilon j cover of the set Q. Of Q. So I have this family of epsilon covers. And intuitively, you can think of-- kind of think of epsilon j plus 1 cover-- Cj is nestled in C. Cj plus 1 is nested in Cj in some sense, but I don't-- but this is not necessary for the proof. And also, it's not the entire-- but not necessary. I just like to think like that just to give me some kind of intuition. So what's really happening-- if I draw this, what's really happening is that I have this set Q. Maybe I shouldn't draw a ball so that it's kind of more interesting. So this is the set Q. And there is ep-- biggest thing, which is the epsilon0, which covers everything. Let me now draw that. So if you use the epsilon 0 cover, then it's trivial because epsilon 0, you can just use a trivial cover to cover-- you just need one point to cover everything. So you just need the origin. Let's now draw that. Let's draw something, maybe epsilon 1. So what happens is that you use-- you have a very coarse-grained cover at the beginning. Something like this. All right. So this is your epsilon1. And I have a point. This is something really hard to draw. So I need to follow my notes exactly so that I don't have any issues with it. So I guess suppose I have a point, let's say, here. This is my point v that I want to approximate by the cover. So suppose this is the origin. So before, what I do is that-- maybe let's draw this v somewhere else. Sorry. Let me just draw v here. So this is-- let's call this u1. This is the closest point in the first level of the epsilon cover. So before, I just use u1 to approximate v. And now, what I'm going to do is that I'm going to first use u1 and then I consider the second level of the epsilon cover, which is of a smaller size, which is of the-- actually a size half. Whether this is-- by the way, this number 2 is nothing magical. You can make it something like 3 or 4. It's just for convenience. You just need a constant vector smaller at every level. So you have this, for example, right? So this is the second level. And what you do is you say I'm going to take the point u2 here. u2 is the nearest neighbor of v in the second level. And then what I'm going to do is I'm going to approximate v by u1 plus this vector between u2 and u1. Then I have a small distance between v and u2 right? And then I'm going to have the third level. Wait, I only draw three levels. So suppose in the third level what happens is that you have another thing here, and this is u3. And then you also consider this vector between u2 and u3. So basically, you approximate v by this red vector plus the green vector plus the yellow vector, and then you continue to do this until you get to v. So any questions so far? So basically, I'm going to approximate v by u1 plus u2 minus u1 plus u3 minus u2 until infinity. Because I'm going to have an infinite number of these coverings. 
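Collecting the construction being drawn here in one place (this only restates the definitions above; u0 is taken to be 0, as will be set in a moment):

\[
\epsilon_j = 2^{-j}\epsilon_0, \qquad C_j \text{ an } \epsilon_j\text{-cover of } Q, \qquad u_j = \text{the nearest point to } v \text{ in } C_j,\quad \tfrac{1}{\sqrt n}\|v-u_j\|_2 \le \epsilon_j,
\]
\[
v \;=\; \sum_{j=1}^{\infty} (u_j - u_{j-1}), \qquad u_0 = 0 .
\]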
It doesn't have to be exactly an infinite number of them. If you have fun doing enough, approximation can stop, but for simplicity, let's just say we have an infinite sequence of epsilon covers and you can do this. So more formally, what I'm going to do is that for every v and Q, like-- I guess this is just a formal definition. So its nearest neighbor-- nearest-- neighboring Cj, right? So that's why, by definition, because uj has to be covered that Cj-- so that's why-- so v has to be covered by Cj. So that's why v can-- the distance between v and uj is less than epsilon j, right? So in other words, 1 over square root n has to be Cj2 norm is less than epsilon j. And also, because epsilon j goes to 0, we know that uj goes to v eventually as j goes to infinity. As j goes to infinity. So that's why you can write this nested sum-- you can write this as u1 plus u2 minus u1 plus u3 minus u2, so and so forth, right? And if you like u0 to be 0, then you can write this as u1 minus u0 plus u2 minus u1. This is just to make it look nicer so that we can write it as sum. So this is sum of ui minus u1 minus 1 from 1 to infinity. And you can check the convergence if you really want, just so because I have this. So if you look at the partial sum, then it's um minus u0. And because um is partial sum, this goes to v as m goes to infinity. So this could cover it. And technically, you-- actually, if you really want to have a proof, you don't actually have to use infinite sum. I'm just trying to make it simpler. So you can just say I'm going to choose an m that is big enough, and then I have some small error at the end. That's also fine. OK, so-- and once we do this, and what-- as we kind of planned, so we have this kind of better and better approximation, right? So now, let's deal with each of these vector. So what we have is that expectation of the sup. This becomes the expectation times sum of ui minus ui minus 1 sigma. That's from 1 to infinity, right? And then you switch the sum with the sup. So you get expectation less than expectation sup. Maybe this takes-- sum. Sup. Right. And then this is equals to sum expectation of sup. OK, so-- and here, the constraint is that ui needs to be in Ci and u1 minus 1 needs to be Ci minus 1, right? So in some sense, this quantity-- each of this quantity is kind of like some kind of Rademacher complexity, but this is the finite class because u1 and u1 minus 1, no, are not arbitrary vectors. They have to come from a finite set. And then we just have to deal with-- we just have to see what's the Rademacher complexity of this set and then continue with the derivation. OK, so let's try to deal with each of these term. So we are trying to use Massart Lemma, all right? So Massart Lemma is dealing with-- is trying to deal with these kind of terms for finite set. So first of all, Ci-- so the combination of u-- u1 and u1 minus 1 are the variables, right? So they are in Ci times Ci minus 1. And Ci times Ci minus 1. The size is equal to the size of Ci times Ci minus 1. So this is something you can compute. Let's simplify that in a moment. And you can also have a-- By the way, for the Massart Lemma-- let's just go back to real quick. So I think we had this in the beginning. So for Massart Lemma, you have to check how large-- you have to check how large the vectors are, right? So this m doesn't matter, right? If all the vectors are super big, then your complexity will be big. And if all the vectors are extremely small, then your complexity will be small. 
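For reference, the form of Massart's lemma being applied to each term is the same lemma as before, with the finite set now being the pairwise differences between consecutive covers:

\[
\mathbb{E}_{\sigma}\Big[\sup_{u_i\in C_i,\; u_{i-1}\in C_{i-1}} \tfrac{1}{n}\langle u_i - u_{i-1}, \sigma\rangle\Big] \;\le\; M_i\,\sqrt{\frac{2\log\big(|C_i|\,|C_{i-1}|\big)}{n}}, \qquad M_i = \sup \tfrac{1}{\sqrt n}\|u_i - u_{i-1}\|_2 .
\]

The next step is exactly the computation of this M_i.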
So let's check what's the value of M here. So the value of M is the bound on the 2 norm of the vectors. The normalized 2 norm of the vectors, right? So basically, we need to check 1 over square root n times ui minus ui minus 2, 2 norm. How large this can be eventually. So this can be-- if you upper bound this, this is at most-- you just do a trivial triangle inequality and-- wait, sorry. My bad, my bad. You cannot do a triangle inequality. That would defeat the purpose. So what I'm going to do is that-- yeah, sorry. So you are going to do a slight more careful triangle inequality because you want to say u1 and u1 minus 1 are close, right? But u1 and u1 minus 1 themselves, each of them could be big if you look at this, right? So u1 and u2, as vectors, they are probably big, but their differences is small and smaller and smaller as you have bigger and bigger i's. And how do you control that? I think there is actually easy way. You just write this as u1 minus v because you can always compare with v. That's something you know, right? And then you use triangle inequality because both ui minus v is somewhat small and ui minus 1 v is somewhat small. And how small they are? So you know that the first term, 1 over square root n times ui minus v, this is less than epsilon i. And the first term-- and the second term is less than epsilon i minus 1. This is just by the definition of the epsilon cover, right? And epsilon i is 2 to the minus i times epsilon 0. So epsilon i is smaller than epsilon i minus 1. So by a factor of 2, this is actually 3 times epsilon i just because epsilon i minus 1 is 2 times bigger than epsilon i, OK? So with all of this preparation, we can apply the Massart Lemma. Then what you have is that sup is less than-- so we got square root 2 times the M square. This is the M, right? So you have M square, which is 3 epsilon i square, and then times the log of the covering number. And the covering number-- sorry, the log of the size of the set. The size of the set is Ci times Ci minus 1. And over n. All right. And let's try to simplify this a little bit. So you get 3 epsilon i outside over square root n, and you have square root log Ci plus log Ci minus 1. And times 2. And then you say that this is less than-- so Ci is probably bigger than-- is always bigger than Ci minus 1 because Ci is a more fine-grained epsilon cover discretization than Ci minus 1. So if you have more fine-grained, you should have more set, more points. This is just by definition. So you get-- you just bound Ci minus 1 by Ci. So you get 6 epsilon i over square root n times square root log Ci because we just replaced this term by log Ci. OK. So the constant doesn't really matter that much anyways. All right. So now, let's see what we have achieved, right? So we have found each of these term, and let's go back to this formula. So we just plug it in. So what we got is that-- so we got expectation sup 1 over n, v sigma. This is our target, which is less than the sum of this over i. i from 1 to infinity. 6 epsilon i over square root n, square root log Ci. So this is still not really an integration, right? So how do you turn this into integration, right? But this is kind of like a flavor of integration in which we have a lot of terms, right? In some sense. So how do we see this, right? So there are-- I think the way I see this is the following. So what was the-- maybe let me just write down what's the final formula you want to achieve. 
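To keep track of where the proof stands before the final formula is written down, the pieces above combine, using M_i at most 3 epsilon_i and the fact that |C_{i-1}| is at most |C_i|, into:

\[
\hat{R}_S(F) \;=\; \mathbb{E}_{\sigma}\Big[\sup_{v\in Q}\tfrac{1}{n}\langle v,\sigma\rangle\Big] \;\le\; \sum_{i=1}^{\infty} \frac{6\,\epsilon_i}{\sqrt n}\,\sqrt{\log|C_i|}\,.
\]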
The final formula I want to achieve is-- recall that this is something like 12 times 1 over square root n times square root log N epsilon F, L2Pn, d epsilon. And this is the final formula we want to achieve. By the way, in some sense, actually, you don't really have to try this integration if you just care about applying this to some cases because this is enough for you to apply it. It's just like a disintegration. It looks so nice and it's kind of like-- it's a good interface in a mathematical sense. So how do we see these two are almost the same. And the way I see it is the following. So if you think about what this integration is-- so they have epsilon on this dimension. And let's plot the covering number. The covering number will be the lo-- this is the log covering number. Log-- maybe let's say square root-- square root log N, epsilon, F, L2Pn. So you plot this. And at some point, this covering number will be 1. And so the log of the covering number will be 0. This is just because when a radius is big enough, you can just use one thing to cover everything. So the log covering number can be 1, right? In particular, in our notation, when you read it, it's epsilon0, then your covering number becomes 1 and the log covering number becomes 0. So square root of that is also 0. And this covering number will go to infinity eventually as epsilon goes to 0 because you need more and more points in covers as you have more and more fine-grained covers. And you have this sequence of points. You have, for example, epsilon1 is here, right? So which is half of epsilon0. But let's look at epsilon i. So let me try to draw this exactly as my notes. So suppose this is epsilon i and-- if this is epsilon i, then half of it will be epsilon i plus 1 by definition. So this is epsilon i plus 1. And what is this value? This is the corresponding covering number, right? So this is square root log Ci, right? That's our notation, right? And now, let's compare these two quantities, right, this quality and this quality. This is what we are trying to link, right? So the quantity below is just the area under this curve, right? That's the definition. OK, I guess I'm ignoring the 1 over square root n, which is easy, right? So if you don't have the 1 over square root n or the 12 or the-- so this integral is just the error on the curve. And now, what is the finite sum? And if you look at the finite sum, then this epsilon-- if you look at this thing, the area of this triangle-- sorry, this is not a triangle. This is a rectangle. My bad. So the area of this rectangle then is-- the area-- the mass of the-- this is epsilon i minus epsilon i plus 1 times the height, which is square root log Ci. And epsilon i and epsilon i plus 1 are just the-- this is just the-- let me see what's the best-- I think this is epsilon i over 2 times log of Ci. And this is just the multiple of this term, right? So basically, the finite sum is, in some sense, just dealing with all of these rectangles and the integral is doing everything. So that's why the sum of the rectangles will be smaller than the integrals up to a constant factor. So basically, what you know is that you know epsilon i over 2 square root log Ci, because this is the-- this area is less than the integral of this part, right? This is less than the integral of this part. It's less than integral from epsilon i plus 1 to epsilon i and square root log N, epsilon, L2Pn, d epsilon, OK? And with this, we can just take sum over all i's. 
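The geometric fact from the picture, written as an inequality: taking each C_i to be a minimum-size cover, so that |C_i| = N(epsilon_i, F, L2(Pn)), each term of the sum is the area of a rectangle of width epsilon_i - epsilon_{i+1} = epsilon_i / 2 sitting under the non-increasing curve, hence

\[
\frac{\epsilon_i}{2}\,\sqrt{\log|C_i|} \;=\; (\epsilon_i - \epsilon_{i+1})\,\sqrt{\log N(\epsilon_i, F, L_2(P_n))} \;\le\; \int_{\epsilon_{i+1}}^{\epsilon_i} \sqrt{\log N(\epsilon, F, L_2(P_n))}\; d\epsilon .
\]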
So you have sum of epsilon i over 2 square root log Ci is less than sum over i from 1 to infinity. Right. Sorry, this is not right. And now, you can see that each of these integral has the matching upper bound, lower bound. So you get-- this is from 0 to epsilon0 square root log N, epsilon, L2Pn, d epsilon. And the upper bound is still not infinity, but that doesn't really matter because this really literally equals to e. You can extend it to infinity because everything beyond epsilon 0-- bigger than epsilon 0 will be 0. So that's what we have, OK? So now, if you just multiply-- this is the essential thing, right? So with this inequality, you just link these two quantities. So I think you just have to work out the constant. I think there's a constant 2 there. So that's why you get from 6 to 12. So with this, you get expectation. And this is actually the Rademacher complexity of F is equal to this. It's less than 6 epsilon i over square root n, square root log Ci, and this is less than 12 times this integral. d epsilon. OK? So any questions? OK, great. OK, and I think from this figure, you can also kind of see that, in some sense, the essence here is that how fast epsilon goes to infinity. That's what's important here, right? Because if epsilon goes to infinity very fast, then your integration problem could be even infinity. So then you don't have any bound. And if this thing goes to infinity, like here, slower, then you get a variable. [INAUDIBLE] Yeah, so the question is-- I chose this level by a factor of 2, right? So it's 2 to the minus j times epsilon0. So what if I change that 2 to 3 or something like that? I never tried that myself, but I think very, very likely, you would just get a similar constant. Maybe you get better than 12, maybe you get worse than 12, but anyway, this constant is not that important for us. But I think it's very unlikely you can gain anything by-- that you can gain anything more than a constant. OK, so now, let's try to interpret this theorem a little bit more because, in some sense, this theorem-- this form is kind of hard to use, right? Because if I got a log covering number bound and-- OK, what's the intended use of this theorem? So the way to use this theorem is that you get some log covering number bound and then you do this integral. You get the Rademacher complexity, right? But it's kind of hard to use it because before you get the-- so you don't know how does this translation work explicitly. But, actually, the translation from the covering number to the Rademacher complexity is actually relatively simple, as I'll show. So this integration doesn't have-- you would see like a-- actually, you don't even need to-- I never compute this integral myself after I've done it once, in some sense. So here's how it works. So for the-- yeah, so basically, the question is, when this is finite square? So when this thing is finite? And when it's finite, what's the dependency, right? So and so forth. So when is finite? So I think there are several cases. Let's do kind of-- this is a case study. So of course, it depends on what the log covering number will be. So we have-- I have a few cases here. So a, if the covering number is exponential in epsilon phase of the form and is of the form-- something like 1 over epsilon to the power-- some power R. R is just a variable. Like a placeholder. So suppose it's exponential epsilon in the sense that 1 over epsilon is in a base. Then you can do this computation. 
You get-- and this equals to something like 1 over square root n, square root R log 1 over epsilon, right? So because you take a log covering number. And you will see that-- and you take the epsilon and you will see that the log 1 over epsilon integrate to some constant from zero to infinity. Oh, by the way, I think-- maybe I should say-- I forgot to take-- yes. There's a small thing-- I don't want to always integrate from zero to infinity because sometimes it's actually annoying. So-- I forgot to mention this. So let's assume the F is bounded between, let's say, minus 1 and 1 so that this integral only have to do-- you only have to do it between 0 and infinity. So epsilon0, let's say, is 1. Something like this. Or maybe a constant. So we only have to integrate between 0 to 1, let's say. Right. So this is just because you have a bounded function. After that, the log covering number become zero. And now, let's integrate. OK, going back to this, we integrate between 0 and 1 this log 1 over epsilon. And you will see that this log 1 over epsilon actually integrates to something of strictly a constant. So this will be just O. Maybe let's write one notation. I should write this like-- this is just on the order of square root R over n because the epsilon integrates to a constant. The dependency on epsilon is called, OK? So that's good. So you got this thing. And let's look at another case. So this is actually a case where the dependency on epsilon is very mild because it's not 1 over epsilon. So that's why it's pretty mild. But sometimes you never get-- you don't get this. So if N, epsilon F, L2Pn is of the form a to R over epsilon-- so now, the epsilon is the exponent, but-- yeah, and it's 1 over epsilon in the exponent. And in this case, if you look at this 1 over square root n integral log covering number, this will be 1 over square root n, R over epsilon-- square root R over epsilon log a, right? And this is still-- so d epsilon. And still, square root 1 over epsilon is integrated to 1. d epsilon. This is a constant. A universal constant you can compare. I guess we don't care about constants. So it's some constant. And this equals to-- so basically, if you ignore the log factor, this equals to square root R over n. So still of this form. I still got-- And now, it comes to the tricky thing, which is kind of like it's-- kind of on the boundary between what we can do and what we cannot do. So if this is of the form something like a to the R over epsilon squared-- so now, I have a even worse dependency on epsilon, right? So it's an exponent and also, it's 1 over epsilon square. So it goes to infinity as epsilon goes to 0 faster. And in this, case this becomes a little tricky because-- and but, actually, this is the most common case, right? If you really do the work, I don't really expect you to prove any generalization among yourself that often, but if you really do the work in many of the cases, you get this kind of covering number. And this is actually tricky because if you integrate the thing, what you get is that-- you take the log of this and you take square root. So what you get is-- maybe that's-- so you get square root R times 1 over epsilon times square root log a. Right, so this is d epsilon. So this is square root R, square root log a, square root n, 1 over epsilon d epsilon. And this thing is actually infinity. I guess this is because the-- how do you see this? Like 1 over epsilon integrates to log epsilon. And then log epsilon0 is infinity. 
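Summarizing the case analysis, with cases (a) and (b) as just computed and case (c) as just observed; F is assumed bounded, so the integral only runs over [0, 1]:

\[
\text{(a)}\;\; N(\epsilon) = (1/\epsilon)^{R}: \quad \tfrac{1}{\sqrt n}\int_0^1 \sqrt{R\log(1/\epsilon)}\; d\epsilon \;=\; O\!\big(\sqrt{R/n}\big),
\]
\[
\text{(b)}\;\; N(\epsilon) = a^{R/\epsilon}: \quad \tfrac{1}{\sqrt n}\int_0^1 \sqrt{\tfrac{R\log a}{\epsilon}}\; d\epsilon \;=\; O\!\big(\sqrt{R\log a\,/\,n}\big),
\]
\[
\text{(c)}\;\; N(\epsilon) = a^{R/\epsilon^2}: \quad \tfrac{1}{\sqrt n}\int_0^1 \frac{\sqrt{R\log a}}{\epsilon}\; d\epsilon \;=\; \infty .
\]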
So this goes to infinity too fast at zero so that it integrates to infinity. So this is actually-- no, this is not good news for us, right? So how do we-- but, actually, this can be fixed. How do we fix this? So this can be fixed by our improved version of Dudley's theorem. And this improved version, in some sense-- I'm not going to prove it, but it actually is kind of almost expected. So what you can show is that-- so basically, the idea is that you don't do the discretization all the way to 0. You do it until a certain level so that you can pay the worst case bound. So basically, you do it only to the level of alpha. So you bound it by this. I think there's a-- actually, I'm not sure whether there's a 2 here, but let me have the 2 here anyway so that-- for safety. I mean, the constant is not very important. So basically, when you do the integration, you are not integrating from zero to infinity. You are integrating from alpha to infinity. And below alpha, you just pay this alpha bound. So in some sense, you can see that this is an interpolation of the two bounds we had, right? So recall that one bound we had was-- this would first integrate brute forcing where we pay this epsilon, right? So this is just because we have a worst case bound for the epsilon error. And the other case we had is integration. We don't pay anything in the worst case. And this is basically saying that you do this nested or iterative discretization into alpha and then you pay the small error alpha at the end. And why this is useful? This is useful because it kind of avoid this tricky regime where you are very, very close to zero. So what you can do is that-- I think this theorem, you can probably prove it yourself. So I'm not going to show the proof. And if you use it-- so you can take alpha to be something like 1 over poly n. So something super, super small, right? And so that it's 4alpha. So that 4alpha is negligible. And so that here, on the right-hand side, you don't integrate to infinity. So basically, 4alpha is negligible. And the question is, what does the integration look like? So this is something like inverse poly n, which is negligible, and then you have square root R, square root log a, square root n, and integrate between and alpha and 1, and you have 1 over epsilon d epsilon. And unfortunately, this one, even though it goes to infinity as epsilon goes to 0-- as alpha go to 0, but this is actually something that depends on alpha very, very weakly. So this is this, right. I'm not sure why this is done that way. You know what my notation means, right? OK. Sometimes I think in different calculus book I see different notations for this. So sometimes I get confused. Now, this thing is really just the-- times-- this is like log 1, which is 0, and minus log alpha. So you got log 1 over alpha. And this is logarithmic in alpha. And the alpha is poly n. So this is logarithmic in n. So this is log n. So basically, eventually, this is still O to the square root R over square root n if you hide all the logarithmic function. OK, so in summary-- so the covering number of the form 1 over epsilon to the R, a to the R over epsilon, a to the R over epsilon squared all lead to some-- all leads to something like a Rademacher complexity bound of this form. And these are probably basically pretty much the only cases I know of that can lead to this. 
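As a quick numerical sanity check of this case analysis (not from the lecture; the constants R = 10 and log a = 1 are arbitrary illustrative choices, and scipy is assumed to be available), one can evaluate the truncated integrals from a cutoff alpha up to 1 and watch how they behave as alpha shrinks:

import numpy as np
from scipy.integrate import quad

R, log_a = 10.0, 1.0  # illustrative constants, not from the lecture

# sqrt(log N(eps)) for the three covering-number growth rates above
case_a = lambda eps: np.sqrt(R * np.log(1.0 / eps))   # N = (1/eps)^R
case_b = lambda eps: np.sqrt(R * log_a / eps)          # N = a^(R/eps)
case_c = lambda eps: np.sqrt(R * log_a) / eps          # N = a^(R/eps^2)

for alpha in [1e-2, 1e-4, 1e-6]:
    ia, _ = quad(case_a, alpha, 1.0)
    ib, _ = quad(case_b, alpha, 1.0)
    ic, _ = quad(case_c, alpha, 1.0)
    print(f"alpha={alpha:.0e}  (a)={ia:7.3f}  (b)={ib:7.3f}  (c)={ic:7.3f}")

# Cases (a) and (b) stabilize as alpha -> 0, while case (c) keeps growing by a
# fixed amount each time alpha shrinks by a factor of 100, i.e. like log(1/alpha).
# That is why the truncated bound 4*alpha + 12*(integral from alpha), with
# alpha = 1/poly(n), still gives roughly sqrt(R/n) up to logarithmic factors.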
For example, suppose hypothetically your covering number is something like a to the R over epsilon cubed. I think this will break because-- so here, if this is epsilon cubed, then after taking the log and the square root, it's going to be 1 over epsilon to the 1.5. And when it's ep-- so maybe let's do a quick heuristic. So suppose this is 1 over epsilon to the 1.5. And, of course, you still have to integrate from alpha so that you try to avoid the blow up, right? But it wouldn't be as effective because 1 over epsilon to the 1.5, the integration of this is, I think, 1 over square root epsilon instead of log epsilon. And I think this will be 1 over square root alpha minus 1. I think it's-- see that? Yeah. I think there's a minus here, I think, right? So it's going to be something like 1 over square root alpha minus 1. Something like this. Maybe there's a half here. Anyway. Maybe there's another constant. I don't know what the constant is, but the problem is that this is not log alpha. This is 1 over square root alpha. And now, you cannot take alpha to be inverse poly n because if it's inverse poly n, you'll pay too much here. So then it's going to be a very tricky balance between this 4alpha term and this term, right? And I think-- at least I'm not aware of any cases where you can balance them in a nice way so that you still get a good bound. I think it's probably not even possible. But on the other hand, for the case when you have this thing, right? This is log 1 over alpha here. So the balance is not tricky. It's kind of like you have a free lunch if you pay a log factor. So it's almost always possible, right? So that's the difference. Right. And actually, most typically, you're going to get some covering number of this form. That's the most typical case. OK, any questions? OK, so now-- I have 15 minutes today. So the rough plan for the rest of the 15 minutes and the next lecture is that we are going to talk about covering number upper bounds for linear models and deep nets, and those will imply Rademacher complexity bounds. And I think today, I'm going to talk about linear models. But for linear models, I'm not going to give you the proof because I think the proof is a little bit too technical. In most of the cases, you wouldn't need to prove it yourself. You just have to invoke it. So basically, I'm just going to state some theorems and tell you that, actually, for linear models, this is almost all done. Like you know everything about it, and I think there are pretty much matching upper and lower bounds. So this is actually from a paper by Tong Zhang in 2012-- sorry, 2002. So he's saying that this is for linear models. So linear models. Yeah, so suppose x1 up to xn in Rd are n data points, and p and q are a so-called conjugate pair. I guess-- I hope that probably you have seen these kinds of things. So Hölder's inequality holds if 1 over p plus 1 over q is equal to 1. And we also assume that p is at least 2 and less than infinity. But in most cases, you can just think of p and q as both being 2. That's the most important case. And assume that the p-norm of xi is less than C for every i. And then let's consider this hypothesis class F indexed by q. So this is the family of linear models where the q-norm of the linear model is bounded by B, right? Recall that we have actually talked about these kinds of models, where p is 2 and q is 2, or maybe p is infinity and q is 1. These kinds of things.
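Written out, the theorem being set up here says roughly the following; the covering bound itself is stated next, and the exact logarithmic factor follows Zhang's paper and is not important for the conversion to a Rademacher bound. With the p-norm of each x_i at most C, 1/p + 1/q = 1, and F_q the linear functions x maps to <w, x> with the q-norm of w at most B,

\[
\log N\big(\epsilon, F_q, L_2(P_n)\big) \;\le\; \Big\lceil \frac{B^2C^2}{\epsilon^2} \Big\rceil \cdot O(\log d),
\]

which has exactly the shape a to the R over epsilon squared with R = B squared C squared, and therefore, by the conversion above,

\[
\hat{R}_S(F_q) \;\lesssim\; \frac{BC}{\sqrt n} \quad \text{up to logarithmic factors.}
\]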
And before, we prove the Rademacher complexity bound. And now, we prove the covering number bound, which will also be for Rademacher complexity bound. And this rho is equal to L2Pn. L2Pn. This is the same thing as we have defined before. And then the log covering number epsilon, Fq-- sorry. Times rho is less than B square, C square over epsilon square. The ceiling doesn't matter. It's just trying to deal with the corner cases where this is 0 or something like that. So log 2 should be plus 1. And when p-- and when p is 2, q is 2, you can strengthen this. You can strengthen this slightly to something like log N, epsilon, F2, rho is less than B square, C square over epsilon square times log 2. I guess the base is also not important because it only change the constant. It's just copied from-- the base of the log doesn't really matter that much. I'm prepping here just for the sake of preciseness. So you can improve the B difference into something that depends on n or d, which doesn't matter that much, at least for our purpose. For other cases, if you care about the bound, it absolutely doesn't depend on e, then this matters. Otherwise, it doesn't matter that much, OK? And the way to remember this is just that this gives the same Rademacher complexity, right? So basically, if you use the discussion above, use the conversion above like we have done-- so this is of the form-- which form? This is of the form-- this thing. a to the R over epsilon square, right? Because-- here, you have a log, right? After taking a log is our epsilon square. And the R is B square C square. So using this conversion, you got that the Rademacher complexity is less than square root R over n, and where R is B square, C square. This is BC over square root n up to logarithmic factor, and this was very similar to-- this is the same thing as we have done before, right? So B was the norm of the classifier and C was the norm of the data. So you get multiplication of them over square root n. There are some small differences in terms of the logarithmic factor, which let's ignore just for simplicity. OK, and you can also show this for multilinear models-- sorry, multivariate. Multivariate linear functions. And I'm showing this just because this will be useful as a building block-- as building block of our future. Because when you have a li-- when you have networks, a multivariate linear model is the building block for a layer of network. And this-- in some sense, there's nothing really intelligent here, but I just have to state it so that I can use it later. So suppose you have-- OK, first, let's have a small definition. So definition. So suppose M is a matrix of this form, is m by n matrix. Let's define the 2 norm, 2, 1 norm. This is not the operative norm. This is just some arbitrary norm. So this is the 2, 1 norm, which is the sum of the 2 norm of the columns. So Mi is of dimension m. And you take the-- so basically, you first take the 2 norm of the column and then you take the 1 norm to group them. Right. And then, in this definition, M transpose 2 1 norm. This is basically the sum of the two norms of rows of n. Here's this definition. And then we're going to use this in the statement. So here is a theorem. The theorem is that if you consider-- here, I'm not going to do a p and q just for simplicity. So you just do the two-norm version. So p and q are both 2. So you consider the multivariate function, which outputs multiple outputs. 
And this W, let's say, of dimension m by d, and let's constrain the W, the 2 to 1 norm of W to be less than B. And, again, let C to be the average of the norm of the data. And then you get log N, epsilon, F, L2Pn is less than C square, B square over epsilon square, ln 2d times m. So it's kind of the same thing. The norm of the parameter times the norm of the data over epsilon square. But the norm of the parameter is measured by this 2 to 1 norm. Oh, sorry, 2 to the norm of double transpose. I think I have a typo here. So what's the 2 to 1 norm of W transpose? As I said, it's the sum of the two norms of the rows of W. So in some sense, there is nothing surprising here. In some sense, you just glue all the dimension-- you just treat all dimensions independently, in some sense. Like, for example, if you think about-- suppose you try W-- let's use a different color. Suppose you read W as W1 transpose up to Wm transpose, where you have m vectors, row vectors, and then Wx is really just-- you multiply W1 transpose with x up to Wm transpose x. So you can view this linear layer as m different linear functions, one-dimensional linear functions, and then the 2 to 1 norm is just the sum of the Wi 2 norm. So in some sense, you just sum-- you take the sum of the complexity measure. Sum of complexity measure of each of the model Wi transpose x. Right, so Wi 2 norm is the complexity measure of the linear function and you take the sum. So the proof is actually just the-- yeah, there's nothing more there. I think I have five minutes. Let me also mention another thing, which is useful for preparation for the deep nets. So this is also related how do we deal with bounding the log covering num-- the covering number. So you can also have the Lipschitz composition. This is a useful tool for us to deal with the covering number. And this is actually-- recall that we had this Talagrand Lemma, right? We had the Talagrand Lemma, which was like a Rademacher complexity, right? So you say something like a Rademacher complex of phi composed with H is less than some Lipschitzness of phi times the Rademacher of H. Something like this. So this was the Lipschitz composition for Rademacher complexity. And it turns out that for log covering number, the Lipschitz composition is even trivial. The Talagrand Lemma, I didn't prove it for you. I just said this is a fact. This is a theorem. And actually, proving it doesn't sound easy, as I mentioned. It's actually sometimes pretty complicated. It's pretty-- I think it's a challenging theorem to prove. And here, the Lipschitz composition becomes trivial for covering number. I think the fundamental intuition of the spirit is the same. It's just, for covering numbers, somehow this becomes super intuitive and explicit. So let me say the Lemma, but yeah, I think I have, so this is almost a trivial thing. So suppose phi is kappa-Lipschitz. And then let's say rho is this L2 norm thing. Then the log covering number of phi epsilon, phi composed with F-- I messed up my order of this argument in my notes for every occurrence after a certain point. So I have to fix it later. So epsilon-- so if you look at the log covering number of the composed function class phi composed with F, this is less than the log covering number of the original one, but you have a different radius or different granularity. So you basically have to cover the original one with epsilon over kappa granularity so that you can turn that into a epsilon cover of the new composed function. 
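The composition lemma just stated, in symbols, with phi applied to the outputs and kappa its Lipschitz constant, next to the Talagrand-style statement it parallels:

\[
\log N\big(\epsilon,\; \varphi\circ F,\; L_2(P_n)\big) \;\le\; \log N\big(\epsilon/\kappa,\; F,\; L_2(P_n)\big),
\]
\[
\text{compare: } \hat{R}_S(\varphi\circ F) \;\le\; \kappa\,\hat{R}_S(F) \quad \text{(Talagrand's lemma).}
\]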
And this is pretty much just trivial because if you just take-- I guess I'll just take epsilon over kappa cover for F. And then-- so suppose this is-- let's call it C. And then phi composed with C is epsilon cover of phi composed with F because for every phi composed with F in this class, you can just first find f prime in C such that this rho f f prime is less than epsilon. So you first find this cover in C, and then you just compose it. So phi composed with f prime claim that this is actually a neighbor of phi composed with F. This is because if you look at the distance between this two thing, this is square root 1 over n, sum of phi of f prime zi minus phi of fzi. And you use the Lipschitzness. So this is less than 1 over n times kappa square. And then because f prime and epsilon over kappa close, so this is kappa times epsilon over kappa epsilon. So we are done. Yeah, so-- OK, I guess that's a good stopping point, and we'll continue next lecture about deep nets. Cool. Any questions? So the Lipschitzness being [INAUDIBLE]?? Yeah, yeah. Yes, so I should-- yes, that's right. The Lipschitz. [INAUDIBLE] Yes, so far, I have actually a one-dimensional function. Phi is a one-dimensional thing. I have output one-dimensional thing, and then you have a 1 to 1, R to R function phi. So there's no metric, but, yes. But if I have outputs of vector and then your phi is a vector to vector function, then you have to make a norm. Everything incompatible. The Lipschitz just has to be the same thing, compatible with the norm. [INAUDIBLE] Yes, yes, we're going to use just L2. OK, OK. Sounds good, OK. I guess see you on Wednesday.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_2023_Lec_19_Model_Interpretability_Editing_Been_Kim.txt
today I'm delighted to introduce us our final guest speaker um Bean Kim um being Kim is a staff research scientist at Google brain if you're really into googleology you know those funny words the beginning like staff sort of says how senior you are um and that means that being's a good research scientist um um so uh I I discovered at lunch today that bean started out um studying mechanical engineering at Seoul national university but she moved on to uh I don't know if it's better things or not but she moved on to computer science and did a PhD um at MIT and there she started working on the interpretability and explainability of machine learning models um I think she'll be talking about some different parts of her work but a theme that she's had in some of her recent work that I find especially appealing as an NLP person is the idea that we should be using higher level human interpretable languages for communication between people and machines and so welcome Bean looking forward to your talk um and go for it thank you thank you thanks for having me it's honored to be here it's the rainiest Stanford I've ever seen last night I got here last night but then I'm I live in Seattle so this is pretty common so I still was able to see the blue sky today I was like this works I really like it here so today I'm going to share some of my dreams chasing my dreams to communicate with machines so if you're in this class you probably agree you don't have to that large language models and generated models are pretty cool they're impressive but you may also agree that they're a little bit frightening not just because they're impressive they're doing really good job but also we're not quite sure where we're going with this technology in 10 years out will we look back and say that technology was net positive or we will say ah that was catastrophic we didn't know that that would happen and ultimately what I would like to do or maybe hopefully what we all want to do is to have this technology benefit us humans I know in 10 years time or maybe well 20 years or earlier he's gonna ask me he's gonna be like Mom did you work on this AI stuff I watched some of your talks and did you know that how this will profoundly change our lives and what did you do about that and I have to answer that question and I really hope that I have some good things to say to him so my initial thought or an instill so or current thought is that if we want our ultimate goal to be benefit Humanity why not directly optimize for it why wait so how can we benefit there's lots of different ways we can benefit but one way we can benefit is to treat this like a colleague you know a colleague who are really good at something it's called it's not perfect but it's good at something enough that you want to learn something from them one difference is though in this case is that this colleague is kind of weird this colleague might have very different values it might has very different experiences in the world it may not care about surviving as much as we do maybe mortality isn't really a thing for this colleague so you have to navigate that in our conversation so what do you do when you first meet somebody there's someone so different what do you do you try to have a conversation to figure out what how do you do what you do how are you solving decades-old protein folding problem how are you so how are you beating the world gold champion so easily What It Seems are you using the same language the science knowledge the language that we use atoms 
molecules or do you think about the world in a very different way and more importantly how can we work together I have a one area that I really want to talk to and it's alphago so alphago beat world of gold Champion Isador in 2016. Isidore is from South Korea I'm from South Korea I watched every single batch it was such a big deal in South Korea and worldwide I hope and in one of the matches alphago played this move called move 37. how many people watched alphago match matches and how many people remember move 37. yeah a few people right and I remember the nine Don commentator who's been like talking a lot throughout the matches suddenly got really quiet and he said hmm that's a very strange move and I knew then that something really interesting has just happened in from my eyes that this is gonna change something the South Fargo has made something that we're gonna remember forever and sure enough this move turned around the game for alphago and leading Alpha go to win one of the matches so go players today continue to analyze this move and still discuss people talk about this is not the move a human would Phantom so the question is how did alphago know this is a good move my dream is to learn something new by communicating the machine with machines and having a conversation and such that Humanity will gain some new angle to our important problems like medicine and Science and many others and this is not just about discovering new things if you think about reward hacking you have to have a meaningful conversation with somebody to truly figure out what their true goal is so in a way solving this problem is a superset of solving AI safety too so how do we have this conversation conversation assumes that we share some common vocabulary between uh that that exchange to exchange meaning and ultimately the knowledge and naturally a representation plays a key role in this conversation on the left and we can visualize this on the left we say what this is a representational space of what humans know on the right what machines know here in left Circle there will be something like this dog is Fluffy and you know what that means because we all share somewhat similar recovery but on the right we have something like move 37 where we humans yet to have a representation for so how do we have this conversation our representation space needs overlap and the more overlap we have the better conversation we're going to have humans are all good at learning new things like here everyone is learning something new so we can expand what we know by learning new Concepts and vocabularies and doing so I believe will help us to build machines that can better align with our values and our goals so this is the talk that I gave if you're curious about some of the work we're doing towards this direction I highly recommend it's a YouTube video I clear keen on half an hour you can fast uh do a best feed but today I'm going to talk more about my hopes and dreams and hopefully at the end of the day your hopes and dream is there so first of all I'm just gonna set the expectation so at the end of this talk we still don't know how the move 37 is made okay sorry that's going to take a while in fact the first part of this talk is going to be about how we move backwards in this progress in in terms of making this progress in our journey and still very very small portion of our entire Journey towards understanding move 37 and of course this journey wouldn't be like a singular path there will be lots of different branches coming in 
— core ideas like the Transformer helped many domains at once, and it will be similar here. In part two I'm going to talk about some of our work on understanding emergent behaviors in reinforcement learning, and all of the techniques I'll describe are in principle applicable to NLP. So, coming back to our dream, move 37. Let's first think about how we might realize this dream. Taking a step back, we have to ask: do we even have tools to estimate what machines know? There has been a lot of development in machine learning over the last decade on tools to understand and estimate that purple circle — what the machine knows. Is that estimate accurate? Unfortunately, a lot of recent research shows there's a huge gap between what machines actually know and what we think they know, and identifying and bridging this gap is important, because these tools form the basis for eventually understanding move 37. So what are these tools? How many people are familiar with saliency maps? A lot — but let me explain anyway. A saliency map is one of the popular interpretability methods. For simplicity, take ImageNet: you have an image of a bird, and the explanation takes the form of the same image, but where each pixel is associated with a number that is supposed to indicate the importance of that pixel for the prediction. One definition of that importance is that the number tells you how the function behaves locally around that pixel. For example, around pixel xj, maybe the function goes up like the yellow curve, maybe it's flat, maybe it goes down like the green curve. If it's flat, maybe that feature is irrelevant to predicting "bird"; if it's going up, maybe it's more important, because as the pixel value increases the function value — the prediction value — goes up. Now let's think about a few reasons why the gap might exist. These are not exhaustive and they overlap a bit, but they're helpful to think about. Maybe our assumptions are wrong: this alien — the machine we trained — works in a perhaps completely different representational space, with very different experiences of the world. We assume it sees the world the way we do — like the gestalt phenomenon, where there are a few dots and humans have a tendency to connect them; maybe machines do that too, maybe not. Maybe our expectations are mismatched: we thought it was doing X, but it was actually doing Y. Or maybe it's beyond us: maybe it's doing something superhuman that humans just can't understand. Let me take a deeper dive into some of our work on this — this is more recent work. So, coming back to the earlier story about saliency maps, we're going to play with some of these methods. In 2018 we stumbled on a phenomenon that was quite shocking. We were actually trying to test something else, and we realized that a trained network and an untrained network produced very similar saliency maps. In other words, random predictions and meaningful predictions were giving me the same explanation. That was puzzling — we thought we had a bug, but it turned out we didn't. The maps really are indistinguishable, qualitatively and quantitatively. So that was shocking.
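To make the definition concrete, here is a minimal sketch of the simplest gradient-based saliency map — vanilla gradients on a toy classifier. This is my own illustration: the model and target class are made up, and methods like Integrated Gradients refine this by averaging gradients along a path from a baseline, but the "importance as local slope" reading is the same.

```python
# Vanilla gradient saliency: importance of pixel x_ij = local slope of the
# predicted class score with respect to that pixel.
import torch
import torch.nn as nn

model = nn.Sequential(            # stand-in for a trained image classifier
    nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 10)
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # hypothetical input
target_class = 2                                        # e.g. "bird"

score = model(image)[0, target_class]
score.backward()                                        # d(score) / d(pixels)

saliency = image.grad.abs().max(dim=1).values           # collapse color channels
print(saliency.shape)                                   # (1, 32, 32) heat map
```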
But then we wondered: maybe this is a one-off case; maybe it still works somehow in practice. So we tested that in a follow-up paper. What if the model has an error — a labeling error, a spurious correlation, out-of-distribution data at test time? If we intentionally insert these bugs, can the explanation tell us there's something wrong with the model? It turns out that's also not quite true. You might think the spurious-correlation case at least should work — another follow-up showed that this is also not the case. We were disappointed, but we still thought: there's no theoretical proof of any of this; these were lab-setting tests where grad students probed the system; maybe there's still some hope. So in more recent work we theoretically prove that some of these very popular methods cannot do better than random, and I'll talk a little about that. (I'm missing one person in the author list on this slide — this is also joint work with Pang Wei.) Let's first talk about our expectations of these tools. The original papers that developed these methods, IG and SHAP, talk about how they can be used for accounting for the contributions of each feature. What that means is: when the tool assigns zero attribution to a pixel, we'd like to conclude that the pixel is unused by the function, which would mean f is insensitive if I perturb that pixel. And this is how it's used in practice — here's a paper published in Nature that used SHAP to figure out the eligibility criteria in a medical trial. What we show in this work is that none of these inferences, natural as they seem, are true. Just because a popular attribution method tells you the attribution is x, you cannot conclude anything about the actual model behavior. So how does the proof work? How many people here do theory proofs? A few — great. I learned a lot about theorem proving from this project too, so let me tell you how we pursued it. First, think about the problem and reformulate it as some other problem we know how to solve. In this case we formulated it as hypothesis testing, because once it's a yes-or-no hypothesis test, there are lots of tools in statistics you can use. What is the hypothesis? I'm a user; I got an attribution value from one of these tools; I form a mental model — "this feature is important," or "this feature is not important." The hypothesis is whether that's true. What we show is that, for whatever hypothesis of this form you might have, you cannot do better than random guessing at validating or invalidating it. Yes, sometimes it's right — but you don't do hypothesis testing if you can't beat random guessing; what's the point? And the result, visually: if you plot true negative rate against true positive rate, the diagonal line is random guessing — the farther above it, the better the method — and the methods we tested, SHAP and IG, all fall on that random-guessing line. That's bad news. But maybe, maybe it still works in practice for some reason — maybe some assumption we made doesn't quite hold there.
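As a concrete rendering of the trained-versus-untrained comparison described above, here is a minimal sketch. It is a toy stand-in, not the original experiments: the "trained" model is just an untrained placeholder, and rank correlation is one of several similarity measures one could use.

```python
# Model-randomization sanity check: if saliency barely changes when the
# learned weights are destroyed, the explanation is not reflecting training.
import copy
import torch
import torch.nn as nn
from scipy.stats import spearmanr

def saliency(model, image, target_class):
    image = image.clone().requires_grad_(True)
    model(image)[0, target_class].backward()
    return image.grad.abs().flatten()

trained = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                        nn.ReLU(), nn.Linear(64, 10))   # pretend this is trained
randomized = copy.deepcopy(trained)
for p in randomized.parameters():
    torch.nn.init.normal_(p, std=0.02)                  # throw away the weights

img = torch.rand(1, 3, 32, 32)
s_trained = saliency(trained, img, target_class=2).detach().numpy()
s_random = saliency(randomized, img, target_class=2).detach().numpy()
corr, _ = spearmanr(s_trained, s_random)
print(f"rank correlation between the two saliency maps: {corr:.2f}")
```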
So does this phenomenon hold in practice? The answer is yes. We now have results on more datasets and bigger models, but here we tested two concrete end tasks that people actually care about and use these methods for: recourse and spurious correlation. Recourse, for those not familiar: you're applying for a loan, and you wonder, if I were older, would I have a higher chance of getting the loan? So you tweak that one feature and see whether the value goes up or down — a very reasonable task that people do all the time, with pretty significant social implications. Both of these concrete end tasks boil down to the hypothesis-testing framework I just described, and both land around the random-guessing line, or worse. So you might say, oh no, this is not good — a lot of people are using these tools. What do we do? We have a very simple idea about this. People like developing complex tools, and I really hope you're not one of those people, because a lot of the time simple methods work — Occam's razor — and simple methods are elegant; there is often a reason they work, and because they're simple you can understand them and they make sense. So let's try that idea here. Your goal is to estimate a function's local shape. The simplest thing you can do is take your point of interest, sample around it, and evaluate the function. If it goes up, maybe the function is going up; if it goes down, maybe it's going down. You basically brute-force it. The question is how many samples you need. The bound involves the number of samples — the more samples, the better the estimate, which makes sense — the resolution you care about in the output (do you care about distinguishing a slope of 0.1 from 0.2, or only a slope of 0 from a slope of 1?), and of course the number of features. So if you're going to draw conclusions from the function's shape: sample. It's easy. So, can we infer model behavior using these popular methods? The answer is no, and this holds in both theory and practice. We're currently working on even bigger models to add more empirical evidence that it really doesn't work — please think twice, and three times, before relying on these methods. Two caveats. First, the sample complexity is model-dependent: if your function is wild, of course you'll need more samples, and characterizing those functions is part of the question. Second, we haven't quite given up, because these methods have pretty good roots in economics and Shapley values. Maybe there is a narrower condition under which they do work — we believe such a condition exists. Once we figure out what it is, then for a given function I could test it and say, yes, I can use SHAP here, yes, I can use IG here, or no, I can't. That would still be very useful. That's ongoing work.
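Here is what the "just sample" baseline looks like in code — a minimal sketch under my own assumptions about the setup: a scalar-valued model f, a point of interest x, and uniform perturbations of one coordinate at a time. More samples buy you finer resolution on the local slope, exactly as the bound above suggests.

```python
# Decide whether feature j matters locally by perturbing it and fitting
# the empirical slope of the model's output against the perturbation.
import numpy as np

def local_slope(f, x, j, num_samples=200, radius=0.1, seed=0):
    """Estimate d f / d x_j around x by randomly perturbing coordinate j."""
    rng = np.random.default_rng(seed)
    deltas = rng.uniform(-radius, radius, size=num_samples)
    xs = np.tile(x, (num_samples, 1))
    xs[:, j] += deltas
    ys = np.array([f(row) for row in xs])
    # least-squares slope of the output change against the perturbation size
    return np.polyfit(deltas, ys - f(x), deg=1)[0]

# toy model: feature 0 matters a lot, feature 1 not at all, feature 2 is nonlinear
f = lambda x: 3.0 * x[0] + 0.0 * x[1] + np.sin(5 * x[2])
x0 = np.array([0.5, 0.5, 0.5])
for j in range(3):
    print(f"feature {j}: estimated local slope ~ {local_slope(f, x0, j):.2f}")
```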
Before I go on to the next part, any questions? [Question: do these findings apply only to computer-vision models, or to any model?] It holds for any model — it's the simplest possible proof, so it applies to any function. Any other questions? [Question:] This relates a lot — it almost seems like for the last couple of years there have been dozens, maybe hundreds, of papers drawing conclusions from Shapley values. Would you guess most of that work is invalid, or might a lot of it be OK because it happens to fall in a condition where it's all right? Two answers to that. My hypothesis-testing result says it's random, so in the optimistic case maybe fifty percent of those papers got lucky. And on the second note: even if SHAP wasn't perfect — even if it was kind of wrong — if it helped the human at the end task, whatever that might be, helped doctors be more efficient, helped identify bugs, and they did the validation correctly with the right controlled testing setup, then I think that's good: they figured out how to make a noisy tool plus a human in the loop work, and that's also good. And I personally really like the SHAP paper — I'm good friends with Scott and I love all his work. It's just that I think we need to narrow down our expectations so that they're better aligned with what the tool can do. All right, I'm going to talk about another piece of work with a similar flavor, now in NLP. This is one of those serendipity papers, like many of the papers we end up writing. Initially Peter came as an intern, and we thought we were going to locate ethical knowledge in large language models, and then maybe edit it to make them a little more ethical. That was the goal, and we thought, oh, there's the ROME paper from David Bau — and I love David's work too — let's use that. That's how this project started. But then we started digging in and implementing ROME, and things didn't quite line up, so we did sanity-check experiment after sanity-check experiment and ended up writing a completely different paper, which is what I'm about to tell you about. ROME, for those who aren't familiar — I'll go into a bit more detail in a moment — is about editing a model. You first locate a piece of knowledge in the model, like "The Space Needle is in Seattle," a factual association; you locate it, and because you can locate it, you can mess with it and edit that fact. That's the whole promise of it — in fact, that's how localization and editing methods are often motivated in the literature. What we show is that this assumption is actually not true, and to be quite honest with you, I still don't fully understand why these are unrelated; I'll talk more about this, because it's a big open question for us and it's pretty active work. A substantial fraction of factual knowledge is stored outside the layers that are identified as holding that knowledge — you'll see this in more detail in a bit — and the correlation between where a fact is located and how well editing at that location works is essentially zero. They have nothing to do with each other. So we thought, maybe it's a problem with the definition of editing — what we mean by editing can mean a lot of different things — so let's think about different ways to edit a fact. We tried a bunch of things with little success: we couldn't find a definition of editing that correlates well with localization methods, in particular with ROME.
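To make the "localization does not predict editing success" claim concrete, here is a sketch of the kind of variance-accounting analysis it rests on. The numbers are entirely synthetic — chosen only to mimic the qualitative finding, not taken from the paper — and the setup (one localization score and one edit score per fact) is my own simplification.

```python
# For each fact: a localization score ("tracing effect"), an editing outcome
# ("rewrite score"), and the layer that was edited. How much of the variance
# in edit success is explained by the tracing effect vs. by the layer choice?
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_facts, n_layers = 1000, 28
layer = rng.integers(0, n_layers, size=n_facts)           # layer edited per fact
tracing_effect = rng.uniform(0, 1, size=n_facts)          # causal-tracing peak
# Hypothetical world matching the qualitative finding: success depends on the
# layer, not on where tracing localized the fact.
rewrite_score = 1.0 / (1.0 + np.abs(layer - 6)) + 0.05 * rng.normal(size=n_facts)

layer_onehot = np.eye(n_layers)[layer]
r2_layer = LinearRegression().fit(layer_onehot, rewrite_score).score(layer_onehot, rewrite_score)
both = np.column_stack([layer_onehot, tracing_effect])
r2_both = LinearRegression().fit(both, rewrite_score).score(both, rewrite_score)

print(f"R^2 from layer alone: {r2_layer:.3f}")
print(f"extra R^2 from adding the tracing effect: {r2_both - r2_layer:.3f}")
```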
So let's talk a little bit about how ROME works, super briefly — there are a lot of details left out here, but you'll get the rough idea. ROME (Meng et al., 2022) has what's called the causal tracing algorithm, and it works like this. You run the model on a particular dataset, the CounterFact dataset, which consists of tuples of subject, relation, and object — "The Space Needle is located in Seattle." You do one clean run of "The Space Needle is in Seattle" and store every single module's activations. Then, in a second run, which they call the corrupted run, you add noise to the subject tokens — "The Space Needle." Then you intervene at every single module by copying its stored clean value into the corrupted run, as if noise had never been added as far as that module is concerned. It's a typical intervention: everything else being equal, if I restore just this one module, what is the probability of the right answer — the probability of "Seattle" given the corrupted prompt and the intervention? At the end you get a grid over layers and tokens, where each cell has a score: if I intervene on that token at that layer, how likely am I to recover the right answer? If I recover the right answer, that's the module that supposedly stores the knowledge. It's a really reasonable algorithm — I couldn't find a technical flaw in it, and I quite like it, actually. But when we started looking at this with the same model they used, GPT-J, we realized something. ROME edits at layer 6, because across this dataset that was supposedly the best layer — where most of the factual knowledge is stored — and they showed strong editing success there. But the truth looks like the graph on the right. The red line is layer 6; the blue region is the multiple layers used by their extension paper, MEMIT; and the black bars are the histogram of where the knowledge actually peaked if you test every single layer. Not many facts fall into that region — in fact, every fact peaks at a different layer, and for a lot of facts layer 6 wasn't the best layer, even though editing at layer 6 really does work (we were able to replicate those results). So we thought: to find this ethical knowledge, how do we find the best layer to edit? That's where we started. But then we took a step back and decided to first sanity-check the assumption that the tracing effect — the localization — implies better editing results. That's when everything started falling apart. Let's define some metrics. Edit success is the rewrite score, the same score the ROME paper uses. The tracing effect is the localization score — the recovered probability from the previous slide. When we plotted the relationship between tracing effect and rewrite score, the red line is the perfect correlation that the editing method implies — that was our assumption, and it's why we do localization to begin with. The actual relationship is the yellow line: close to zero, and on this particular dataset actually slightly negative — not just uncorrelated but, if anything, anti-correlated.
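For readers who want the mechanics, here is a schematic toy version of the causal-tracing loop just described. A stack of per-token MLPs with a crude token-mixing step stands in for a real transformer, and the readout, noise scale, and token positions are all invented; the point is only to show the clean-run / corrupted-run / patch structure, not to reproduce the original implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n_tokens, n_layers = 16, 5, 6
layers = nn.ModuleList([nn.Sequential(nn.Linear(d, d), nn.ReLU()) for _ in range(n_layers)])
readout = nn.Linear(d, 1)                         # stand-in for P(correct answer | prompt)
embeds = torch.randn(n_tokens, d)                 # token embeddings for the prompt
subject_positions = [1, 2]                        # tokens of "The Space Needle" (made up)

def block(h, ell):
    h = layers[ell](h)                            # per-token MLP
    return h + 0.3 * h.mean(dim=0, keepdim=True)  # crude token mixing, so info can flow

def run(h, patch=None):
    """Forward pass; optionally overwrite one (layer, token) state with a clean copy."""
    states = []
    for ell in range(n_layers):
        h = block(h, ell)
        if patch is not None and patch[0] == ell:
            h = h.clone()
            h[patch[1]] = patch[2]
        states.append(h)
    return torch.sigmoid(readout(h[-1])).item(), states

with torch.no_grad():
    p_clean, clean_states = run(embeds)                        # 1) clean run, store states
    corrupted = embeds.clone()
    corrupted[subject_positions] += 3.0 * torch.randn(len(subject_positions), d)
    p_corrupt, _ = run(corrupted)                              # 2) corrupted run
    effect = torch.zeros(n_layers, n_tokens)
    for ell in range(n_layers):                                # 3) patch every (layer, token)
        for t in range(n_tokens):
            p_patched, _ = run(corrupted, patch=(ell, t, clean_states[ell][t]))
            effect[ell, t] = p_patched - p_corrupt
print(effect)  # larger entries suggest that (layer, token) carries the recoverable signal
```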
And we didn't stop there. We were so puzzled that we did this for every single layer and computed R-squared values: how much does the choice of layer, versus the tracing effect, explain the variance in edit success? If you're not familiar with R-squared, think of it as the importance of a factor. It turns out the layer explains 0.94 of the variance and the tracing effect explains 0.016. We were really puzzled — scratching our heads about why this is true — but it was true across layers. We tried all sorts of things: a different model, a different dataset, and it was all roughly the same. At that point we contacted David and started talking it through, and they acknowledged that this is a real phenomenon. [Question: apart from the layer, the other way localization could help is telling you which token to look at — the correct subject token?] Yes — in this graph, the remaining benefit of localization would only be identifying the correct subject token. But the layer is by far the biggest factor; that's really the only thing that matters if you care about editing. In fact, don't worry about localization at all — it's extra wasted compute and carbon. So that was our conclusion. But then we thought: maybe the particular definition of editing used in ROME is the issue; maybe there exists a definition of editing that correlates much better with localization — because there must be; I'm still puzzled that this isn't correlated. So we tried a bunch of different definitions of edits: you might inject an error, you might reverse the tracing, you might want to erase the fact, you might want to amplify the fact — maybe one of these would work. The chart down here shows R-squared values for four different edit definitions, and this wasn't just the case for ROME and MEMIT — it was also the case for fine-tuning-based methods. The difference between the blue and orange bars represents how much the tracing effect adds to the R-squared, and as you can see it's negligible; they're all the same. Fact forcing, the last one, has a little bit of hope, but compared to the impact of the layer choice it's still negligible. So at that point we said, OK, we can't locate the ethical knowledge this way; we had to switch direction, and we ended up doing a much more in-depth analysis of this instead. In summary: does localization help editing? No — the relationship is essentially zero for this editing method, which as far as I know is state of the art, and for the CounterFact data. Are there other editing definitions that correlate better? We couldn't find one — and if somebody can answer this question for me, that would be very satisfying, because I still feel there must be something we're missing. What I think causal tracing does do is reveal where factual information surfaces as the Transformer passes forward; but what we found here is that that has nothing to do with editing success — those two things are different, and we have to resolve that somehow. But a lot of insights
that they found in their paper is still useful like the early to mid-range NLP representation last token there they represent the factual something we didn't know before but it is important not to validate localization methods using the editing method now we know and maybe not to motivate editing methods using via localization those are the two things now we know that we shouldn't do because we couldn't find a relationship any questions on this one before I move on to the next one you're not shocked by this I am shocked by this I'm still so puzzled like it should be there should be something I don't know all right so in summary of this first part we talked about why there the Gap might exist and what she what machines know versus what we think machines now there are three hypothesis there are three ideas assumptions are wrong maybe our expectations are wrong maybe it's beyond us there's a good quote that says good at good artist still I think good researchers doubt we have to be really suspicious of everything that we do and that's maybe the biggest lesson that I've learned over many years that once you like your results so much that's a bad sign like come back like go home have a beer go to sleep and next day you come back and like put your paper in on your desk and think okay now I'm gonna review this paper how do I criticize this one what do I not like about this paper right that's the one way to look at criticize your own research and and that will improve your thinking a lot so let's bring our attention back to our hopes and dreams it keeps coming back so here I came to realize maybe instead of just building tools to understand perhaps we need to do some groundwork what do I mean well this alien that we've been dealing with trying to generate explanations seems to be a different kind so maybe we should study them as if they're like new species in the wild so what do you do when you observe a new species in the wild you have a couple ways but one of the ways is to observational study so you saw some species in the wild far away first you just kind of watch them you watch them and see what are they like what are their habitat how they what what do they what are their values and whatnot and second way you can actually intervene and do a control study so we did something like this with reinforcement learning setup I'm going to talk about these two papers first paper emergent behaviors in multi-agent systems has been so cool who who saw this hide and seek video by open AI yeah it's so cool if you haven't seen it just Google it and watch it it's so fascinating I'm only covering the tip of an iceberg in this but at the end of this hide and seek episode at some point the agents reveal a discover a bug in this physical system and start like anti-gravity flying in the air and like shooting hiders everywhere a super interesting video you must watch so lots of that and also humanoid football and capture the flag from deepmind lots of interesting behaviors emerging that we observed here's the my favorite one but but these labels so here these are labels that are provided by open AI running and chasing for building and ramp use but and these ones were that oh human or humans when painstakingly one by one watch all these videos and label them manually so our question is can we is there better way to discover this emergent behaviors perhaps some nice visualization can help us explore this complex uh complex domain a little better so that's our goal so in this work we're going to again treat the 
agents like an observational study like us new species then we're going to do observational study and what that means is that we only get to observe State and action pair so where they are what are they doing or uh yeah what are they doing and we're going to discover agent Behavior by basically kind of like a clustering the data that's all we're gonna do and how do we do it pretty simple a generative model have you have covered the Bayesian generator graphical no gotcha okay so think about hi then also what you teach yeah so this is a graphical model um think about this as a fake or hypothetical data generation process so how does this work like I'm generating the data I created this system I'm going to first generate a joint latent embedding space for that represents all numbers that represents all the behaviors in the system and then I'm gonna for each agent I'm going to generate another embedding and each embedding when it's conditioned with State it's going to generate policy it's going to decide what it's going to do what action is given the state and the embedding pair and then what that whole thing generates is what you see the state and action pair so how does this work well and then given this you build a model and you do inference to learn all these parameters kind of same business as neural network but it's just have a little more structure so this is completely made up right this is like my idea of how these new species might work and our goal is to we're going to try this and see if anything useful comes up and the way you do this is one of the ways you do this is you optimize for a variation lower bound you don't need to know that it's very cool actually if if one gets into this exponential family business uh it's very cool CS 228 okay so here's one of the results that we had it's a domain called mujoko here we're going to pretend that we have two agents one controlling back leg and one controlling the front leg and on the right we're showing that joint embedding space Z Omega and z alpha while video is running I'm going to try to put the video back okay so now I'm going to select this is a visualization that we built or online you can you can go check it out you can select a little space in agent one space and you see it maps to pretty tight space and Agent Zero and it shows pretty decent running ability so that's cool and now I'm going to select somewhere else in agent one that maps to kind of disperse area in Agent Zero it looks like it's not not doing as well and this is just an Insight that we gain for this data only but like I was quickly able to identify ah this type mapping business kind of represents the good running behavior and bad running behaviors that's something that you can do pretty efficiently and now I'm going to show you something more interesting so of course we have to do this because we have the data it's it's here it's so cool so we apply this framework in the when ai's hide and seek this has four agent it looks like a simple game but it has pretty complex structure 100 dimensional observations uh five-dimensional action space so in this work remember that we pretend that we don't know the labels given by open AI we just shuffle them in the mix but we can color them our results with respect to their labels so again this is the result of Z Omega and z alpha the individual agents but the coloring is something that we didn't know before we just did it after the fact you can see in the Z Omega there's nice kind of pattern that we can roughly separate what 
human what makes sense to humans and what makes sense to us but remember the the green and gray kind of everywhere they're mixed so in this particular run of open AIS hide and seek it seemed that those two representations were kind of entangled the running and chasing the blue dots it seems to be pretty separate and distinguishable from all the other colors and that kind of makes sense because that's basis of playing this game so if you don't have that representation you have a you have a big trouble but in case of like orange which is fort building it's a lot more distinguishable in hiders and that makes sense because hiders are the ones building the fort then Seekers don't build the fort so we're in just a little more entangled in Seekers perhaps if Seekers had built more separate for building uh representation maybe they would have win this game so this work can we learn something interesting emerging behaviors by just simply observing the system the answer seems to be yes at least for the domains that we tested a lot more more complex domains should be tested but these are the ones we had but remember that these methods don't give you names of these clusters so you would have to go and investigate and click through and explore and if the cluster represents super superhuman concept this is not going to help you and I'll talk a little more about the work that that we do try to help them but this is not for you this is not going to help you there and also if you have access to the model and the reward signal you should use it why why dump it so next part we do use it I'm going to talk about let's work with Nico and Natasha and Shay again so here this time we're going to intervene we're going to be a little intrusive but hopefully we'll learn a little more so problem is that we're going to build a new multi-agent system we're going to build it from scratch such that we can do control testing but at the same time we shouldn't sacrifice the performance so we're going to try to match the the performance of the overall system and we do succeed I had this paper collaboration with Folks at Sanford actually here in 2020 where we propose this pretty simple idea which is you have on your own network why don't we embed Concepts in the middle of the bottleneck where one neuron represents three the other represents stripes and just train the model end to end and why are we doing this well because then at inference time you can actually intervene you can pretend you know predicting zebra I don't think three should matter so I'm gonna zero out this neuron and feed forward and see what happens so it's particularly useful in the medical setting where there are some features that doctors don't want we can cancel on and test so this is the work to extend this to RL setting it's actually not as simple extension then as we thought it came out to be pretty complex but essentially we're doing that and we're building each of the concept bottleneck for each agent and at the end of the day what you optimize is what you usually do typical PPO just think about this as make the make daughter system work plus minimizing the difference between the true concept and estimated concept that's all you do why are we doing this you can intervene you can pretend now agent 2 pretend that you can't see agent 1. 
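To make the mechanism concrete, here is a minimal sketch of a concept bottleneck with intervention. This is my own toy supervised-learning rendering rather than the paper's multi-agent PPO implementation: the feature sizes, loss weights, and concept names are invented, but the structure — route the prediction through named concept units, train with task loss plus concept supervision, then overwrite a concept unit at test time — is the idea just described.

```python
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    def __init__(self, n_features, n_concepts, n_outputs):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                     nn.Linear(64, n_concepts))
        self.head = nn.Linear(n_concepts, n_outputs)

    def forward(self, x, intervene=None):
        c = self.encoder(x)                       # predicted concepts
        if intervene is not None:                 # e.g. {"index": 3, "value": 0.0}
            c = c.clone()
            c[:, intervene["index"]] = intervene["value"]
        return self.head(c), c

model = ConceptBottleneck(n_features=20, n_concepts=5, n_outputs=2)
x = torch.randn(8, 20)
y_hat, concepts = model(x)

# Training objective = task loss + concept supervision (weight is a guess):
true_concepts, true_y = torch.randn(8, 5), torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(y_hat, true_y) + 0.5 * nn.MSELoss()(concepts, true_concepts)

# Intervention at test time: pretend "teammate orientation" (concept 3) is unobservable.
y_intervened, _ = model(x, intervene={"index": 3, "value": 0.0})
print((y_hat - y_intervened).abs().mean())        # how much the output shifts
```

In the RL version described in the talk, the task loss would be the usual PPO objective and the output shift would show up as a drop in reward after the intervention.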
what happens now that's what we're doing here we're going to do this in two domains first domain how many people looked at the uh saw this cooking game before yeah it's a it's a pretty commonly used cooking uh domain in reinforcement learning very simple we have two agents yellow and blue and they're going to make soup they can bring Three Tomatoes they get a war they wait for the tomato and bring the dishes a dish to the cooking pot they get a reward finally their goal is to deliver as many soups as possible given given some time and here Concepts that we use are agent position orientation agent has tomato it has Dish etc etc something that's immediately available to you already and you can of course tweak the environment to make it more fun so you can make it that they have to collaborate like you can build a wall between them so that they have to work together in order to serve any tomato soup or you can make them freely available you can work independently or together whatever your choice first uh just kind of send you the check was that you can you can detect the emerging behavior of coordination versus non-coordination so when the impassable environment when we made up that environment and suppose that RL system that we trained worked they were able to deliver some soups then you see that when we intervene uh this graph let me explain this is a reward of an agent one when we when there's no intervention so this is perfectly good world and when there was an intervention this is the average value of intervening on all Concepts but I'm also going to show you each concept soon if you compare left and right you can tell that in the right when we intervene reward deteriorated quite a lot for both of them and that's one way to see yeah they are coordinating because somehow intervening and at this concept impacted a lot of their performance but this is what what uh what was really interesting to me and I'm curious anyone can guess so this is the same graph as the one you saw before but except I'm plotting for intervention for each concept so I'm intervening team position team orientation team has tomato etc etc it turns out that they are using or rather when we intervene on team orientation the degradation of performance was the biggest to the extent that we believe that orientation had to do with subcoordination does anyone can guess why this might be the position there's orientation yes just a clarification question on the orientation is that like the direction that the teammate is producing yes so it seems like orientation would let you yes yes that's right yes where were you when I was when I was pulling my hair hair over this question yes that's exactly right and initially I was really puzzled like why not position because I expect it to be positioned but exactly that's exactly right so the orientation is the first signal that an agent can get about the next move over the other Asian because they're facing the pot they're going to the pot they're facing the Tomato they're going to get the tomato really interesting intuition but some too obvious to some but I needed this graph to work that out and of course you can use this to identify lazy agents if you look at the rightmost uh yellow agent our friend just chilling in the in the background and he's lazy and if you train our religion there's always some agents just hanging out they just not do anything and you can you can easily identify this by using this graph if I intervene it it just doesn't impact any any of their Rewards so the 
second domain we're going to look at a little more complex domain so this is uh it's studying inter-agent social dynamics so in this domain there is a little bit of tension this is called a cleanup we have four agents they only get rewards if they eat apples just yellow things or green things or apples uh but if you don't clean the river then Apple stops through all so somebody has to clean the river and you can see if there are four people trying to collect apples you can just stay someone else's to wait until someone else to to clean the river and then collect the apples and in fact that's sometimes what happens and Concepts here again are pretty uh pretty uh pretty common things position orientation and and pollution positions Etc so would we first plotted the same graph as the previous domain it it it it tells a story so the story here is that when I intervene on Asian one it seems to influence Asian too quite a lot if you look at these three different uh graph reward how reward was impacted when I intervened on Asian one it's agent three and four are fine but it seems that only agent two is influenced same with idle time same with the Intel agent distance so we were like oh maybe that's true but we keep wondering there's like a lot going on in this domain like how do we know this is the case so we decided to take another step so we're going to do a little more work here uh but but not a lot we're going to fill the graph to discover interagent relationships this is simplest dumbest way to build a graph but again I like simple things so how do you build a graph well suppose that you have you're building a graph between movies this is like not what we do but just to describe what we're trying to do we have each row if we want to build a matrix each row is a movie and each column consists of features of these movies so length Jungle of the movie and so on and the simplest way to build a graph is to do a regression so exclude I I throw and then we're going to regress over everyone else and that gives me beta which is kind of coefficient for for each of these and that beta represents the strength between uh strengths of the edges so this movie is more related to this movie and not the other movie and ta-da you have a graph it's like dummy story there's a lot of caveats to you shouldn't do this with them a lot of times but you know this is the simplest way to do it so we did the same thing here instead instead of movie we're going to use intervention on concept C on agent n as our node and for to build this Matrix we're going to use intervention outcome which wouldn't happen to be available without our framework for reward resource collected and and many other things and when you build this graph at the end of the day you get betas that represent relationship between these interventions okay so I had a graph of that Matrix apparently I removed before I came over but imagine there was a matrix there is a nicely highlighted between agent 1 and 4 and that only contradicting the original hypothesis that we had and this is the video of it so when we stared at that Matrix it it turns out that there's no High Edge strong edges between agent one and two so we were like that's weird but there is strong edges between agent one and four so we like dig deeper into it watched a lot of uh a lot of sessions to validate what's happening and it turns out that the story was a lot more complicated the ones orientation was important for four but when that fails agent 1 and 2 kinda gets cornered in and you can 
see that in the graph agent 4 kind of get a get agent one and four uh sorry one and two blue and yellow agent kind of gets in the corner together they kind of get stuck and this is simply just accidental because of the way that we built this environment it just happened but but the true the raw statistics wouldn't have told us this story that this was completely accidental in fact there was no correlation no coordination between agent one and two but only after the graph we realized this was the case now this might be one-off case but you know what a lot of emerging behaviors that we want to detect a lot of them will be one-off case and we really want to get to the truth of that rather than having some surface level statistics so can we build multi-agent system that enables intervention and performs as well the answer is yes there's a graph that shows the red line and blue line roughly a line that's good news we are performing as well um but remember these Concepts you need to label them or you should have some way of getting those Concepts positions and orientation there might be something that we would love to extend in the future before I go on any questions you shy [Music] cool all right so I did tell you that we're not gonna know uh move uh the solution to move 37 I still don't okay I still don't but I'll tell you a little bit of work that I'm currently doing I'm really excited about uh that we started thinking you know what will this understanding move 37 happen before within my lifetime and I was like oh maybe not but I kind of want it to happen so we start this is all about research right you started carving out a space where things are a little resolvable and you try to attack that problem so this is our attempt to do exactly that to get a little closer to our ultimate goal or my ultimate goal of understanding that move 37. 
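Before moving on, here is a concrete rendering of the leave-one-out regression used a couple of paragraphs back to build the inter-agent graph. The data are toy numbers, and Lasso stands in for whatever regularized regression one prefers; in the actual setup each node is "intervene on concept c of agent n" and each column is an intervention outcome such as reward, idle time, or resources collected.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_nodes, n_outcomes = 8, 40
# rows = nodes, columns = outcome features measured under that intervention
M = rng.normal(size=(n_nodes, n_outcomes))
M[4] = 0.8 * M[1] + 0.2 * rng.normal(size=n_outcomes)   # plant a relationship

adjacency = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    rest = np.delete(np.arange(n_nodes), i)
    # regress node i's outcomes on everyone else's; coefficients = edge strengths
    reg = Lasso(alpha=0.05).fit(M[rest].T, M[i])
    adjacency[i, rest] = reg.coef_

print(np.round(adjacency, 2))   # a strong edge should appear between nodes 1 and 4
```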
so before that how many people here know Alpha Zero from T my yes Alpha zero is a self-trained uh self-trained chess playing machine that beats that has higher yellow rating than any other humans and beats stockfish which is arguably no existing human can beat stock fish so in the previous paper we try to discover human chess Concepts in this network so when does concept like material imbalance appear in its Network which layer and when in the training time and which we call what when and where plots and we also compare the evolution of opening moves between humans and Alpha zero these are the first couple moves that you make when you play chess and as you can see there's a pretty huge difference left is human right is Alpha zero it turns out that Alpha zero can master or supposedly Master a lot of variety of different types of openings openings can be very aggressive openings can be very boring could be very long range targeting for long range strategy or short range very different so that begs a question what does alpha zero know that humans don't know don't you want to learn what that might be so that's what we're doing right now we're actually almost um we're about to about to evaluate so the goal of this war is please teach the world chess champion on new chess superhuman chess strategy and we just got yes from Magnus Carlson who is the world chess champion he just lost the match I know but but you know he still he's still champion in my mind he's still championed in two categories actually so the way that we're doing this is we're going to discover new chess strategy by explicitly explicitly for getting existing chess strategy which we have a lot of data for and then we're going to learn a graph this time a little more complicated graph by uh using the the existing relationships between existing Concepts so that we can get a little bit of more idea of what the New Concept might look like and Magnus Carlson uh so my favorite part about this work I talk about carving out my favorite part about this work is that the evaluation is going to be pretty clear so it's not just like Magnus coming in inside say oh your work is kind of nice and and say nice things about our work no Magnus actually has to solve some puzzles and we will be able to evaluate him whether he did it or not so it's like a kind of success and fail but I'm extremely excited this kind of work I can only do because of Lisa who is a champion herself but also a PhD student at Oxford and like she played against Magnus in the past and many others chestplates in the world and she's going to be the ultimate uh pre-super human filtering to filter out these Concepts that will eventually get to Magnus so I'm super excited about this I have no results but it's coming up I'm excited yes generator because it's already so many puzzles out there so I'm assuming that there's probably something new what are the problems puzzles are actually pretty simple so the way that we generate concepts are within the embedding space of alpha zero and given that because Alpha zero has really weird architecture so every single latent layer in Alpha zero has the exact same position as a chessboard that's just the way that they decide to do it so because of that we can actually identify or generate the board positions that corresponds to that concept and because we have MCTS we can predict what move it's going to make given that board position because at inference time it's actually deterministic of the whole lot plus zero thing so these we have a lot of 
board positions, and that's all you need. For puzzles, you give Magnus a board position and then ask him to make a move. We explain the concept, then give Magnus more board positions and see if he can apply that concept that he just learned, for example. Right, but it seems like you're kind of underneath-- yeah, so if I were to ask Stockfish to solve those puzzles, that would be a different question, because we're interested in whether we can teach a human, not Stockfish. Stockfish might be able to do it-- that's actually an interesting thing we could now think about-- but our goal is to teach just one superhuman concept. If I have, for example, 10,000 superhuman concepts and only three of them are digestible by Magnus, that's a win. That would be a big win for this type of research question. All right, so to wrap up: small steps towards our hopes and dreams. We talked about the gap between what machines know versus what we think machines know, three ideas for why that might be true, and three different angles we can try to use to attack those questions and bridge that gap. We talked about studying aliens-- these machines-- in observational studies or controlled studies. There are many other ways to study a species-- I'm not an expert, but anthropology and other humanities studies would know a lot more about this-- and maybe, just maybe, we can try to understand move 37 at some point, hopefully within my lifetime, through this chess project that I'm very excited about. Thank you. [Applause] You talked about interpretability research that crosses NLP, vision, and RL. Do you think there's much to be gained by taking certain interpretability techniques from one modality into other modalities? All right, so it depends on your goal. Think about fairness research, which builds on a strong mathematical foundation that's applicable-- or hopefully applicable-- to any question around fairness. But if your goal is to actually solve a fairness issue at hand for somebody, a real person in the world, that's a completely different question; you would have to customize it for a particular application. So there are two venues, and I think something similar is true for interpretability. The theory work that I talked about-- SHAP and IG-- is used across domains like vision and text, so that theory paper would be applicable across domains. For things like RL, and the way that we build that generative model, you would need to test a little bit more to make sure it works in NLP-- I don't even know how to think about agents in NLP yet-- so it will need a little bit of tweaking. But both directions are fruitful. John has a question. I saw the recent work in which some amateur Go players found a very tricky strategy to trick, I think it was AlphaGo, and that seemed like a concept that humans know that machines don't, in that Venn diagram-- any thoughts about that? Yeah, actually, it's funny you mention that. Lisa can beat AlphaZero pretty easily, and it's a similar idea, because if you kind of know what the most unseen, out-of-distribution moves are, then he or she can break AlphaZero pretty easily. I guess that if Lee Sedol had known something more about AI, then maybe he would have tried to confuse AlphaGo. But the truth is, it takes a lot-- it's a high-stakes game. He's a famous star worldwide, so he wouldn't want to make a move that would be seen as a complete mistake, like the one that Magnus made a couple of days ago that got on the news feeds everywhere, where he made this century-wide mistake, and that probably hurts. Any other questions? For AlphaZero, for example, is this just about building machine learning that plays games really well? Well, these works that I've presented are pretty new, but there has been a bit of discussion in robotics about potentially applying these to robotics, and of course I can't talk about details. But for reinforcement learning in the wild, the things people worry about are the surprises, right? If you have a test for it-- if you have a unit test for it-- you're never going to fail, because you're going to test before you deploy. I think the biggest risk for any of these deployed systems is the surprises that you didn't expect. So my work around visualization, and others, aims to help you with that. We may not know the names of these surprises, but here's a tool that helps you better discover those surprises before someone else does, or someone gets harmed. This is kind of an open-ended question, but we're talking about a lot of ways in which we try to visualize or understand what's going on in the representations inside the machine. I was wondering whether we could turn it around and try to teach machines to tell us, using our language, what they're doing-- align their representations with representations of ours-- and get the machine to do the translation for us instead of us doing the translation into English ourselves. Yeah, great question. It's a really interesting question because that's something that I kind of tried in my previous work called Testing with Concept Activation Vectors. That was to map human language into machine space, so that they can only speak our language-- because I understand my language, so just talk to me in my language. The challenge is, how would you do that for something like AlphaZero? We don't have a vocabulary for something like move 37, so there's going to be a lot of valuable knowledge that we might not get from the machine. So I think the approach has to go both ways. We should leverage as much as we can, while acknowledging that even that mapping-- trying to map our language to machines-- is not going to be perfect, because it's a kind of proxy for what we think. Take what a penguin is: there's psychology research that says everyone thinks very differently about what a penguin is. If I show a picture of a penguin, everyone is thinking of a different penguin right now, right? Australia has the cutest penguin, the fairy penguin-- I'm thinking of that one; I don't know how many people are. So given that we're so different, the machine's going to think something else again. How do you bridge that gap? Extend that to 100 concepts, and composing those concepts, and it's going to get out of hand very soon. So there are pros and cons, and I'm into both of them. I think for some applications, exclusively using human concepts is still very helpful-- it gets you halfway-- but my ambition is that we shouldn't stop there. We should benefit from them by having them teach us new things that we didn't know before. But what about trying to locate specific strategies in the embedding space? What are the alternatives? I guess I don't know the alternatives, just because I feel like assuming the wrong thing is possible-- maybe it's some transformed space of the embedding space in AlphaZero, maybe it's a function applied to that embedding space-- so thinking about it as a raw vector could be a dead end. We'll see how this chess project goes; in a couple of months I might rethink my strategy. But interesting thought. Yeah, so I'm a psychology major, and I realize that a lot of this stuff is, at least in part, about how we can figure out how our brains work. So would there be stuff from neuroscience that's applicable to neural networks, and, on the contrary, would interpretability studies of neural networks help us understand our own brain? Yeah, I talked to Geoffrey Hinton-- you know, he would really like this. You probably know about this history; I think that's how it all started, right? The whole neural network idea was to understand the human brain. So that's the answer to your question. Interestingly, however, in my view there are some biases that we have in neuroscience because of the limitations of tools-- physical tools-- and the availability of humans that you can poke in, and I think that influences interpretability research. I'll try to give you an example of what I mean. You know the horizontal-line and vertical-line neurons in the cat brain? They put the probe in and figured out that this one neuron detects vertical lines, and you can validate it-- it's really cool; the video is still online. So why did they do that? Well, because you had one cat-- a poor cat-- and you could only probe a few neurons at a time, right? And that influenced a lot of interpretability research to look at, or be very focused on, neuron-wise representations, like this one neuron must be very special. I actually think that's not true; that was limited by our physical ability to probe organisms. In a neural network, you don't have to do that. You can apply functions to embeddings, you can change the whole embedding to something else, override it. So that kind of thinking is actually an obstacle rather than a help. Okay, maybe we should call it there. So for Thursday, when you're not having a lecture on Thursday, there'll be TAs and me here. If you have any last-minute panics on your project, I think we might have some insight to help you-- we probably won't, actually. Final CS224N lecture today. [Applause]
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2021_Lecture_17_Model_Analysis_and_Explanation.txt
Welcome to CS224N, lecture 17, Model Analysis and Explanation. OK, look at us. We're here. Let's start with some course logistics. We have updated the policy on the guest lecture reactions. They're all due Friday, all at 11:59 PM. You can't use late days for this, so please get them in. Watch the lectures. They're awesome lectures. They're awesome guests, and you get something like half a point for each of them. And yeah, all three can be submitted up through Friday. OK, so final projects, remember that the due date is Tuesday. It's Tuesday at 4:30 PM, March 16th. And let me emphasize that there's a hard deadline three days from then, on Friday: we won't be accepting, even for additional points off, assignments-- sorry, final projects-- that are submitted after the 4:30 deadline on Friday. We need to get these graded and get grades in. So whew, it's the end stretch, week 9. Week 10 is really the lectures are us giving you help on the final projects. So this is really the last week of lectures. Thanks for all your hard work and for asking awesome questions in lecture, and in office hours and on Ed. And let's get right into it. So today, we get to talk about one of my favorite subjects in natural language processing. It's model analysis and explanation. So first, we're going to do what I love doing, which is motivating why we want to talk about the topic at all. We'll talk about how we can look at a model at different levels of abstraction to perform different kinds of analysis on it. We'll talk about out of domain evaluation sets. So this will feel familiar to the RobustQA folks. Then we'll talk about sort of trying to figure out, for a given example, why did it make the decision that it made? It had some input, it produced some output. Can we come up with some sort of interpretable explanation for it? And then we'll look at, actually, the representations of the models. So these are the sort of hidden states, the vectors that are being built throughout the processing of the model, try to figure out if we can understand some of the representations and mechanisms that the model is performing. And then we'll actually come back to sort of one of the kind of default states that we've been in this course, which is trying to look at model improvements, removing things from models, seeing how it performs, and relate that to the analysis that we're doing in this lecture, show how it's not all that different. OK, so if you haven't seen this xkcd, now you have. And it's one of my favorites. I'm going to say all the words. So person A says, this is your machine learning system? Person B says, yep, you pour the data into this big pile of linear algebra and then collect the answers on the other side. Person A, what if the answers are wrong? Then person B, just stir the pile until they start looking right. And I feel like at its worst, deep learning can feel like this from time to time. You have a model. Maybe it works for some things, maybe it doesn't work for other things. You're not sure why it works for some things and doesn't work for others. And the changes that we make to our models, they're based on intuition, but frequently-- what have the TAs told everyone in office hours? Sometimes you just have to try it and see if it's going to work out because it's very hard to tell. It's very, very difficult to understand our models on sort of any level. And so today we'll go through a number of ways we're trying to sort of carve out little bits of understanding here and there.
So beyond it being important because it's an xkcd comic, why should we care about understanding our models? One, is that we want to know what our models are doing. So here you have a black box. Black box functions are sort of this idea that you can't look into them and interpret what they're doing. You have an input sentence, say, and then some output prediction. Maybe this black box is actually your final project model, and it gets some accuracy. Now, we summarize our models. And in your final projects you'll summarize your model with sort of one or a handful of summary metrics of accuracy, or F1 score, or BLEU score or something, but it's a lot of model to explain with just a small number of metrics. So what do they learn? Why do they succeed? And why do they fail? What's another motivation? So we want to sort of know what our models are doing, OK, but maybe that's because we want to be able to make tomorrow's model. So today, when you're building models in this class, at a company, you start out with some kind of recipe that is known to work either at the company or because you have experience from this class, and it's not perfect, right? It makes mistakes. You look at the errors. And then over time, you take what works, maybe, and then you find what needs changing. So it seems like maybe adding another layer to the model helped. And maybe that's a nice tweak and the model performance gets better, et cetera. And incremental progress doesn't always feel exciting, but I want to pitch to you that it's actually very important for us to understand how much incremental progress can kind of get us towards some of our goals so that we can do a better job of evaluating when we need big leaps, when we need major changes, because there are problems that we're attacking with our incremental sort of progress and we're not getting very far. OK, so we want to make tomorrow's model. Another thing that's, I think, very related to and sort of both a part of and bigger than this field of analysis is model biases. So let's say you take your Word2vec analogies solver from GloVe or Word2vec that is from assignment 1, and you give it the analogy, man is to computer programmer as woman is to, and it gives you the output, homemaker-- this is a real example from the paper below-- you should be like, wow. Well, I'm glad I know that now. And of course, you saw the lecture from Yulia Tsvetkov last week. You say, wow, I'm glad I know that now, and that's a huge problem. What did the model use in its decision? What biases is it learning from data and possibly making even worse? So that's the kind of thing that you can also do with model analysis, beyond just making models better according to some sort of summary metric as well. And then another thing, we don't just want to make tomorrow's model. And this is something that I think is super important. We don't just want to look at that time scale. We want to say, what about 10, 15, 25 years from now? What kinds of things will we be doing? What are the limits? What can be learned by language model pretraining? What's the model that will replace the transformer? What's the model that will replace that model? What does deep learning struggle to do? What are we sort of attacking over and over again and failing to make significant progress on? What do neural models tell us about language, potentially? There's some people who are primarily interested in understanding language better using neural networks. Cool.
How are our models affecting people, transferring power between groups of people, governments, et cetera? That's an excellent type of analysis. What can't be learned via language model pretraining? So that's sort of the complementary question there. If you sort of come to the edge of what you can learn via language model pretraining, is there stuff that we need total paradigm shifts in order to do well? So all of this falls under some category of trying to really deeply understand our models and their capabilities. And there's a lot of different methods here that we'll go over today. And one thing that I want you to take away from it is that they're all going to tell us some aspect of the model, elucidate some kind of intuition or something. But none of them are we going to say, aha, I really understand 100% about what this model is doing now. So they're going to provide some clarity, but never total clarity. And one way, if you're trying to decide how you want to understand your model more, the thing you should sort of start out by thinking about is, at what level of abstraction do I want to be looking at my model? So the sort of very high level abstraction-- let's say you've trained a QA model to estimate the probabilities of start and end indices in a reading comprehension problem, or you've trained a language model that assigns probabilities to words in context. You can just look at the model as that object. So it's just a probability distribution defined by your model. You are not looking into it any further than the fact that you can sort of give it inputs and see what outputs it provides. So that's not even-- who even cares if it's a neural network. It could be anything, but it's a way to understand its behavior. Another level of abstraction that you can look at, you can dig a little deeper, you can say, well, I know that my network is a bunch of layers that are kind of stacked on top of each other. You've got sort of maybe your transformer encoder with one layer, two layer, three layer. You can try to see what it's doing as it goes deeper in the layers. So maybe your neural model is the sequence of these vector representations. A third option of sort of specificity is to look at as much detail as you can. You've got these parameters in there, you've got the connections in the computation graph. So now you're sort of trying to remove all of the abstraction that you can and look at as many details as possible. And all three of these sort of ways of looking at your model and performing analysis are going to be useful and will actually sort of travel slowly from 1 to 2 to 3 as we go through this lecture. OK, so we haven't actually talked about any analyses yet, so we're going to get started on that now. And we're starting with the sort of testing our model's behaviors. So would we want to see, will my model perform well? I mean, the natural thing to ask is, how does it behave on some sort of test set, right? And so we don't really care about mechanisms yet, why is it performing this, by what method is it making its decision. Instead, we're just interested in sort of the more higher level of abstraction of, does it perform the way I want it to perform? So let's take our model evaluation that we are already doing and sort of recast it in the framework of analysis. So you've trained your model on some samples from some distribution. So you've got input/output pairs of some kind. So how does the model behave on samples from the same distribution? 
It's a simple question and it's sort of known as in-domain accuracy, or you can say that the samples are IID and that's what you're testing on. And this is just what we've been doing this whole time. It's your test set accuracy or F1 or BLEU score. And so you've got some model with some accuracy, and maybe it's better than some model with some other accuracy on this test set, right? So this is what you're doing as you're iterating on your models and your final project as well. You say, well, on my test set, which is what I've decided to care about for now, model A does better. They both seem pretty good. And so maybe I'll choose model A to keep working on. Maybe I'll choose it if you were putting something into production. But remember back to this idea that it's just one number to summarize a very complex system. It's not going to be sufficient to tell you how it's going to perform in a wide variety of settings. OK, so we've been doing this. This is model evaluation as model analysis. Now we are going to say, what if we are not testing on exactly the same type of data that we trained on? So now we're asking, did the model learn something such that it's able to sort of extrapolate or perform how I want it to on data that looks a little bit different from what it was trained on? And we're going to take the example of natural language inference. So to recall the task of natural language inference-- and this is the Multi-NLI data set that we're pulling our definition from. You have a premise: he turned and saw Jon sleeping in his half-tent. And you have a hypothesis: he saw Jon was asleep. And then you give them both to a model, and this is the model that we had before that had some good accuracy. And the model is supposed to tell whether the hypothesis is sort of implied by the premise or contradicting. So it could be contradicting, maybe, if the hypothesis is Jon was awake, for example, or he saw Jon was awake. Maybe that'd be contradiction. Neutral if sort of both could be true at the same time, so to speak. And then entailment, in this case, it seems like you're saying that the premise implies the hypothesis. And so you would say, probably, this is likely to get the right answer since the accuracy of the model is 95%. 95% of the time, it gets the right answer. And we're going to dig deeper into that. What if the model is not doing what we think we want it to be doing in order to perform natural language inference? So in a data set like Multi-NLI, the authors who gathered the data set will have asked humans to perform the task and gotten the accuracy that the humans achieved. And models nowadays are achieving accuracies that are around where humans are achieving, which sounds great at first. But as we'll see, it's not the same as actually performing the task more broadly in the right way. So what if the model's not doing something smart, effectively? We're going to use a diagnostic test set of carefully constructed examples that seem like things the models should be able to do to test for a specific skill or capacity. In this case, we'll use HANS. So HANS is the Heuristic Analysis for NLI Systems data set, and it's intended to take systems that do natural language inference and test whether they're using some simple syntactic heuristics. What we'll have in each of these cases, we'll have some heuristic. We'll talk through the definition. We'll get an example. So the first thing is lexical overlap.
So the model might do this thing where it assumes that a premise entails all hypotheses constructed from words in the premise. So in this example, you have the premise, the doctor was paid by the actor. And then the hypothesis is, the doctor paid the actor. And you'll notice that in bold here, the doctor, OK, and then paid, and then the actor, right? And so if you use this heuristic, you will think that "the doctor was paid by the actor" implies the doctor paid the actor. That does not imply it, of course. And so you could expect the model-- you want the model to be able to do this. It's somewhat simple. But if it's using this heuristic, it won't get this example right. Next is subsequence heuristics. So here, if the model assumes that the premise entails all of its continuous subsequences, it will get this one wrong as well. So this example is, "the doctor near the actor danced." That's the premise. The hypothesis is, "the actor danced." Now, this is a simple syntactic thing. The doctor is doing the dancing near the actor. It's this prepositional phrase. And so the model sort of uses this heuristic, oh, look, the actor danced. That's a subsequence, entailed, awesome. And it'll get this one wrong as well. And here's another one that's a lot like subsequence. But so if the model thinks that the premise entails all complete subtrees, so this is sort of fully formed phrases. So the artist slept here is a fully formed sort of subtree. "If the artist slept, the actor ran." And then that's the premise. Does it entail the hypothesis, the actor slept? No. Sorry, the artist slept. That does not entail it because this is in that conditional. OK, now let me pause here for some questions before I move on to see how these models do. Anyone unclear about how this sort of evaluation is being set up? No? Cool. OK, so how do models perform? That's sort of the question of the hour. What we'll do is we'll look at these results from the same paper that released the dataset. So they took four strong Multi-NLI models with the following accuracies. So the accuracies here are something between 60 and 80 something, 80%. BERT over here is doing the best, OK. And in-domain, right, in that first sort of setting that we talked about, you get these reasonable accuracies. And that is sort of what we said before about it like looking pretty good. And when we evaluate on HANS, in this setting here, we have examples where the heuristics we talked about actually work. So if the model's using the heuristic, it will get this right. And it gets very high accuracies. And then if we evaluate the model in the settings where if it uses the heuristic, it gets the examples wrong, maybe BERT's doing epsilon better than some of the other stuff here. But it's a very different story. OK, and you saw those examples. They're not complex in our sort of own idea of complexity. And so this is why it sort of feels like a clear failure of the system. Now, you can say, though, that, well, maybe the training data sort of didn't have any of those sort of phenomena, so the model couldn't have learned not to do that. And that's sort of a reasonable argument, except, well, BERT is pretrained on a bunch of language text. So you might hope, you might expect, you might hope that it does better. OK, so we saw that example of models performing well on examples that are like those that it was trained on, and then performing not very well at all on examples that seem reasonable but are sort of a little bit tricky. 
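(To make this kind of HANS-style diagnostic evaluation concrete, here is a minimal sketch of bucketing accuracy by heuristic. The nli_model.predict call and the label strings are illustrative assumptions, not any particular library's API; the three examples are the ones from the slides.)

```python
from collections import defaultdict

hans_examples = [
    # (premise, hypothesis, heuristic, gold label)
    ("The doctor was paid by the actor.", "The doctor paid the actor.",
     "lexical_overlap", "non-entailment"),
    ("The doctor near the actor danced.", "The actor danced.",
     "subsequence", "non-entailment"),
    ("If the artist slept, the actor ran.", "The artist slept.",
     "constituent", "non-entailment"),
]

def evaluate_by_heuristic(nli_model, examples):
    """Accuracy bucketed by (heuristic, gold label), so we can see whether the
    model only succeeds when the heuristic happens to agree with the label."""
    correct, total = defaultdict(int), defaultdict(int)
    for premise, hypothesis, heuristic, gold in examples:
        pred = nli_model.predict(premise, hypothesis)  # assumed to return a label string
        bucket = (heuristic, gold)
        total[bucket] += 1
        correct[bucket] += int(pred == gold)
    return {bucket: correct[bucket] / total[bucket] for bucket in total}
```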
Now we're going to take this idea of having a test set that we've carefully crafted and go in a slightly different direction. So now we're asking, what does it mean to try to understand the linguistic properties of our models? So that syntactic heuristics question was one thing for natural language inference, but can we sort of test whether the models think certain things are sort of right or wrong as language models? And the first way that we'll do this is we'll ask, well, how do we think about what humans think of as good language? How do we evaluate their sort of preferences about language? And one answer is minimal pairs. And the idea of a minimal pair is that you've got one sentence that sounds OK to a speaker. So this sentence is, the chef who made the pizzas is here. It's called an acceptable sentence, at least to me. And then with a small change, a minimal change, the sentence is no longer OK to the speaker. So the chef who made the pizzas are here. And this-- oops. This should be present tense verbs. In English, present tense verbs agree in number with their subject when they are third person. So chef, pizzas, OK. And this is sort of a pretty general thing. Most people don't like this. It's a misconjugated verb, and so the syntax here looks like you have the chef who made the pizzas. And then this arc of agreement in number is requiring the word "is" here to be singular "is" instead of plural "are," despite the fact that there's this noun, pizzas, which is plural, closer linearly. Comes back to dependency parsing. We're back, OK? And what this looks like in the tree structure, right, is, well, "chef" and "is" are attached in the tree. "Chef" is the subject of "is," "pizzas" is down here in this subtree, and so that subject-verb relationship has this sort of agreement thing. So this is a pretty sort of basic and interesting property of language that also reflects the syntactic, sort of hierarchical structure of language. So we've been training these language models, sampling from them, seeing that they get interesting things. And they tend to seem to generate syntactic content. But does it really understand, or does it behave as if it understands, this idea of agreement more broadly? And does it sort of get the syntax right so that it matches the subjects and the verbs? But language models can't tell us exactly whether they think that a sentence is good or bad. They just tell us the probability of a sentence. So before, we had acceptable and unacceptable. That's what we get from humans. And the language model's analog is just, does it assign higher probability to the acceptable sentence in the minimal pair, right? So you have the probability under the model of the chef who made the pizzas is here, and then you have the probability under the model of the chef who made the pizzas are here. And you want this probability here to be higher. And if it is, that's sort of like a simple way to test whether the model got it right effectively. And just like in HANS, we can develop a test set with very carefully chosen properties, right? So most sentences in English don't have terribly complex subject-verb agreement structure with a lot of words in the middle, like "pizzas," that are going to make it difficult. So if I say, the dog runs, sort of no way to get it wrong because there's no-- the syntax is very simple. So we can create or we can look for sentences that have these things called attractors in the sentence.
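(Here is a minimal sketch of that minimal-pair comparison, using GPT-2 from the HuggingFace transformers library as a convenient stand-in for the LSTM language models discussed below; the two sentences are the chef/pizzas pair from above.)

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(sentence):
    # Total log-probability of the sentence under the model (the first token is
    # excluded, since it has no left context to be conditioned on).
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # out.loss = mean negative log-likelihood per predicted token
    return -out.loss.item() * (ids.shape[1] - 1)

acceptable = "The chef who made the pizzas is here."
unacceptable = "The chef who made the pizzas are here."
# The minimal-pair test: does the model prefer the acceptable sentence?
print(sentence_logprob(acceptable) > sentence_logprob(unacceptable))
```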
So "pizzas" is an attractor because the model might be attracted to the plurality here and get the conjugation wrong. So this is our question. Can language models sort of very generally handle these examples with attractors? So we can take examples with zero attractors, see whether the model gets the minimal pairs evaluation right. We can take examples with one attractor, to attractors, you can see how people would still reasonably understand these sentences, right? Chef who made the pizzas and prepped the ingredients is-- it's still the chef who is, and then on and on and on. It gets rarer, obviously, but you can have more and more attractors. And so now we've created this test set that's intended to evaluate this very specific linguistic phenomenon. So in this paper here, Kuncoro et al trained an LSTM language model on a subset of Wikipedia back in 2018, and they evaluate it sort of in these buckets that are specified by the paper that sort of introduce subject-verb agreement to the NLP field, more recently at least. And they evaluate it in buckets based on the number of attractors. And so in this table here that you're about to see, the numbers are sort of the percent of times that you get this-- assign higher probability to the correct sentence in the minimal pair. So if you were just to do random or majority class, you get these errors. Oh, sorry. It's the percent of times that you get it wrong. Sorry about that. So lower is better. And so with no attractors, you get very low error rates. So this is 1.3 error rate with a 350 dimensional LSTM. And with one attractor, your error rate is higher. But actually, humans start to get errors with more attractors, too. So zero attractors is easy. The larger the LSTM, it looks like, in general, the better you're doing, right? So the smaller model's doing worse, OK. And then even on sort of very difficult examples with four attractors, which-- try to think of an example in your head. The chef made the pizzas and took out the trash and-- sort of has to be this long sentence. The error rate is definitely higher, so it gets more difficult. But it's still relatively low. And so even on these very hard examples, models are actually performing subject-verb number agreement relatively well. Very cool. OK, here's some examples that our model got wrong. This is actually a worse model than the ones from the paper that was just there. But I think, actually, the errors are quite interesting. So here's the sentence. The ship that the player drives has a very high speed. Now, this model thought that was less probable than, the ship that the player drives have a very high speed. My hypothesis, right, is that it sort of mis-analyzes drives as a plural noun, for example. It's sort of a difficult construction there. I think it's pretty interesting. Likewise, here, this one is fun. The lead is also rather long. 5 paragraphs is pretty lengthy. So here, "5 paragraphs" is a singular noun together because it's a unit of length, I guess. But the model thought that it was more likely to say, five paragraphs are pretty lengthy, because it's referring to this sort of five paragraphs as the five actual paragraphs themselves, as opposed to a single unit of length describing the lead. Fascinating, OK. Maybe questions again? So I guess there are a couple. Can we do the similar heuristic analysis for other tasks, such as Q&A classification? Yes. Yes. 
I think that it's easier to do this kind of analysis for the HANS style analysis with question answering and other sorts of tasks because you can construct examples that similarly have these heuristics and then have the answer depend on the syntax or not. The actual probability of one sentence is higher than the other, of course, is sort of a language model dependent thing. But the idea that you can sort of develop kind of bespoke test sets for various tasks, I think, is very, very general and something I think is actually quite interesting. Yes, so I won't go on further, but I think the answer's just yes. So there's another one. How do you know where to find these failure cases? Maybe that's the right time to advertise linguistics classes. Sorry. You're still very quiet over here. How do we find what? How do you know where to find these failure cases? Oh, interesting. Yes. How do we know where to find the failure cases? That's a good question. I think I agree with Chris, that actually thinking about what is interesting about things in language is one way to do it. Kind of the heuristics that we saw in our language model-- sorry, in our NLI models with HANS, you can kind of imagine that if the model was sort of ignoring facts about language and sort of just doing this sort of rough bag of words with some extra magic, then it would do well about as bad as it's doing here. And these sorts of ideas about understanding that this statement, if the artist slept, the actor ran, does not imply the artist slept, is the kind of thing that maybe you'd think up on your own, but also you'd spend time pondering about and thinking broad thoughts about in linguistics curricula as well. So anything else, Chris? So there's also-- well, I guess someone is also saying-- I think it's about the sort of intervening verbs example, or intervening nouns-- sorry-- example. But the data set itself probably includes mistakes with higher attractors. Yeah. Yeah, that's a good point. Yeah, because humans make more and more mistakes as the number of attractors gets larger. On the other hand, I think that the mistakes are fewer in written text than in spoken. Maybe I'm just making that up. That's what I think. But yeah, it would be interesting to actually go through that test set and see how many of the errors the really strong model makes are actually due to be sort of observed form being incorrect. I'd be super curious. OK, should I move on? Yeah. Great. OK, so what does it feel like we're doing when we are kind of constructing these sort of bespoke, small, careful test sets for various phenomena? Well, it is sort of feels like unit testing. And in fact, this sort of idea has been brought to the fore, you might say, in NLP unit tests, but for these NLP neural networks. In particular, the paper here that I'm citing at the bottom suggests this minimum functionality test. You want a small test set that targets a specific behavior. That should sound like some of the things that we've already talked about. But in this case, we're going to get even more specific. So here's a single test case. We're going to have an expected label, what was actually predicted, whether the model passed this unit test. And the labels are going to be sentiment analysis here. So negative label, positive label, or neutral are the three options. And the unit test is going to consist simply of sentences that follow this template. I, then a negation, the positive verb, and then the thing. 
So if you negate a positive verb, it means the sentence is negative, right? And so here's an example. I can't say I recommend the food. The expected label is negative. The answer that the model provided-- and this is, I think, a commercial sentiment analysis system-- so it predicted positive. And then, I didn't love the flight. The expected label was negative, and then the predicted answer was neutral. And this commercial sentiment analysis system gets a lot of what you could imagine are pretty reasonably simple examples wrong. And so what Ribeiro et al 2020 showed is that they could actually provide a system that sort of had this framework of building test cases for NLP models to ML engineers working on these products and give them that interface, and they would actually find bugs, bugs being categories of high error, right, find bugs in their models that they could then kind of try to go and fix, and that this was kind of an efficient way of trying to find things that were simple and still wrong with what should be pretty sophisticated neural systems. So I really like this, and it's sort of a nice way of thinking more specifically about what are the capabilities, in sort of precise terms, of our models. And all together now, you've seen problems in natural language inference. You've seen language models actually perform pretty well at the language modeling objective. But then you just saw an example of a commercial sentiment analysis system that sort of should do better and doesn't. And this comes to this really, I think, broad and important takeaway, which is, if you get high accuracy on the in-domain test set, you are not guaranteed high accuracy on even what you might consider to be reasonable out-of-domain evaluations. And life is always out of domain, and if you're building a system that will be given to users, it's immediately out of domain, at the very least because it's trained on text that's now older than the things that the users are now saying. So it's a really, really important takeaway that your sort of benchmark accuracy is a single number that does not guarantee good performance on a wide variety of things. And from a what are our neural networks doing perspective, one way to think about it is that models seem to be learning the data set, fitting sort of the fine-grained sort of heuristics and statistics that help it fit this one data set, as opposed to learning the task. So humans can perform natural language inference. If you give them examples from whatever data set, once you've told them how to do the task, they'll be very generally strong at it. But you take your MNLI model and you test it on HANS, and it got whatever that was below chance accuracy. That's not the kind of thing that you want to see. So it definitely learns the data set well because the accuracy in domain is high. But our models are seemingly not frequently learning sort of the mechanisms that we would like them to be learning. Last week, we heard about language models and sort of the implicit knowledge that they encode about the world through pretraining. And one of the ways that we saw to interact with language models was providing them with a prompt, like, Dante was born in [MASK], and then seeing if it puts high probability on the correct continuation, which requires you to access knowledge about where Dante was born. And we didn't frame it this way last week, but this fits into the set of behavioral studies that we've done so far. This is a specific kind of input. You could ask this for multiple people.
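(Both kinds of behavioral test-- the sentiment minimum functionality test above and these fill-in-the-blank knowledge prompts-- can be generated from simple templates. Here is a minimal sketch for the sentiment case; sentiment_model is a hypothetical stand-in for whatever system is being tested, assumed to return "positive", "negative", or "neutral".)

```python
negations      = ["can't say I", "didn't", "never"]
positive_verbs = ["recommend", "love", "enjoy"]
things         = ["food", "flight", "service"]

# Every filled-in instance of "I {negation} {positive verb} the {thing}."
# should be labeled negative.
test_cases = [
    (f"I {neg} {verb} the {thing}.", "negative")
    for neg in negations for verb in positive_verbs for thing in things
]

def run_mft(sentiment_model):
    """Run the minimum functionality test against a callable text -> label."""
    failures = [(text, pred) for text, expected in test_cases
                if (pred := sentiment_model(text)) != expected]
    print(f"{len(failures)}/{len(test_cases)} test cases failed")
    return failures
```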
We could swap out Dante for other people. We could have swapped out born in for, I don't know, died in or something. And then there are test suites again. And so it's all connected. OK, so I won't go too deep into the knowledge of language models in terms of world knowledge because we've gone over it some, but when you're thinking about ways of interacting with your models, this sort of behavioral study can be very, very general, even though, remember, we're still at this highest level of abstraction where we're just looking at the probability distributions that are defined. All right, so now we'll go into-- so we've sort of looked at understanding in fine-grained areas what our model is actually doing. What about sort of why, for an individual input, is it getting the answer right or wrong? And then are there changes to the inputs that look fine to humans but actually make the models do a bad job? So one study that I love to reference really draws back to our original motivation for using LSTM networks instead of simple recurrent neural networks, which was that they could use long context. But how long is your long- and short-term memory? And the idea of Khandelwal et al 2018 was to shuffle or remove contexts that are farther than some k words away, changing k. And if your accuracy, if the predictive ability of your language model, the perplexity, right, doesn't change once you do that, it means the model wasn't actually using that context. I think this is so cool. So on the x-axis, we've got how far away from the word that you're trying to predict are you actually sort of corrupting, shuffling, or removing stuff from the sequence? And then on the y-axis is the increase in loss. So if the increase in loss is 0, it means that the model was not using the thing that you just removed because if it was using it, it would now do worse without it, right? And so if you shuffle, in the blue line here, if you shuffle the history that's farther away than 50 words, the model does not even notice. I think that's really interesting. One, it says everything past 50 words of this LSTM language model, you could have given it in random order and it wouldn't have noticed. And then, two, it says that if you're closer than that, it actually is making use of the word order. That's a pretty long memory. OK, that's really interesting. And then if you actually remove the words entirely, you can kind of notice that the words are missing up to 200 words away. So you don't know the order-- you don't care about the order they're in, but you care whether they're there or not. And so this is an evaluation of, well, do LSTMs have long-term memory? Well, this one at least has effectively no longer than 200 words of memory, but also no less. So very cool. So that's a general study for a single model. It talks about its sort of average behavior over a wide range of examples. But we want to talk about individual predictions on individual inputs. So let's talk about that. So one way of interpreting, why did my model make this decision, that's very popular is, for a single example, what parts of the input actually led to the decision? And this is where we come in with saliency maps. So a saliency map provides a score for each word indicating its importance to the model's prediction. So you've got something like BERT here. You've got BERT. BERT is making a prediction for this mask. The [MASK] rushed to the emergency room to see her patient.
And the predictions that the model is making: it thinks with 47% probability that it's going to be nurse here in the mask position, or maybe woman, or doctor, or mother, or girl, OK. And then the saliency map is being visualized here in orange. According to this method of saliency called simple gradients, which we'll get into, "emergency," "her," and the SEP token-- let's not worry about the SEP token for now, but "emergency" and "her" are the important words, apparently. And the SEP token shows up in every sentence, so I'm not going to-- right. And so these two together are, according to this method, what's important for the model to make this prediction for the mask. And you can see maybe some statistics, biases, et cetera that it's picked up in the predictions and then have it mapped out onto the sentence. And this is-- well, it seems like it's really helping interpretability. And yeah, I think that this is sort of a very useful tool. Actually, this is part of a demo from AllenNLP that allows you to do this yourself for any sentence that you want. So what's this way of making saliency maps? We're not going to go-- there's so many ways to do it. We're going to take a very simple one and work through why it sort of makes sense. So the sort of issue is, how do you define importance, right? What does it mean to be important to the model's prediction? And here's one way of thinking about it. It's called the simple gradient method. Let's get a little formal. You've got words x1 to xn, OK, and then you've got a model's score for a given output class. So maybe you've got, in the BERT example, each output class with each output word that you could possibly predict. And then you take the norm of the gradient of the score with respect to each word. OK, so what we're saying here is the score is sort of the un-normalized probability for that class, OK? So you've got a single class, you're taking the score. It's, like, how likely it is, not yet normalized by how likely everything else is, sort of. Gradient, how much is it going to change if I move it a little bit in one direction or another? And then you take the norm to get a scalar from a vector. So it looks like this. So the salience of word i: you have the norm bars on the outside, the gradient with respect to x_i-- salience(i) = ||∇_{x_i} score(x)||. So that's, if I change x_i a little bit locally, how much does my score change? So the idea is that a high gradient norm means that if I were to change it locally, I'd affect the score a lot. And that means it was very important to the decision. Let's visualize this a little bit. So here on the y-axis, we've got loss, just the loss of the model. Sorry, this should be score, should be score. And on the x-axis, you've got word space. The word space is like sort of a flattening of the ability to move your word embedding in 1,000-dimensional space. I've just plotted it here in one dimension. And now a high saliency thing, you can see that the relationship between what should be score and moving the word in word space, you move it a little bit on the x-axis, and the score changes a lot. That's that derivative, that's the gradient. Awesome, love it. Low saliency, you move the word around locally, and the score doesn't change. So the interpretation is that means that the actual identity of this word wasn't that important to the prediction because I could have changed it and the score wouldn't have changed. Now, why are there more methods than this? Because, honestly, reading that, I was like, that sounds awesome. That sounds great.
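(Here is a minimal sketch of the simple gradients method just described, written for a HuggingFace-style classifier that accepts an inputs_embeds tensor; that interface and the shapes are assumptions for illustration, not a prescribed implementation.)

```python
import torch

def simple_gradient_saliency(model, input_embeds, target_class):
    """Simple-gradients saliency: the L2 norm of d(score)/d(word embedding).

    `model` is assumed to map an embedding tensor of shape (1, seq_len, hidden_dim)
    to un-normalized class scores via `.logits`; `target_class` indexes the class
    whose score we explain. Both are assumptions for this sketch.
    """
    embeds = input_embeds.clone().detach().requires_grad_(True)
    score = model(inputs_embeds=embeds).logits[0, target_class]  # un-normalized score
    score.backward()
    # One scalar per token: ||d score / d x_i||; higher means more "important".
    return embeds.grad[0].norm(dim=-1)
```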
There are sort of lots of issues with this kind of method, and lots of ways of getting around them. Here's one issue. It's not perfect because, well, maybe your linear approximation that the gradient gives you holds only very, very locally, right? So here the gradient is 0, so this is a low saliency word because I'm at the bottom of this parabola. But if I were to move it even a little bit in either direction, the score would shoot up. So is this not an important word? It seems important to be right there, as opposed to anywhere else, even sort of nearby, in order for the score not to go up. But the simple gradients method won't capture this because it just looks at the gradient, which is that 0 right there, OK? But if you want to look into more, there's a bunch of different methods that are sort of applied in these papers. And I think that it's a good tool for the toolbox, OK? So that is one way of explaining a prediction. And it has some issues, like why are individual words being scored, as opposed to phrases or something like that. But for now, we're going to move on to another type of explanation, and I'm going to check the time. OK, cool. Actually, yeah, let me pause for a second. Any questions about this? Earlier on, there were a couple of questions. One of them was, what are your thoughts on whether looking at attention weights is a methodologically rigorous way of determining the importance that the model places on certain tokens? It seems like there's some back and forth in the literature. That is a great question, and I probably won't engage with that question as much as I could if we had a second lecture on this. I actually will provide some attention analyses and tell you they're interesting, and then I'll sort of say a little bit about why they can be interesting without being sort of the end-all of analysis of where information is flowing in a transformer, for example. I think the debate is something that we would have to get into in a much longer period of time. But look at the slides that I show about attention and the caveats that I provide, and let me know if that answers your question first, because we have quite a number of slides on it. And if not, please, please, ask again, and we can chat more about it. Then maybe you can go on. Great, OK. So I think this is a really fascinating question, which also gets what was important about the input, but it actually kind of an even more direct way, which is, could I just keep some minimal part of the input and get the same answer? So here's an example from SQuAD. You have this passage, in 1899 John Jacob Astor IV invested $100,000 for Tesla. OK, and then the answer that is being predicted by the model is going to always be in blue, in these examples, Colorado Springs experiments. So you've got this passage. And the question is, what did Tesla spend Astor's money on? That's why the prediction is Colorado Springs experiments. The model gets the answer right, which is nice. And we would like to think it's because it's doing some kind of reading comprehension. But here's the issue. It turns out, based on this fascinating paper, that if you just reduced the question to, did, you actually get exactly the same answer. In fact, with the original question, the model had sort of a 0.78 confidence probability in that answer. And with the reduced question, did, you get even higher confidence. 
And that, if you give a human this, they would not be able to know really what you're trying to ask about, so it seems like something is going really wonky here. So here's sort of a very high level overview of the method. In fact, it actually references our input saliency methods. Ah, nice, it's connected. So you iteratively remove non-salient or unimportant words. So here's a passage again talking about football, I think. Yeah, oh, nice. So the question is, where did the Broncos practice for the Super Bowl? It has the prediction of Stanford University. And that is correct. So again, seems nice. And now we're not actually going to get the model to be incorrect. We're just going to say, how can I change this question such that it'll still get the answer right? So I'm going to remove the word that was least important according to the saliency method. So now it's, where did the practice for the Super Bowl? Already, this is sort of unanswerable because you've got two teams practicing. You don't even know which one you're asking about. So why the model still thinks it's so confident in Stanford University makes no sense. But you can just sort of keep going. And now I think here the model stops being confident in the answer Stanford University. But I think this is really interesting just to show that if the model is able to do this with very high confidence, it's not reflecting the uncertainty that really should be there because you can't know what you're even asking about. OK, so what was important to make this answer? Well, at least these parts were important because you could keep just those parts and get the same answer. Fascinating. All right, so that's sort of the end of the admittedly brief talk section on thinking about input saliency methods and similar things. Now we're going to talk about actually breaking models and understanding models by breaking them. OK, cool. So if we have a passage here, Peyton Manning became the first quarterback something Super Bowl, age 39, past record held by John Elway. Again, we're doing question answering. We've got this question, what was the name of the quarterback who was 38 in the Super Bowl? The prediction is correct, looks good. Now we're not going to change the question to try to sort of make the question nonsensical while keeping the same answer. Instead, we're going to change the passage by adding this sentence at the end, which really shouldn't distract anyone. This is, well-known quarterback Jeff Dean had jersey number 37 in Champ Bowl. So this just doesn't-- it's really not even related, but now the prediction is Jeff Dean for our nice QA model. And so this shows as well that it seems like maybe there's this end of the passage bias as to where the answer should be, for example. And so this is an adversarial example where we flipped the prediction by adding something that is innocuous to humans. And so sort of the higher level takeaway is, oh, it seems like the QA model that we had that seemed good is not actually performing QA how we want it to, even though its in-domain accuracy was good. And here's another example. So you've got this paragraph with the question, what has been the result of this publicity? The answer is, increased scrutiny on teacher misconduct. Now, instead of changing the paragraph, we're going to change the question in really, really seemingly insignificant ways to change the model's prediction. So first, what ha-- and then I've got this typo, L-- been the result of this publicity. The answer changes to teacher misconduct.
Likely a human would sort of ignore this typo or something and answer the right answer. And then this is really nuts. Instead of asking, what has been the result of this publicity, if you ask, what's been the result of this publicity, the answer also changes. And the authors call this a semantically equivalent adversary. This is pretty rough. And in general, swapping "what" for "what's" in this QA model breaks it pretty frequently. And so, again, when you go back and sort of re-tinker how to build your model, you're going to be thinking about these things, not just the sort of average accuracy. So that's sort of talking about noise. Are models robust to noise in their inputs? Are humans robust to noise is another question we can ask. And so you can kind of go to this popular sort of meme passed around the internet from time to time, where you have all the letters in these words scrambled. You say, according to research at Cambridge University, it doesn't matter in what order the letters in a word are, right? And so it seems like-- I think I did a pretty good job there-- seemingly, right, we've got this noise. That's a specific kind of noise. And we can be robust as humans to reading and processing the language without actually all that much of a difficulty. So that's maybe something that we might want our models to also be robust to. And it's very practical as well. Noise is a part of all NLP systems inputs at all times. There's just no such thing effectively as having users, for example, and not having any noise. And so there's a study that was performed on some popular machine translation models where you train machine translation models in French, German, and Czech, I think all to English, and you get BLEU scores. These BLEU scores will look a lot better than the ones in your Assignment 4 because much, much more training data. The idea is these are actually pretty strong machine translation systems, and that's in in-domain clean text. Now, if you add character swaps, like the ones we saw in that sentence about Cambridge, the BLEU scores take a pretty harsh dive. Not very good. And even if you take a somewhat more natural sort of typo noise distribution here, you'll see that you're still getting 20-ish very high drops in BLEU score through simply natural noise. And so maybe you'll go back and retrain the model on more types of noise, and then you ask, oh, if I do that is it robust to even different kinds of noise? These are the questions that are going to be really important. And it's important to know that you're able to break your model really easily so that you can then go and try to make it more robust. Now, let's see, 20 minutes, awesome. Now we're going to, I guess-- so now we're going to look at the representations of our neural networks. We've talked about sort of their behavior and then whether we could sort of change or observe reasons behind their behavior. Now we'll go into less abstraction, look more at the actual vector representations that are being built by models, and we can answer a different kind of question, at the very least, than with the other studies. The first thing is related to the question I was asked about attention, which is that some modeling components lend themselves to inspection. Now, this is a sentence that I chose somewhat carefully, actually, because in part of this debate, right, are they interpretable components? We'll see, but they lend themselves to inspection in the following way. 
You can visualize them well and you can correlate them easily with various properties. So let's say you have attention heads in BERT. This is from a really nice study that was done here where you look at attention heads of BERT and you say, on most sentences, this attention head-- head 1, 1-- seems to do this very sort of global aggregation, simple kind of operation, and does this pretty consistently. That's cool. Is it interpretable? Well, maybe, right? So it's the first layer, which means that this word, "found," is sort of uncontextualized. But in deeper layers, the problem is that once you do some rounds of attention, you've had information mixing and flowing between words. And how do you know exactly what information you're combining, what you're attending to, even? It's a little hard to tell. And saliency methods more directly sort of evaluate the importance of models. But it's still interesting to see at sort of a local mechanistic point of view what kinds of things are being attended to. So let's take another example. Some attention heads seemed to perform simple operations. So you have the global aggregation here that we saw already. Others seem to attend pretty robustly to the next token, cool. Next token is a great signal. Some heads attend to the SEP token, so here you have it attending to SEP. And then maybe some attend to periods. Maybe that's sort of splitting up sentences and things like that-- not things that are hard to do, but things that some attention heads seemed to pretty robustly perform. Again, now, though, deep in the network, what's actually represented at this period at layer 11? Little unclear, little unclear, OK? So some heads, though, are correlated with really interesting linguistic properties. So this head is actually attending to noun modifiers. So you've got this, the complicated language in the huge new law, right? That's pretty fascinating. Even if the model is not doing this as a causal mechanism to do syntax necessarily, the fact that these things so strongly correlate is actually pretty cool. And so what we have in all of these studies is we've got an approximate interpretation and quantitative analysis relating, allowing us to reason about very complicated model behavior. They're all approximations, but they're definitely interesting. One other example is that of coreference. So we saw some work on coreference, and it seems like this head does a pretty OK job of actually matching up coreferent entities. These are in red-- talks, negotiations, she, her. And that's not obvious how to do that. This is a difficult task, and it does so some percentage of the time. And again, it's sort of connecting very complex model behavior to these sort of interpretable summaries of correlating properties. Other cases, you can have individual hidden units that lend themselves to interpretation. So here you've got a character level LSTM language model. Each row here is a sentence. If you can't read it, that's totally OK. The interpretation that you should take is that, as we walk along the sentence, this single unit is going from, I think, very negative to very positive or very positive to very negative. I don't really remember. But it's tracking the position in the line. So it's just a linear positioning unit, and pretty robustly doing so across all of these sentences. So this is from a nice visualization study way back in 2016, way back. Here's another cell from that same LSTM language model that seems to sort of turn on inside quotes.
So here's a quote, and then it turns on. So I guess that's positive in the blue. End quote here, and then it's negative. Here you start with no quote, negative in the red, see a quote, and then blue. Seems, again, very interpretable, also potentially a very useful feature to keep in mind. And this is just an individual unit in the LSTM that you can just look at and see that it does this. Very, very interesting. Even further on this, and this is actually a study by some AI and neuroscience researchers, is we saw that LSTMs were good at subject-verb number agreement. Can we figure out the mechanisms by which the LSTM is solving the task? Can we actually get some insight into that? And so we have a word-level language model. And the word-level language model is going to be a little small, but you have a sentence, the boy gently and kindly greets the. And this cell that's being tracked here, so it's an individual hidden unit, one dimension, right, is actually, after it sees "boy," it sort of starts to go higher. And then it goes down to something very small once it sees "greets." And this cell seems to correlate with the scope of a subject-verb number agreement instance effectively. So here, the boy that watches the dog that watches the cat greets, you've got that cell again staying high, maintaining the scope of subject until "greets," at which point it stops. What allows it to do that? Probably some complex other dynamics in the network, but it's still a fascinating, I think, insight. And yeah, this is just neuron 1,150 in this LSTM. So those are sort of all observational studies that you could do by picking out individual components of the model, that you can sort of just take each one of and correlating them with some behavior. Now we'll look at a general class of methods called probing by which we still sort of use supervised knowledge, like knowledge of the type of coreference that we're looking for. But instead of seeing if it correlates with something that's immediately interpretable, like a attention head, we're going to look into the vector representations of the model and see if these properties can be read out by some simple function to say, oh, maybe this property was made very easily accessible by my neural network. So let's dig into this. So the general paradigm is that you've got language data that goes into some big pretrained transformer with fine tuning, and you get state of the art results. SOTA means State Of The Art, right? And so the question for the probing sort of methodology is, if it's providing these general purpose language representations, what does it actually encode about language? Can we quantify this? Can we figure out what kinds of things it's learning about language that we seemingly now don't have to tell it? And so you might have something like a sentence, like, I record the record. That's an interesting sentence. And you put it into your transformer model with its word embeddings at the beginning, maybe some layers of self attention and stuff, and you make some predictions. And now our objects of study are going to be these intermediate layers, right? So it's a vector per word or subword for every layer. And the question is, can we use these linguistic properties, like the dependency parsing that we had way back in the early part of the course, to understand sort of correlations between properties and the vectors and these things that we can interpret? We can interpret dependency parses. So there are a couple of things that we might want to look for here. 
We might want to look for semantics. So here in the sentence, "I record the record," I am an agent. That's a semantics thing. Record is a patient. It's the thing I'm recording, OK? You might have syntax, so you might have the syntax tree that you're interested in, that's the dependency parse tree. Maybe you're interested in part of speech, right, because you have "record" and "record," and the first one's a verb, the second one is a noun, they're identical strings. Does the model sort of encode that one is one and the other is the other? So how do we do this kind of study? So we're going to decide on a layer that we want to analyze and we're going to freeze BERT. So we're not going to fine tune BERT. All the parameters are frozen. So we decide on layer 2 of BERT. We're going to pass in some sentences. We decide on what's called a probe family. And the question I'm asking is, can I use a model from my family, say linear, to decode a property that I'm interested in really well from this layer? So it's indicating that this property is easily accessible to linear models, effectively. So maybe I train the model, I train a linear classifier, right, on top of BERT, and I get a really high accuracy. And that's sort of interesting already because you know from prior work in part of speech tagging that if you run a linear classifier on simpler features that aren't BERT, you probably don't get as high an accuracy. So that's an interesting sort of takeaway. But then you can also take a baseline. So I want to compare two layers now. So I've got layer 1 here, I want to compare it to layer 2. I train a probe on it as well. Maybe the accuracy isn't as good, and now I can say, oh, wow, look. By layer 2, part of speech is more easily accessible to linear functions than it was at layer 1. So what did that? Well, the self-attention and feed forward stuff made it more easily accessible. That's interesting because it's a statement about sort of the information processing of the model. OK, so we're going to analyze these layers. Let's take a second more to think about it and just really-- give me just a second. So if you have the model's representations h1 to ht, and you have a function family f, that's the subset linear models, or maybe you have like a feed forward neural network, some fixed set of hyperparameters, freeze the model, train the probe. So you get some predictions for part of speech tagging or whatever. That's just the probe applied to the hidden state of the model. The probe is a member of the probe family. And then the extent that we can predict y is a measure of accessibility. So that's just kind of written out not as pictorially, OK? So I'm not going to stay on this for too much longer. And it may help in the search for causal mechanisms, but it sort of just gives us a rough understanding of processing of the model and what things are accessible at what layer. So what are some results here? So one result is that BERT, if you run linear probes on it, does really, really well on things that require syntax and part of speech and named entity recognition, actually, in some cases, approximately as well as just doing the very best thing you could possibly do without BERT. So it just makes easily accessible amazingly strong features for these properties, and that's an interesting sort of emergent quality of BERT, you might say. It seems like as well that the layers of BERT have this property where, so if you look at the columns of this plot here, each column is a task. 
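To make that recipe concrete, here is a minimal sketch, not from the lecture, of what training a linear probe might look like in PyTorch. The names reps and labels, and their shapes, are assumptions standing in for real frozen layer representations and part-of-speech tags.

```python
import torch
import torch.nn as nn

# Hypothetical inputs: frozen layer-2 BERT vectors and integer POS tag ids.
# reps is detached, so the pretrained model itself is never updated.
hidden_dim, num_tags = 768, 17
reps = torch.randn(1000, hidden_dim)          # stand-in for real frozen representations
labels = torch.randint(0, num_tags, (1000,))  # stand-in for real POS labels

probe = nn.Linear(hidden_dim, num_tags)       # the probe family: linear models
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    logits = probe(reps)                      # only the probe's parameters get gradients
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

accuracy = (probe(reps).argmax(-1) == labels).float().mean()
print(f"probe accuracy: {accuracy:.3f}")      # "accessibility" of POS at this layer
```

With that training loop in mind, the plot described next is essentially this accuracy computed layer by layer and task by task.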
You've got input words at the sort of layer 0 of BERT here. Layer 24 is the last layer of BERT-large. Lower performance is yellow, higher performance is blue. And I know the resolution isn't perfect, but consistently the best place to read out these properties is somewhere a bit past the middle of the model, which is it's a very consistent rule, which is fascinating. And then it seems as well, if you look at this function of increasingly abstract or increasingly difficult to compute linguistic properties on this axis, and increasing depth in the network on that axis, so the deeper you go in the network, it seems like the more easily you can access more and more abstract linguistic properties, suggesting that that accessibility is being constructed over time by the layers of processing of BERT. So it's building more and more abstract features, which I think is, again, sort of a really interesting result. And now I think-- one thing that I think comes to mind that really brings us back right to day one is we built intuitions around Word2vec. We were asking, what does each dimension of Word2vec mean? And the answer was not really anything. But we could build intuitions about it and think about properties of it through sort of these connections between simple mathematical properties of Word2vec and linguistic properties that we could sort of understand. So we had this approximation, which is not 100% true, but some approximation that says, cosine similarity is effectively correlated with semantic similarity. Think about even if all we're going to do at the end of the day is fine tune these word embeddings anyway, likewise we had this sort of idea about the analogies being encoded by linear offsets. So some relationships are linear in space, and they didn't have to be. That's fascinating. It's this emergent property that we've now been able to study since we discovered this. Why is that the case in Word2vec? And in general, even though you can't interpret the individual dimensions of Word2vec, these sort of emergent, interpretable connections between approximate linguistic ideas and sort of simple math on these objects is fascinating. And so one piece of work that sort of extends this idea comes back to dependency parse trees. So they describe the syntax of sentences. And in a paper that I did with Chris, we showed that, actually, BERT and models like it make the dependency parse tree structure emergent, sort of more easily accessible than one might imagine in its vector space. So if you've got a tree right here, the chef who ran to the store was out of food, what you can sort of do is think about the tree in terms of distances between words. So you've got the number of edges in the tree. Between two words is their path distance. So you've got the distance between "chef" and "was" is 1. And we're going to use this interpretation of a tree as a distance to make a connection with BERT's embedding space. And what we were able to show is that under a single linear transformation, the squared Euclidean distance between BERT vectors for the same sentence actually correlates well, if you choose the B matrix right, with the distances in the tree. So here in this Euclidean space that we've transformed, the approximate distance between "chef" and "was" is also 1. Likewise, the difference between "was" and "store" is 4 in the tree. And in my simple sort of transformation of BERT space, the distance between "store" and "was" is also approximately 4, and this is true across a wide range of sentences. 
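Here is a rough sketch of that idea, assuming a single sentence's vectors and gold tree distances are already available as tensors; it is a simplification of the structural probe, not the exact recipe from the paper.

```python
import torch

# Hypothetical inputs for one sentence: BERT vectors h (T, d) and gold tree
# path distances d_tree (T, T) computed from the dependency parse.
T, d, k = 10, 768, 64
h = torch.randn(T, d)
d_tree = torch.randint(1, 6, (T, T)).float()
d_tree = (d_tree + d_tree.T) / 2              # symmetric stand-in distances
d_tree.fill_diagonal_(0)

B = torch.randn(k, d, requires_grad=True)     # the single linear transformation
optimizer = torch.optim.Adam([B], lr=1e-3)

for step in range(100):
    optimizer.zero_grad()
    z = h @ B.T                               # (T, k): transformed vectors
    diff = z.unsqueeze(0) - z.unsqueeze(1)    # (T, T, k): pairwise differences
    d_pred = (diff ** 2).sum(-1)              # squared L2 distances in probe space
    loss = (d_pred - d_tree).abs().mean()     # push predicted distances toward tree distances
    loss.backward()
    optimizer.step()
```

If the loss can be driven low with a single matrix B, the tree distances really are encoded, approximately, as squared distances in a linear transform of the embedding space.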
And this is, to me, a fascinating example of, again, emergent approximate structure in these very nonlinear models that don't necessarily need to encode things so simply. OK, all right, great. So probing studies and correlation studies are, I think, interesting and point us in directions to build intuitions about models. But they're not arguments that the model is actually using the thing that you're finding to make a decision. They're not causal studies. And this is for probing and correlation studies. So in some work that I did around the same time, we showed, actually, that certain conditions on probes allow you to achieve high accuracy on a task that's effectively just fitting random labels. And so there's a difficulty of interpreting what the model could or could not be doing with this thing that is somehow easily accessible. It's interesting that this property is easily accessible, but the model might not be doing anything with it, for example, because it's totally random. Likewise, another paper showed that you can achieve high accuracy with a probe, even if the model is trained to know that thing that you're probing for is not useful. And there's causal studies that sort of try to extend this work. It's much more difficult, but read this paper, and it's a fascinating line of future work. Now in my last two minutes, I want to talk about recasting model tweaks and ablations as analysis. So we had this improvement process where we had a network that was going to work OK, and we would see whether we could tweak it in simple ways to improve it. And then you could see whether you could remove anything and have it still be OK, and that's kind of like analysis. I have my network. Do I want it to-- is it going to be better if it's more complicated? Is it going to be better if it's simpler? Can I get away with it being simpler? And so one example of some folks who did this is they took this idea of multiheaded attention and said, oh, so many heads. Are all the heads important? And what they showed is that if you train a system with multiheaded attention, and then just remove the heads at test time and not use them at all, you can actually do pretty well on the original task, not retraining, at all without some of the attention heads, showing that they weren't important. You could just get rid of them after training. And likewise, you can do the same thing for-- this is on machine translation, this is on Multi-NLI. You can actually get away without a large, large percentage of your attention heads. Let's see. Yeah, so another thing that you could think about is questioning sort of the basics of the models that we're building. So we have transformer models that are sort of self-attention, feed forward, self-attention, feed forward. But why in that order, with some of the things omitted here? And this paper asked this question and said, if this is my transformer, self-attention, feed forward, self-attention, feed forward, et cetera, et cetera, et cetera, what if I just reordered it so that I had a bunch of self-attentions at the head and a bunch of feed forwards at the back? And they tried a bunch of these orderings, and this one actually does better. So this achieves a lower perplexity on a benchmark. And this is a way of analyzing what's important about the architectures that I'm building and how can they be changed in order to perform better. 
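Going back to the head-ablation idea for a second, here is a toy sketch of what removing heads at test time can look like mechanically. The attention module below is a simplified stand-in, not the architecture from those papers; the head_mask argument is the only new ingredient.

```python
import torch
import torch.nn.functional as F

def multi_head_attention(x, Wq, Wk, Wv, n_heads, head_mask):
    """Simplified self-attention; head_mask[h] = 0.0 silences head h at test time."""
    T, d = x.shape
    dh = d // n_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    out = torch.zeros_like(x)
    for h in range(n_heads):
        sl = slice(h * dh, (h + 1) * dh)
        att = F.softmax(q[:, sl] @ k[:, sl].T / dh ** 0.5, dim=-1)
        out[:, sl] = head_mask[h] * (att @ v[:, sl])   # ablated heads contribute nothing
    return out

d, n_heads, T = 64, 8, 5
x = torch.randn(T, d)
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
full = multi_head_attention(x, Wq, Wk, Wv, n_heads, torch.ones(n_heads))
ablated = multi_head_attention(x, Wq, Wk, Wv, n_heads,
                               torch.tensor([1., 0., 1., 0., 1., 0., 1., 0.]))
```

The analysis then amounts to recomputing the task metric with different masks and seeing how little the numbers move, with no retraining at all.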
So neural models are very complex, and they're difficult to characterize and impossible to characterize with a single sort of statistic, I think, for your test set accuracy, especially in domain. And we want to find intuitive descriptions of model behaviors, but we should look at multiple levels of abstraction. And none of them are going to be complete. When someone tells you that their neural network is interpretable, I encourage you to engage critically with that. It's not necessarily false, but the levels of interpretability and what you can interpret, these are the questions that you should be asking because it's going to be opaque in some ways, almost definitely. And then bringing this-- this lens to your model building as you try to think about how to build better models, even if you're not going to be doing analysis as sort of one of your main driving goals. And with that, good luck on your final projects. I realize we're at time. The teaching staff is really appreciative of your efforts over this difficult quarter. And yeah, I guess there's a lecture left on Thursday, but yeah, this is my last one, so thanks, everyone.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_w_DL_Winter_2021_Lecture_4_Syntactic_Structure_and_Dependency_Parsing.txt
OK. So for today, we're actually going to take a bit of a change of pace from what the last couple of lectures have been about, and we're going to focus much more on linguistics and natural language processing. And so in particular, we're going to start looking at the topic of dependency parsing. And so this is the plan of what to go through today. So I'm going to start out by going through some ideas that have been used for the syntactic structure of languages, of constituency and dependency and introduce those. And then focusing in more on the dependency structure, and then going to look at dependency grammars and dependency treebanks. And then having done that, we're then going to move back into thinking about how to build natural language processing systems. And so I'm going to introduce the idea of transition-based dependency parsing. And then in particular having developed that idea, I'm going to talk about a way to build a simple but highly effective neural dependency parser. And so this simple and highly effective neural dependency parser is essentially what we'll be asking you to build in the third assignment. So in some sense, we're getting a little bit ahead of ourselves here because in week 2 of the class we teach you how to do both assignments two and three. But all of this material will come in really useful. Before I get underway, just a couple of announcements. So again for assignment 2, you don't yet need to use the PyTorch framework. But now's a good time to work on getting PyTorch installed for your Python programming. Assignment 3 is in part also an introduction to using PyTorch. It's got a lot of scaffolding included in the assignment. But beyond that, this Friday we've got a PyTorch tutorial and I thoroughly encourage you to come along to that as well. Look for it under the Zoom tab. And in the second half of the first day of week 4, we have an explicit class that partly focuses on the final projects and what the choices are for those. But it's never too late to start thinking about the final project and what kind of things you want to do for the final project. So do come meet with people. There are sort of resources on the course pages about what different TAs know about. I've also talked to a number of people about final projects, but clearly I can't talk to everybody. So I encourage you to also be thinking about what you want to do for final projects. OK. So what I wanted to do today was introduce how people think about the structure of sentences, and put structure on top of them to explain how human language conveys meaning. And so our starting point for meaning and essentially what we've dealt with word vectors up until now is we have words. And words are obviously an important part of the meaning of human languages. But for words in human languages, there's more that we can do with them in thinking about how to structure sentences. So in particular, the first most basic way that we think about words when we are thinking about how sentences are structured is we give to them what's called a part of speech. We can say that cat is a noun, by is a preposition, door is another noun, cuddly is an adjective. And then for the words the, if it was given a different part of speech, if you saw any parts of speech in school, it was probably you were told was an article. Sometimes that is just put into the class of adjectives. In modern linguistics and what you'll see in the resources that we use, words like the are referred to as determiners. 
And the idea is that there's a bunch of words that includes a and that, but also other words like this and that, or even every, which are words that occur at the beginning of something like the cuddly cat, which have a determinative function of sort of picking out which cats they're referring to. And so we refer to those as determiners. But it's not the case that when we want to communicate with language that we just have this word salad, where we say a bunch of words. We just say, whatever, leaking, kitchen, tap and let the other person put it together. We put words together in a particular way to express meanings. And so therefore, languages have larger units of putting meaning together. And the question is how we represent and think about those. Now in modern work, in particular in modern United States linguistics or even what you see in computer science classes when thinking about formal languages, the most common way to approach this is with the idea of context-free grammars, which you see at least a little bit of in 103 if you've done 103. What a linguist would often refer to as phrase structure grammars. And the idea there is to say, well, there are bigger units in languages that we refer to as phrases. So something like the cuddly cat is a cat with some other words modifying it. And so we refer to that as a noun phrase. But then we have ways in which phrases can get larger by building things inside phrases. So the door here is also a noun phrase. But then we can build something bigger around it with a preposition, such as by, and then we have a prepositional phrase. And in general, we can keep going. So we can then make something like the cuddly cat by the door and then the door is a noun phrase. The cuddly cat is a noun phrase. By the door is a prepositional phrase. But then when you put it all together, the whole of this thing becomes a bigger noun phrase. And so it's working with these ideas of nested phrases, what in context free grammar terms you would refer to as non-terminals. So noun phrase and prepositional phrase would be non-terminals in the context free grammar. We can build up a bigger structure of human languages. So let's just do that for a little bit to review what happens here. So we start off saying, OK you can say the cat and the dog. And so those are noun phrases, and so we want a rule that can explain those. So we could say a noun phrase goes to a determiner and then a noun. And then somewhere over the side, we have a lexicon. And in our lexicon we would say that dog is a noun and cat is a noun and a is a determiner, and the is a determiner. OK. So then we notice you can do a bit more than that. So you can say things like the large cat, a barking dog. So that suggests that in a noun phrase, after the determiner, there can optionally be an adjective and then there's the noun. And that can explain some things we can say. But we can also say the cat by the door or a barking dog in a crate. And so we can also put a prepositional phrase at the end, and that's optional. But you can combine it together with an adjective. For the example I gave like a barking dog on the table. And so this grammar can handle that. So then we'll keep on, and say, well, actually you can use multiple adjectives. So you can say a large barking dog, or a large barking cuddly cat. Maybe not. Well, sentences like that. So we have any number of adjectives, which we can represent with the star. What's referred to as the Kleene star. So that's good. But I forgot a bit actually.
For by the door, I have to have a rule for producing by the door. So I also need a rule that's a prepositional phrase, goes to a preposition followed by a noun phrase. And so then I also have to have prepositions and that can be in, or on, or by. OK. And I can make other sentences of course with this as well like, the large crate on the table or something like that. Or the large crate on the large table. OK. So I chug along and then well, I could have something like talk to the cat. And so now I need more stuff. So talk is a verb and to still looks like a preposition. So I need to be able to make up something with that as well. OK. So what I can do is say I can also have a rule for a verb phrase that goes to a verb. And then after that, for something like talk to the cat that it can take a prepositional phrase after it. And then I can say that the verb goes to talk or walked. OK, then I can cover those sentences, whoops. OK. So that's the end of what I have here. So in this sort of a way, I'm handwriting a grammar. So here now I have this grammar and a lexicon. And for the examples that I've written down here, this grammar and this lexicon are sufficient to parse these sorts of fragments whose expansions I just wrote down. I mean, of course, there's a lot more to English than what you see here, right? So if I have something like the cat walked behind the dog, then I need some more grammar rules. So it seems then I need a rule that says I can have a sentence that goes to a noun phrase followed by a verb phrase. And I can keep on doing things of this sort. There's one question that Ruthanne asked, which was about what the brackets mean and whether the first NP is different from the second. So for this notation on the brackets here, I mean, this is actually a common notation that's used in linguistics. It's sort of in some sense a little bit different to traditional computer science notation since the star is used in both to mean 0 or more of something. So you could have 0, 1, 2, 3, 4, or 5 adjectives. Somehow it's usual in linguistics that when you're using the star you also put parentheses around that to mean it's optional. So sort of parentheses and star are used together to mean any number of something. When it's parentheses just by themselves, that's then meaning 0 or 1. And then, for the second question, are these two noun phrases different? No, they're both noun phrase rules. And so in our grammar we can have multiple rules that expand noun phrase in different ways. But actually in my example here, my second rule because I wrote it quite generally, it actually covers the first rule as well. So actually at that point, I can cross out this first rule because I don't actually need it in my grammar. But in general, you have a choice between writing multiple rules for what a noun phrase goes to, which effectively gives you a disjunction, or working out by various syntactic conventions how to compress them together. OK. So that was what gets referred to in natural language processing as constituency grammars, where the standard form of constituency grammar is a context free grammar of the sort that I trust you saw at least a teeny bit of either in CS103 or something like a programming languages, compilers, formal languages class. There are other forms of grammars that also pick out constituency. There are things like tree adjoining grammars, but I'm not going to really talk about any of those now.
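If you want to play with a toy grammar like this yourself, here is a small sketch using NLTK, assuming it is installed; the rules just spell the disjunctions out explicitly instead of using the parentheses-and-star shorthand, and the particular lexicon is only illustrative.

```python
import nltk

# A tiny constituency grammar in the spirit of the one built up in the lecture.
grammar = nltk.CFG.fromstring("""
  S   -> NP VP
  NP  -> Det N | Det Adj N | NP PP
  VP  -> V | V NP | V PP
  PP  -> P NP
  Det -> 'the' | 'a'
  Adj -> 'cuddly' | 'large' | 'barking'
  N   -> 'cat' | 'dog' | 'door' | 'crate' | 'table'
  V   -> 'talk' | 'walked'
  P   -> 'by' | 'to' | 'in' | 'on'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the cuddly cat walked to the door".split()):
    tree.pretty_print()   # prints the nested phrase structure for the sentence
```

Printing all parses for a sentence with several prepositional phrases at the end is also an easy way to see the attachment ambiguity that comes up later in the lecture.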
What I actually want to present now is a somewhat different way of looking at grammar, which is referred to as dependency grammar, which puts the dependency structure over sentences. Now actually, it's not that these two ways of looking at grammar have nothing to do with each other. I mean, there's a whole formal theory about the relationships between different kinds of grammars. And you can very precisely state relationships and isomorphisms between different grammars of different kinds. But on the surface, these two kinds of grammars look sort of different and emphasize different things. And for reasons of this sort of closeness to picking out relationships in sentences and their ease of use, it turns out that in modern natural language processing, starting I guess around 2000, so really in the last 20 years, NLP people have really swung behind dependency grammar. So if you look around now where people are using grammars in NLP, by far the most common thing that's being used is dependency grammars. So I'm going to teach us today a bit about those and what we're going to build in assignment 3 is building using supervised learning a neural dependency parser. So the idea of dependency grammar is that when we have a sentence, what we're going to do is we're going to say for each word what other words modify it. So what we're going to do is when we say the large crate, we're going to say OK, well, large is modifying crate and the is modifying crate. In the kitchen, the is modifying kitchen. By the door, the is modifying door. And so I'm showing a modification, the dependency or an attachment relationship by drawing an arrow from the head to what's referred to in dependency grammar as the dependent. The thing that modifies, further specifies or attaches to the head. OK. So that's the start of this. Well, another dependency is look in the large crate. That is, where you're looking is in the large crate. So you're going to want to have in the large crate as being a dependent of look. And so that's also going to be a dependency relationship here. And then there's one final bit that might seem a little bit confusing to people, and that's actually when we have these prepositions. There are two ways that you can think that this might work. So if it was something like, look in the crate. It seems like the is a dependent of crate. But you could think that you want to say, look in and it's in the crate and give this dependency relationship with the sort of preposition as sort of thinking of it as the head of what was before our prepositional phrase. And that's a possible strategy in the dependency grammar. But what I'm going to show you today and what you're going to use in the assignment is dependency grammars that follow the representation of universal dependencies. And universal dependencies is a framework which I was actually involved in creating, which was set up to try and give a common dependency grammar over many different human languages. And in the design decisions that were made in the context of designing universal dependencies, what we decided was that where in some languages you use prepositions, lots of other languages make much more use of case markings. So if you've seen something like German, you've seen more case markings like genitive and dative cases. And in other languages like Latin or Finnish, or lots of Native American languages, you have many more case markings again which cover most of the role of prepositions.
So in universal dependencies, essentially in the crate is treated like a case marked noun. And so what we say is that the in is also a dependent of crate, and then you're looking in the crate. So in the structure we adopt in is a dependent of crate. This in is a dependent of kitchen. This by is a dependent of door. And then we have these prepositional phrases in the kitchen by the door and we want to work out well what they modify. Well, in the kitchen is modifying crate, right, because it's a crate in the kitchen. So we're going to say that this piece is a dependent of crate. And then well, what about by the door? Well, it's not really meaning that's the kitchen by the door. And it's not meaning to look by the door. Again, it's a crate by the door, and so what we're going to have is the crate also has door as a dependent. And so that gives us our full dependency structure of this sentence. OK. And so that's a teeny introduction to syntactic structure. I'm going to say a bit more about it and give a few more examples. But let me just for a moment sort of say a little bit about why are we interested in syntactic structure? Why do we need to know the structure of sentences? And this gets into how does human languages work. So human languages can communicate very complex ideas. I mean in fact, anything that humans know how to communicate to one another, they communicate pretty much by using words. So we can structure and communicate very complex ideas, but we can't communicate a really complex idea by one word. We can't just choose a word like empathy and say it with a lot of meaning. Say empathy and the other person is meant to understand everything about what that means, right? We have to compose a complex meaning that explains things by putting words together into bigger units. And the syntax of a language allows us to put words together into bigger units, where we can build up and convey to other people a complex meaning. And so then the listener doesn't get this syntactic structure, right? The syntactic structure of the sentence is hidden from the listener. All the listener gets is a sequence of words one after another bang, bang, bang. So the listener has to be able to do what I was just trying to do in this example. That as the sequence of words comes in, that the listener works out which words modify which other words. And therefore, can construct the structure of the sentence and hence the meaning of the sentence. And so in the same way, if we want to build clever neural net models that can understand the meaning of sentences, those clever neural net models also have to understand what is the structure of the sentence so that they can interpret the language correctly. And we'll go through some examples and see more of that. OK. So the fundamental point that we're going to spend a bit more time on is that these choices of how you build up the structure of a language change the interpretation of the language. And a human listener or equally a natural language understanding program has to make in a sort of probabilistic fashion choices as to which words modify i.e., depend upon which other words, so that they're coming up with the interpretation of the sentence that they think was intended by the person who said it. OK. So to get a sense of this and how sentence structure is interesting and difficult. What I'm going to go through now is a few examples of different ambiguities that you find in natural language. And I've got some funny examples from newspaper headlines. 
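Before getting to those examples, here is what that dependency structure amounts to as data: a map from each word to its single head, plus a relation label, with a fake ROOT at position 0. This encoding and the label names are just illustrative, and there is a small sanity check that the arcs form a tree.

```python
# 0 is the fake ROOT; indices 1..n are the words in order.
words = ["ROOT", "Look", "in", "the", "large", "crate",
         "in", "the", "kitchen", "by", "the", "door"]

# head[i] = index of the head of word i; rel[i] = relation label (labels are illustrative).
head = {1: 0, 2: 5, 3: 5, 4: 5, 5: 1, 6: 8, 7: 8, 8: 5, 9: 11, 10: 11, 11: 5}
rel  = {1: "root", 2: "case", 3: "det", 4: "amod", 5: "obl",
        6: "case", 7: "det", 8: "nmod", 9: "case", 10: "det", 11: "nmod"}

def is_tree(head):
    """Every word has exactly one head, and following heads always reaches ROOT (no cycles)."""
    for node in head:
        seen, cur = set(), node
        while cur != 0:
            if cur in seen:
                return False
            seen.add(cur)
            cur = head[cur]
    return True

print(is_tree(head))  # True
for dep, h in sorted(head.items()):
    print(f"{words[h]} -> {words[dep]} ({rel[dep]})")
```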
But even though they're funny headlines, these are all real natural language ambiguities that you find throughout natural language. Well, at this point I should say, this is where I'm being guilty of saying "for language" but meaning "for English." Some of these ambiguities you find in lots of other languages as well, but which ambiguities arise from syntactic structure partly depends on the details of the language. So different languages have different syntactic constructions, different word orders, and different amounts of words having different forms, like case markings. And so depending on those details there might be different ambiguities. So here's one ambiguity, which is one of the commonest ambiguities in English. So San Jose cops kill man with knife. So this sentence has two meanings. Either it's the San Jose cops who are killing a man, and they're killing a man with a knife. And so that corresponds to a dependency structure, where the San Jose cops are the subject of killing, the man is the object of killing and then the knife is then the instrument with which they're doing the killing. So that the knife is an oblique modifier for the instrument of killing. And so that's one possible structure for this sentence. But it's probably not the right one. So what it actually probably was, was that it was a man with a knife and the San Jose cops killed the man. So that corresponds to the knife then being a noun modifier of the man. And then kill is still killing the man, so the man is the object of killing and the cops are still the subject. And so whenever you have a prepositional phrase like this that's coming further on in a sentence, there's a choice of how to interpret it. It could be either interpreted as modifying a noun phrase that comes before it, or it can be interpreted as modifying a verb that comes before it. So systematically in English you get these prepositional phrase attachment ambiguities throughout all of our sentences. But to give two further observations on that, the first observation is you encounter sentences with prepositional phrase attachment ambiguities every time you read a newspaper article, every time you talk to somebody. But most of the time, you never notice them. And that's because our human brains are incredibly good at considering the possible interpretations and going with the one that makes sense according to context. The second comment, as I said different human languages expose different ambiguities. So for example, this is an ambiguity that you normally don't get in Chinese. Because in Chinese, prepositional phrases modifying a verb are normally placed before the verb. And so therefore, you don't standardly get this ambiguity. But there are different other ambiguities that you find commonly in Chinese sentences. OK. So this ambiguity you find everywhere, because prepositional phrases are really common at the right ends of sentences. So here's another one. Scientists count whales from space. So that gives us these two possible interpretations that there are whales from space and scientists are counting them. And then the other one is how the scientists are counting the whales is that they're counting them from space and they're using satellites to count the whales, which is the correct interpretation that the newspaper hopes that you're getting. And this problem gets much, much more complex. Because many sentences in English have prepositional phrases all over the place. So here's the kind of boring sentence that you find in the financial news.
The board approved its acquisition by Royal Trustco Limited of Toronto for $27 a share at its monthly meeting. And well, if you look at the structure of this sentence what we find is here is a verb then here is the object noun phrase. So you've got the object noun phrase here. And then after that, what do we find? Well, we find a prepositional phrase, another prepositional phrase, another prepositional phrase, and another prepositional phrase. And how to attach each of these is then ambiguous. So the basic rule of how you can attach them is you can attach them to things to the left providing you don't create crossing attachments. So in principle by Royal Trustco Limited could be attached to either approved or acquisition. But in this case, by Royal Trustco Limited it's the acquirer. So it's a modifier of the acquisition. OK. So then we have of Toronto. So of Toronto could be modifying Royal Trustco Limited. It could be modifying the acquisition, or it can be modifying the approved. And in this case, the of Toronto is telling you more about the company, and so it's a modifier of Royal Trustco Limited. OK. So then the next one is for $27 a share. And that could be modifying Toronto Royal Trustco Limited, the acquisition or the approving. And well, in this case, that's talking about the price of the acquisition. So this one jumps back, and this is now a prepositional phrase that's modifying the acquisition. And then at the end, at its monthly meeting. , Well that's where the approval is happening by the board. So rather than any of these preceding four noun phrases, at its monthly meeting is modifying the approval. And so it attaches right back there. And this example is kind of too big and so I couldn't fit it in one line. But as I think maybe you can see that none of these dependencies cross each other, and they connect at different places ambiguously. So because we can chain these prepositions like this and attach them at different places like this, human language sentences are actually extremely ambiguous. If you have a sentence with k prepositional phrases at the end of it, where here we have k equals 4. The number of parses this sentence has, the number of different ways you can make these attachments is given by the Catalan numbers. So the Catalan numbers are an exponentially growing series, which arises in many treelike contexts. So if you're doing something like triangulations of a polygon, you get Catalan numbers. If you're doing triangulation and graphical models in CS228, you get Catalan numbers. But we don't need to worry about the details here. The central point is this is an exponential series. And so you're getting an exponential number of parses in terms of the number of prepositional phrases. And so in general, the number of parses human languages have is exponential in their length, which is kind of bad news. Because if you're then trying to enumerate all the parses, you might fear that you really have to do a ton of work. The thing to notice about structures like these prepositional phrase attachment ambiguities is that there's nothing that resolves these ambiguities in terms of the structure of the sentence. So if you've done something like looked at the kind of grammars that are used in compilers, that the grammars used in compilers for programming languages are mainly made to be unambiguous. And to the extent that there are any ambiguities, there are default rules that are used to say, choose this one particular parse tree for your piece of a programming language. 
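As a quick aside on those Catalan numbers, here is a tiny sketch that computes them; following the lecture's claim, Cat(k) is roughly the number of attachment structures when k prepositional phrases are chained at the end of a sentence.

```python
from math import comb

def catalan(n: int) -> int:
    """n-th Catalan number: C(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

for k in range(1, 9):
    print(k, catalan(k))
# 1 1, 2 2, 3 5, 4 14, 5 42, 6 132, 7 429, 8 1430 -- exponential growth in k
```

So with the four prepositional phrases in the board-approval sentence you already get on the order of 14 attachment structures, and the count keeps exploding from there.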
And human languages just aren't like programming languages in that way. They're globally ambiguous, and the listening human is just meant to be smart enough to figure out what was intended. So the analogy would be that in programming languages when you're working out what does an else clause modify, well, you've got the answer. That you can either look at the curly braces to work out what the else clause modifies or if you're using Python, you look at the indentation and it tells you what the else clause modifies. Whereas by contrast for human languages, it would be, just write down else something. Doesn't matter how you do it. You don't need parentheses, you don't need indentation. The human being will just figure out what the else clause is meant to pair up with. OK. Lots of other forms of ambiguities in human languages. So let's look at a few others. Another one that is very common over all sorts of languages is coordination and scope ambiguities. So here's a sentence, shuttle veteran and long time NASA executive Fred Gregory appointed to board. Well, this is an ambiguous sentence. There are two possible readings of this. One reading is that there are two people, there's a shuttle veteran and there's a long time NASA executive Fred Gregory and they were both appointed to the board. Two people. And the other possibility is there's someone named Fred Gregory, who's a shuttle veteran and long time NASA executive and they're appointed to the board. One person. And these two interpretations again, correspond to having different parse structures. So in one structure, we've got a coordination of the shuttle veteran and the long time NASA executive, Fred Gregory, coordinated together. In one case, these are coordinated and then Fred Gregory specifies the name of the NASA executive. So it's then specifying who that executive is. Whereas in the other one, the shuttle veteran and longtime NASA executive, altogether is then something that is a modifier of Fred Gregory. OK. So one time, this is the unit that modifies Fred Gregory. In the other one up here, just long time NASA executive modifies Fred Gregory. And then that's conjoined together with the shuttle veteran. And so that also gives different interpretations. So this is a slightly reduced example. Newspaper headlines tend to be more ambiguous than many other pieces of text, because they're written in this shortened form to get things to fit. And this is an especially shortened form, where it's actually left out an explicit conjunction. But this headline says doctor: no heart, cognitive issues. And this was I guess after Trump's first physical. And well, this is an ambiguity, because there are two ways that you can read this. You can either read this as saying doctor: no heart and cognitive issues, which gives you one interpretation. Instead of that, the way we should read it is that it's heart or cognitive. And so it's then saying no heart or cognitive issues. And we have a different narrower scope of the coordination, and then we get a different reading. OK. I want to give a couple more examples of different kinds of ambiguities. Another one you see quite a bit is when you have modifiers that are adjectives and adverbs, that there are different ways that you can have things modifying other things. This example is a little bit not safe for work, but here goes. Students get first hand job experience. So this is an ambiguous sentence. And again, we can think of it as a syntactic ambiguity in terms of which things modify which other things.
So the nice polite way to render this sentence is that first is modifying hand. So we've got first hand. It's job experience, so job is a compound noun modifying experience. And it's first hand experience, so first hand is then modifying experience. And then get is the object of -- sorry, first hand job experience is the object of get. And the students are the subject of get. But if you have a smuttier mind you can interpret this a different way. And in the alternative interpretation you then have hand going together with job. And the first is then a modifier of experience and job is still a modifier of experience. And so then you get this different parse structure and different interpretation there. OK. One more example. In a way this example's similar to the previous one. It's having modifier pieces that can modify different things. But rather than it just being with individual adjectives or individual adverbs, it's then much larger units, such as verb phrases can often have attachment ambiguities. So this sentence headline is mutilated body washes up on Rio beach to be used for Olympics beach volleyball. So we have this big verb phrase here of, to be used for Olympic beach volleyball. And then again, we have this attachment decision that we could either say that that big verb phrase is modifying, i.e., is attached to the Rio beach. Or we could say no, no, the to be used for Olympic beach volleyball, that is modifying the mutilated body. And it's a body that's to be used for the Olympics beach volleyball, which gives the funny reading. Yeah, so I hope that's given you at least a little bit of a sense of how human language syntactic structure is complex, ambiguous. And to work out the intended interpretations, you need to know something about that structure. In terms of how much you need to understand, I mean, this isn't a linguistics class. If you'd like to learn more about human language structure, you can go off and do a syntax class. But we're not really going to spend a lot of time working through language structure. But there will be some questions on this in the assignment. And so we're expecting that you can be at the level that you can have sort of some intuitions as to which words and phrases are modifying other words and phrases. And therefore, you could choose between two dependency analyses which one's correct. OK. I've spent quite a bit of time on that. So better keep going. OK. So the general idea is that knowing this sort of syntactic structure of a sentence can help us with semantic interpretation. I mean, as well as just generally saying, we can understand language it's also used in many cases for simple, practical forms of semantic extraction. So people such as in biomedical informatics often want to get out particular relations, such as protein-protein interactions. And well, here's a sentence the results demonstrated that KaiC interacts rhythmically with SasA, KaiA and KaiB. And commonly that people can get out those kind of relationships by looking at patterns of dependency relations with particular verbs. So for the interacts verb, if you have a pattern of something being the subject and something else being the noun modifier of interacts, well that's an interaction relationship. But it gets a bit more complicated than that as in this example, because often there are conjunctions. So you also have another pattern where you have also interactions between the subject and the noun modifiers conjunct, which will allow us to also find the KaiA and KaiB examples. OK. 
So I've given an informal tour of dependency grammar, so now let me just try and quickly say a little bit more about what, formally, a dependency grammar is. So in dependency syntax, what we say is that the syntactic structure of a sentence consists of relations between pairs of words. And it's a binary asymmetric relation i.e. we draw arrows between pairs of words, which we call dependencies. Now normally dependency grammars then type those grammatical relations, type those arrows, to express what kind of relation there is. And so they have some kind of taxonomy of grammatical relations. So we might have a subject grammatical relation, a verbal auxiliary grammatical relation, an oblique modifier grammatical relation. We have some kind of typology of grammatical relations. And we refer to the arrow as going between the head, which is the head here, and something that is a dependent of that. So the subject of a verb is the dependent of the verb. Or when you have a noun modifier, like sort of cuddly cat, we say that cuddly is a dependent of cat. And cat is the head of cuddly cat. And so normally dependencies, like in these examples, form a tree, formally. So it's not just any graph with arrows. We have a graph which is connected, acyclic, and has a single root. So here is the root of the graph. And so that gives us the dependency tree analysis. Dependency grammars have a really, really long history. So the famous first linguist was Panini, who wrote about the structure of Sanskrit. And mainly he worked on the sound system of Sanskrit and how sounds change in various contexts, which is what linguists call phonology. And the different forms of Sanskrit words. Sanskrit has rich morphology of inflecting nouns and verbs for different cases and forms. But he also worked a little on the syntactic structure of Sanskrit sentences. And essentially what he proposed was dependency grammar over Sanskrit sentences. And it turns out that sort of for most of recorded history when people have then gone on and tried to put structures over human sentences, what they have used is dependency grammars. So there was a lot of work in the first millennium by Arabic grammarians trying to work out the grammar structure of sentences. And effectively what they used was akin to what I've just presented as a dependency grammar. So compared to 2,500 years of history, the ideas of having context-free grammars and having constituency grammars are actually a really, really recent invention. So it was really sort of in the middle of the 20th century that the ideas of constituency grammar and context-free grammars were developed first by Wells in the '40s, and then by Noam Chomsky in the early '50s leading to things like the Chomsky hierarchy that you might see in CS103, or a formal languages class. So for modern work on dependency grammar using the terminology and notation that I've just introduced, that's normally attributed to Lucien Tesniere, who was a French linguist in around the sort of the middle of the 20th century as well. Dependency grammar was widely used in the 20th century in a number of places. I mean, in particular it tends to be much more natural and easier to think about for languages that have a lot of different case markings on nouns. Like nominative, accusative, genitive, dative, instrumental kind of cases like you get in a language like Latin or Russian. And a lot of those languages have much freer word order than English. In English, the subject has to be before the verb and the object has to be after the verb.
But lots of other languages have much freer word order, and instead use different forms of nouns to show you what's the subject or the object of the sentence. And dependency grammars can often seem much more natural for those kinds of languages. Dependency grammars were also prominent in the very beginnings of computational linguistics. So one of the first people working in computational linguistics in the US was David Hayes. So the professional society for computational linguistics is called the Association for Computational Linguistics, and he was actually one of the founders of the Association for Computational Linguistics. And he published in the early 1960s perhaps the first dependency parser. OK. Yeah, a little teeny note just in case you see other things. When you have these arrows, you can draw them in either direction. You could either draw arrows from the head to the dependent, or from the dependent to the head. And actually, different people have done one and the other, right? So the way Tesniere drew them was to draw them from the head to the dependent and we're following that convention. But if you're looking at something that somebody else has written with dependency arrows, the first thing you have to work out is are they using the arrow heads at the heads or the dependents. Now one other thing here is that a sentence is seen as having the overall head word of the sentence, which every other word of the sentence hangs off. It's a common convention to add this sort of fake root to every sentence that then points to the head word of the whole sentence, here "completed." That just tends to make the algorithm stuff easier. Because then you can say that every word of the sentence is dependent on precisely one other node, where what you can be dependent on is either another word in the sentence or the fake root of the sentence. And when we build our parses, we will introduce that fake root. OK. So that's dependency grammars and dependency structure. I now want to get us back to natural language processing and starting to build parses for dependency grammars. But before doing that, I just want to say, where do we get our data from? And that's actually an interesting story in some sense. So the answer to that is, well, what we do is get human beings, commonly linguists or other people who are actually interested in the structure of human sentences. And we get them to sit around and hand parse sentences and give them dependency structures. And we collect a lot of those parses and we call that a treebank. And so this is something that really only started happening in the late '80s and took off in a bigger way in the '90s. Until then, no one had attempted to build treebanks. Lots of people had attempted to build parsers. And it seemed like, well, if you want to build a parser, the efficient way to do it is to start writing a grammar. So you start writing some grammar rules and you start writing the lexicon with words and parts of speech, and you sit around working on your grammar. When I was a PhD student, one of my first summer jobs was spending the summer hand-writing a grammar. And it seems like writing a grammar is more efficient because you're writing this one general thing that tells you the structure of a human language.
But there's just been this massive sea change partly driven by the adoption of machine learning techniques, where it's now seen as axiomatic that the way to make progress is to have annotated data, namely here a treebank that shows you the structure of sentences. And so what I'm showing here is a teeny extract from a universal dependencies treebank. And so that's why I mentioned earlier that this has been this effort to try and have a common dependency grammar representation that you can apply to lots of different human languages. And so you can go over to this URL and see that there's about 60 different languages at the moment, which have universal dependencies treebanks. So why are tree banks good? I mean, it seems like it's bad news if you have to have people sitting around for weeks and months hand parsing sentences. It seems a lot slower and actually a lot less useful than having somebody writing a grammar, which just has a much bigger multiplier factor in the utility of their effort. It turns out that although that initial feeling seems sort of valid, that in practice there's just a lot more you can do with the treebank. So why are tree banks great? One reason is that tree banks are highly reusable. So typically, when people have written grammars, they've written grammars for one particular parser and the only thing that was ever used in is that one particular parser. But when you build a treebank, that's just a useful data resource and people use that for all kinds of things. So the well-known treebanks have been used by hundreds and hundreds of people. And although all treebanks were initially built for the purposes of, hey let's help natural language processing systems. It turns out that people have actually been able to do lots of other things with tree banks. So for example these days psycholinguists commonly use treebanks to get various kinds of statistics about data for thinking about psycholinguistic models. Linguists use tree banks for looking at patterns of different syntactic constructions that occur. That there's just been a lot of reuse of this data for all kinds of purposes. But they have other advantages that I mentioned here. When people are just sitting around saying, what sentences are good. They tend to only think of the core of language, where lots of weird things happen in language. And so if you actually just have some sentences and you have to go off and parse them, then you actually have to deal with the totality of language. Since you're parsing actual sentences, you get statistics. So you naturally get the kind of statistics that are useful to machine learning systems by constructing a treebank, where you don't get them for free if you hand write a grammar. But then a final way which is perhaps the most important of all is if you actually want to be able to do science of building systems, you need a way to evaluate these NLP systems. I mean, it seems hard to believe now that back in the -90s and -80s when people built NLP parsers, it was literally the case that the way they were evaluated was you said to your friend, I've built this parser. Type in a sentence on the terminal and see what it gives you back. It's pretty good, hey? And that was just the way business was done. Whereas what we'd like to know is well as I showed you earlier, English sentences can have lots of different parsers commonly. Can this system choose the right parses for particular sentences and therefore have the basis of interpreting them as a human being would? 
And well, we can only systematically do that evaluation if we have a whole bunch of sentences that have been hand parsed by humans with their correct interpretations. So the rise of treebanks turn parser building into an empirical science, where people could then compete rigorously on the basis of, look, my parser has 2% higher accuracy than your parser in choosing the correct parses for sentences. OK. So well, how do we build a parser once we've got dependencies? So there's sort of a bunch of sources of information that you could hope to use. So one source of information is looking at the words on either end of the dependency. So discussing issues that seems a reasonable thing to say, and so it's likely that issues could be the object of discussing. Whereas, if it was some other word, right, if you are thinking of making outstanding the object of discussion, discussing outstanding. That doesn't sound right. So that wouldn't be so good. A second source of information is distance. So most dependencies are relatively short distance. Some of them are. Some have long distance dependencies but they're relatively rare. The vast majority of dependencies are nearby. Another source of information is the intervening material. So there are certain things that dependencies rarely span. So clauses and sentences are normally organized around verbs. And so dependencies rarely span across intervening verbs. We can also use punctuation in written language, things like commas which can give some indication of the structure. And so punctuation may also indicate bad places to have long distance dependencies over. And there's one final source of information, which is what was referred to as valency, which is for a head what kind of information does it usually have around it? So if you have a noun, there are things that you just know about what kinds of dependence nouns normally have. So it's common that it will have a determiner to the left, the cat. On, the other hand, it's not going to be the case that there's a determiner to the right, cat the. That's just not what you get in English. On the left, you're also likely to have an adjectival modifier. That's why we had cuddly. But again, it's not so likely you're going to have the adjectival modifier over on the right for cuddly. So there are sort of facts about what things different kinds of words take on the left and the right. And so that's the valency of the heads, and that's also a useful source of information. OK. So what do we need to do using that information to build a parser? Well, effectively what we do is have a sentence, I'll give a talk tomorrow on neural networks. And what we have to do is say for every word in that sentence, we have to choose some other word that it's the dependent of, where one possibility it's a dependent of root. So we're giving it a structure, where we're saying OK, for this word, I've decided that it's dependent on networks. And then for this word, it's also dependent on networks. And for this word, it's a dependent on give. So we're choosing one for each word. And there are usually a few constraints. So only one word is a dependent of root. We have a tree. We don't want cycles. So we don't want to say that word A is dependent on word B, and word B is dependent on word A. And then there's one final issue, which is where the arrows can cross or not. So in this particular sentence, we actually have these crossing dependencies you can see there. I'll give a talk tomorrow on neural networks. 
And this is the correct dependency parse for this sentence. Because what we have here is that it's a talk and it's a talk on neural networks. So the on neural networks modifies the talk, which leads to these crossing dependencies. I didn't have to say it like that. I could have said, I'll give a talk on neural networks tomorrow. And then on neural networks would be next to the talk. So most of the time in languages, dependencies are projective, the things stay together, so the dependencies have a kind of a nesting structure of the kind that you also see in context-free grammars. But most languages have at least a few phenomena where you end up with this ability for phrases to be split apart, which leads to non-projective dependencies. So in particular, one of them in English is that you can take modifying phrases and clauses like the on neural networks here, and shift them right towards the end of the sentence, and get I'll give a talk tomorrow on neural networks. And that then leads to non-projective sentences. So a parse is projective if there are no crossing dependency arcs when the words are laid out in their linear order with all arcs above the words. And if you have a dependency parse that corresponds to a context-free grammar tree, it actually has to be projective because context-free grammars necessarily have this sort of nested tree structure following the linear order. But dependency grammars normally allow non-projective structures to account for displaced constituents. And you can't easily get the semantics of certain constructions right without these non-projective dependencies. So here's another example in English with question formation with what's called preposition stranding. So the sentence is, who did Bill buy the coffee from yesterday? There's another way I could have said this. It's less natural in English, but I could have said, from who did Bill buy the coffee yesterday? In many languages of the world, that's the only way you could have said it. And when you do that, from who is kept together and you have a projective parse for the sentence. But English allows, and indeed much prefers, you to do what is referred to as preposition stranding, where you move the who but you just leave the preposition behind. And so you get who did Bill buy the coffee from yesterday? And so then we're ending up with this non-projective dependency structure as I've shown there. OK. I'll come back to non-projectivity in a little bit. How do we go about building dependency parsers? Well, there are a whole bunch of ways that you can build dependency parsers. Very quickly, I ought to say a few names and I'll tell you about one of them. So you can use dynamic programming methods to build dependency parsers. So I showed earlier that you can have an exponential number of parses for a sentence and that sounds like really bad news for building a system. Well, it turns out that you can be clever and you can work out a way to dynamic program finding that exponential number of parses and then you can have an O(n³) algorithm. So you could do that. You can use graph algorithms and then I'll say a bit about that later. But that may spill into next time. So you can see, since we're wanting to kind of connect up all the words into a tree using graph edges, that you could think of doing that using a minimum spanning tree algorithm of the sort that you hopefully saw in CS161. And so that idea has been used for parsing.
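Before going on through the other families of parsing methods, here is a small sketch of the projectivity idea in code. This is my own illustration, not code from the lecture or the assignment: it just checks whether any two dependency arcs cross when drawn above the words, with the parse given as (head, dependent) pairs over 1-indexed word positions and 0 for the pseudo-root.

```python
# A minimal sketch: a parse is a list of (head, dependent) arcs over
# 1-indexed word positions, with 0 standing for the pseudo-root.
def is_projective(arcs):
    """Return True if no two dependency arcs cross when drawn above the words."""
    for h1, d1 in arcs:
        lo1, hi1 = sorted((h1, d1))
        for h2, d2 in arcs:
            lo2, hi2 = sorted((h2, d2))
            # Two arcs cross if exactly one endpoint of one arc falls
            # strictly inside the span of the other arc.
            if lo1 < lo2 < hi1 < hi2 or lo2 < lo1 < hi2 < hi1:
                return False
    return True

# "I'll give a talk tomorrow on neural networks"
# I'll(1) give(2) a(3) talk(4) tomorrow(5) on(6) neural(7) networks(8)
# (The exact head choices here are my rough rendering of the parse on the slide.)
arcs = [(2, 1), (0, 2), (4, 3), (2, 4), (2, 5), (8, 6), (8, 7), (4, 8)]
print(is_projective(arcs))  # False: talk -> networks (4, 8) crosses give -> tomorrow (2, 5)
```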
Constraint satisfaction ideas that you might have seen in CS221 have been used for dependency parsing. But the way I'm going to show now is transition-based parsing, or sometimes referred to as deterministic dependency parsing. And the idea of this is one's going to use a transition system, so that's like shift-reduce parsing. If you've seen shift-reduce parsing in something like a compilers class or a formal languages class, it has those shift and reduce transition steps. And so you use a transition system to guide the construction of parses. And so let me just explain about that. So let's see. So this was an idea that was made prominent by Joakim Nivre, who's a Swedish computational linguist who introduced this idea of greedy transition-based parsing. So his idea is, well, what we're going to do for dependency parsing is we're going to be able to parse sentences by having a set of transitions, which are like a shift-reduce parser's. And it's going to just work left to right, bottom up, and parse the sentence. So we're going to say we have a stack sigma, a buffer beta of the words that we have to process. And we're going to build up a set of dependency arcs by using actions, which are shift and reduce actions. And putting those together, this will give us the ability to put parse structures over sentences. And let me go through the details of this. And this is a little bit hairy when you first see it, but it's not so complex, really. And this kind of transition-based dependency parser is what we'll use in assignment 3. So this is our transition system. We have a starting point, where we start with a stack that just has the root symbol on it, and a buffer that has the sentence that we're about to parse. And so far, we haven't built any dependency arcs. And so at each point in time, we can choose one of three actions. We can shift, which moves the next word onto the stack. We can then do actions that are the reduce actions. So there are two reduce actions, because dependencies have a direction. We can either do a left arc reduce or a right arc reduce. So when we do either of those, we take the top two items on the stack and we make one of them a dependent of the other one. So we can either say, OK, let's make w_i a dependent of w_j. Or else we can say, OK, let's make w_j a dependent of w_i. And so the result of when we do that is the one that's the dependent disappears from the stack. And so in the stacks over here, there's one less item. But then we add a dependency arc to our arc set so that we say that we've got either a dependency from j to i or a dependency from i to j. And commonly when we do this, we actually also specify what grammatical relation connects the two, such as subject, object, noun modifier. And so we also have here a relation. That's probably still very abstract, so let's go through an example. So this is how a simple transition-based dependency parser, what's referred to as an arc-standard transition-based dependency parser, would parse up I ate fish. So remember these are the different operations that we can apply. So to start off with, we have root on the stack and the sentence in the buffer, and we have no dependency arcs constructed. So we have to choose one of the three actions. And when there's only one thing on the stack, the only thing we can do is shift. So we shift. And now the stack looks like this. So now we have to take another action. And at this point we have a choice, because we could immediately reduce.
So we could say, OK, let's just make I a dependent of root and we'd get a stack size of 1 again. But that would be the wrong thing to do because I isn't the head of the sentence. So what we should instead do is shift again and get I ate on the stack and fish still in the buffer. Well, at that point we keep on parsing a bit further. And so now what we can do is say, well, wait a minute. Now I is a dependent of ate and so we can do a left arc reduce and so I disappears from the stack. So here's our new stack. But we add to the set of arcs that we've added that I is the subject of ate. OK. Well, after that, we could reduce again because there are still two things on the stack, but that would be the wrong thing to do. The right thing to do next would be to shift fish onto the stack. And then at that point, we can do a right arc reduce, saying that fish is the object of ate, and add a new dependency to our dependency set. And then we can one more time do a right arc reduce to say that ate is the root of the whole sentence, and add in that extra root relation with our pseudo root. And at that point, we've reached the end condition. So the end condition was the buffer was empty and there's one thing, the root, on the stack. And at that point, we can finish. So this little transition machine does the parsing up of the sentence. But there's one thing that's left to explain still here, which is how do you choose the next action? So as soon as you have two things or more on the stack, what do you do next? You've always got a choice. You could keep shifting, at least if there are still things on the buffer, or you can do a left arc or you can do a right arc. And how do you know what choice is correct? And well, one answer to that is to say, well, you don't know what choice is correct. And that's why parsing is hard and sentences are ambiguous. You can do any of those things. You have to explore all of them. And well, if you naively explore all of them, then you do an exponential amount of work to parse the sentence. And that's essentially what people had done in the '80s and '90s-- explore every path. But in the early 2000s, Joakim Nivre's essential observation was, but wait a minute. We know about machine learning now. So why don't I try and train a classifier which predicts what the next action I should take is given this stack and buffer configuration. Because if I can write a machine learning classifier, which can nearly always correctly predict the next action given a stack and buffer, then I'm in a really good position. Because then I can build what's referred to as a greedy dependency parser, which just goes bang, bang, bang, word at a time. OK, here's the next thing. Run classifier, choose next action. Run classifier, choose next action. Run classifier, choose next action. So the amount of work that we're doing becomes linear in the length of the sentence rather than it being cubic in the length of the sentence using dynamic programming, or exponential in the length of the sentence if you don't use dynamic programming. So at each step, we predict the next action using some discriminative classifier. So starting off, he was using things like support vector machines, but it can be anything at all, like a softmax classifier that's closer to our neural networks.
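To make the transition machine concrete, here is a minimal sketch of the arc-standard system, written by me for illustration rather than taken from the assignment or from Nivre's implementation. In a real greedy parser, each action would come from the trained classifier applied to the current stack-and-buffer configuration; here the gold action sequence for "I ate fish" is simply replayed to show the machinery.

```python
# A minimal sketch of the arc-standard transition system described above.
def apply_transitions(words, actions):
    stack = ["ROOT"]        # start condition: just the root on the stack
    buffer = list(words)    # the sentence still to be processed
    arcs = []               # dependency arcs built so far: (head, dependent, label)

    for action, label in actions:
        if action == "SHIFT":
            stack.append(buffer.pop(0))       # move the next word onto the stack
        elif action == "LEFT-ARC":            # second-from-top becomes a dependent of the top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep, label))
        elif action == "RIGHT-ARC":           # top becomes a dependent of the new top
            dep = stack.pop()
            arcs.append((stack[-1], dep, label))
    assert not buffer and stack == ["ROOT"]   # end condition: buffer empty, only ROOT left
    return arcs

# Gold action sequence for the lecture's example; a greedy parser would instead
# call a classifier at each step to choose the action.
gold = [("SHIFT", None), ("SHIFT", None), ("LEFT-ARC", "nsubj"),
        ("SHIFT", None), ("RIGHT-ARC", "obj"), ("RIGHT-ARC", "root")]
print(apply_transitions(["I", "ate", "fish"], gold))
# [('ate', 'I', 'nsubj'), ('ate', 'fish', 'obj'), ('ROOT', 'ate', 'root')]
```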
And there are, for what I presented, either three classes, if you're just thinking of the two reduces and the shift, or, if you're also assigning a relation and you have a set of R relations, like 20 relations, then there'd be 2R plus 1, so sort of 41 moves that you could decide on at each point. And the features are effectively the configurations I was showing before. What's the top of the stack word? What part of speech is that? What's the first word in the buffer? What's that word's part of speech? Et cetera. And so in the simplest way of doing this, you're now doing no search at all. You are just sort of taking each configuration in turn, deciding the most likely next move, and making it. And that's a greedy dependency parser, which is widely used. You can do better if you want to do a lot more work. So you can do what's called a beam search, where you maintain a number of fairly good parse prefixes at each step. And you can extend them out further and then you can evaluate later on which of those seems to be the best. And so beam search is one technique to improve dependency parsing by doing a lot of work. And it turns out that although these greedy transition-based parsers are a fraction worse than the best possible ways known to parse sentences, they actually work very accurately-- almost as well-- and they have this wonderful advantage that they give you linear time parsing in terms of the length of your sentences and text. And so if you want to do a huge amount of parsing, they're just a fantastic thing to use, because you've then got an algorithm that scales to the size of the web. OK. So I'm kind of a little bit behind, so I guess I'm not going to get through all of these slides today and we'll have to finish out the final slides tomorrow. But just to push a teeny bit further, I'll just say a couple more things on sort of what Nivre did for the dependency parser and then I'll sort of introduce the neural form of that in the next class. So conventionally you had this stack and buffer configuration and you wanted to build a machine learning classifier. And so the way that was done was by using symbolic features of this configuration. And what kind of symbolic features did you use? You used these indicator features that picked out a small subset, normally one to three elements, of the configuration. So you'd have a feature that could be something like the thing on the top of the stack is the word good, which is an adjective. Or it could be the thing on the top of the stack is an adjective and the thing that's first in the buffer is a noun. Or it could just be looking at one thing and saying the first thing in the buffer is a verb. So you'd have all of these features. And because these features commonly involved words and commonly involved conjunctions of several conditions, you had a lot of features. And having mentions of words and conjunctions of conditions definitely helped to make these parsers work better. But nevertheless, because you had all of these sort of 1-0 symbolic features, you had a ton of such features. So commonly these parsers were built using something like a million to 10 million different features of sentences. And I mentioned already the importance of evaluation. Let me just sort of quickly say how these parsers were evaluated. So to evaluate a parser for a particular sentence, our test set was hand parsed in the treebank. So we have gold dependencies of what the human thought were right.
And so we can write those dependencies down as statements saying the first word is a dependent of the second word via a subject dependency. And then the parser is also going to make similar claims as to what's a dependent on what. And so there are two common metrics that are used. One is just, are you getting these dependency facts right? So both of these dependency facts match. And so that's referred to as the unlabeled accuracy score, where we're just sort of measuring accuracy: of all of the dependencies in the gold sentence-- remember, we have one dependency per word in the sentence, so here we have five-- how many of them are correct? And that's our unlabeled accuracy score of 80%. But a slightly more rigorous evaluation is to say, well, no, we're also going to label them and we're going to say that this is the subject. That's actually called the root. This one's the object. So these dependencies have labels and you also need to get the grammatical relation label right, and so that's then referred to as labeled accuracy score. And although I got those two right for that-- I guess according to this example, actually this is wrong, it looks like I got-- oh, no, this is wrong there. Sorry, that one's wrong there. OK. So I only got two of the dependencies correct in the sense that I both got what depends on what and the label correct. And so my labeled accuracy score is only 40%. OK. So I'll stop there now for the introduction to dependency parsing. And I still have an IOU, which is how we can then bring neural nets into this picture and how they can be used to improve dependency parsing. So I'll do that at the start of next time before then proceeding further into neural language models.
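As a small worked example of those two metrics, here is a sketch of how UAS and LAS might be computed, assuming gold and predicted parses are stored as one (head, relation) entry per word. The specific sentence and the predicted errors are made up for illustration and are not the example from the slide.

```python
# A minimal sketch of UAS/LAS evaluation. Using word positions as keys would be
# safer for sentences with repeated words; words are used here for readability.
def uas_las(gold, pred):
    correct_head = correct_head_and_label = 0
    for dep, (g_head, g_rel) in gold.items():
        p_head, p_rel = pred[dep]
        if p_head == g_head:                  # head right -> counts toward UAS
            correct_head += 1
            if p_rel == g_rel:                # head and label right -> counts toward LAS
                correct_head_and_label += 1
    n = len(gold)
    return correct_head / n, correct_head_and_label / n

gold = {"She": ("saw", "nsubj"), "saw": ("ROOT", "root"),
        "the": ("video", "det"), "video": ("saw", "obj"), "lecture": ("video", "compound")}
pred = {"She": ("saw", "nsubj"), "saw": ("ROOT", "root"),
        "the": ("video", "det"), "video": ("saw", "nsubj"), "lecture": ("the", "compound")}
uas, las = uas_las(gold, pred)
print(f"UAS = {uas:.0%}, LAS = {las:.0%}")    # UAS = 80%, LAS = 60% for this made-up prediction
```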
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_2023_Lecture_16_Multimodal_Deep_Learning_Douwe_Kiela.txt
So today, I'm to introduce our first invited speaker who's Douwe Kiela. Douwe has also been-- as well as being invited and I'll tell his background, he's also in the symbolic systems program, has been an adjunct professor, and has been involved with some students in that role as well. But in his invited role, he's originally from the Netherlands where he even learned some logic among other things back in the old days. But in more recent times, he's been a prominent deep-learning researcher. For a number of years, he worked at Facebook, now Meta in the FAIR unit and was involved in various ideas, including retrieval augmented generation. After that, he then spent some time at Hugging Face. He's become interested in looking at multimodal models, which is what he's going to be talking about today. And welcome, Douwe. It's great to have you. Thank you very much. [APPLAUSE] All right. That works, right? Yes. Thanks, everyone, for coming. I understand that you get points for being here, so you're not really here for me. [LAUGHTER] But thanks for coming anyway. So I'm going to talk about multimodal deep learning. It's going to have an NLP focus. Of course, that's for this course. But it's also because, otherwise, I would really be talking for many more hours than I have time for here. So I'll try to really keep it focused on the things that I think will be most useful for you to learn. And so the first thing you should understand is that this whole concept of multimodality is kind of ill-defined, actually. So if you go to the dictionary, you'll see that it means having or involving several modes or modalities or maxima. And so what mode here really means is-- so it could be mode in the very generic sense or it could be a very precise sense of the mode of a statistical distribution. And so depending on the paper you're reading, in some cases, people really mean the statistical sense, in other cases, people really mean this sort of very vague concept of a modality where it really means the type of information that you're getting. So an example of modality, in that case, is an image or speech signal or audio, in general, or even olfaction, so smell or things like that. So in this lecture, we're just going to focus mostly on text because this is an NLP course, and we're going to focus on images mostly as the other modality to keep it simple. All right, so why does it matter? Why do we care about multimodality? And so there are a couple of really good reasons in general for this. The first one is about faithfulness. So if you look at how we humans understand the world, how we make sense of what happens in the world, that is very multimodal, right? So we perceive the world, not just using vision or just audio, but we synthesize information across all of these different modalities and that's how we understand the world and each other. There's also a very practical argument for doing it. It's because the internet is multimodal, right? So if you go to, I don't know, Facebook or something like that, it rarely happens that it's just text or just an image, it's usually a combination of multiple modalities. And then the final good reason that we're just starting to hit now if you're really following where the field is going, we're kind of running out of text data for these large language models. So one interesting way to keep scaling on the data side is to make use of all of these other modalities. 
So if you can have your language model also watch all of the videos of cats in the world, it's going to understand the concept of cat much better. And that's what we want to have in these models. We want them to understand the world in the same way that humans understand it. So right now multimodality is really one of the main frontiers of this new foundation model drive that we're all in right now. There's a thing called the McGurk effect. Let's see if it loads up. So what we'll see when this loads is this guy over here. And we'll have the same audio effect being played. So the audio is exactly the same. And this man is going to say something like ba, ba, ba. And so you're hearing a B there I think if you look at my mouth because that's what I said. But if you then change the video to where he says fa, fa, fa, with exactly the same audio, you're going to hear the other version. So unfortunately, I can't really swap in the different audio here, so you have to trust me for it. We might suddenly start hearing a guy saying fa, fa, fa, and then-- [LAUGHTER] All right. So multimodal applications. So when we have multiple modalities, we can do all kinds of interesting things. And as I said, most of the use cases we have on the internet, they're all multimodal. And there are some really kind of obvious things we would be interested in if we have information from these different data sources, right, from different modalities. So obviously, we might want to do retrieval. So maybe given a bit of text we want to find the right image or maybe given some image we want to find the right text for it so we can match them up. Obviously, we can also do this in a generative setting. So then we have image captioning, which you probably heard of, we can do text-to-image generation. So that's image synthesis, so stable diffusion. Everybody in the audience here has probably seen that. Then we can do visual question answering where we have an image and text and then we need to generate some new text. We have multimodal classification where we have image and text and we need to have a label, for example, whether something is hate speech or not. And then, in general, we want to be able to have a richer understanding of information, which means that we combine images and text and then use it for downstream applications that require better understanding or better generation. So this field really is super hot right now. So there's this nice paper title. I predict that this paper is going to do really well in terms of citations just because it has such a citable title. I think a lot of people are not actually going to read it. And so, I mean, I've been in this field for quite a while now and people have been saying that for a really long time. I think Chris would agree, though. So for decades, people have been saying that multimodal is the next big thing, but now it's really true, I think. [LAUGHTER] All right. So the outline for what we're going to be talking about. So first, I'm going to tell you a little bit about early models, then we're going to do a bit of a deep dive on some of the specifics, then we're going to go over a particular type of fusion, contrastive models, or late fusion. Then we're going to go through a little bit of the history of multimodal foundation models. Then we're going to talk a little bit about evaluation, a little bit about other modalities, and then I'll make some predictions for the future, and hopefully, maybe give you some cool research ideas or things to talk or think about. 
All right. So, obviously, there's a lot of work that happened before deep learning. But I think if you want to start from the deep learning revolution and what was happening in images and text, then a good starting point is, for example, WSABIE or DeViSE, or Richard Socher, who you've probably heard of, who did some really cool early work in this that really pioneered a lot of these ideas. And the basic gist of this is that we have a vision model on the one hand, and on the other hand, we have a language model. So this really, I mean, the first lecture of this course I think was about word embeddings, right? So that's just your basic word embedding model. And now we need to figure out how to align them in the same multimodal space. So the way you do that is you get some sort of similarity metric, right, this score function, or like a kernel function if you're thinking about this from a support vector machine literature perspective, and now you need to figure out, with a max-margin or margin loss, how you want to align these two points in your embedding space. So things that are similar, you want to bring them closer together, things that are not, you want to bring them further apart. And if you do that in this multimodal embedding space, that means that you can do interesting cross-modal transfer where you can take the word embedding for something like auto or like horse, and then you can find close images in the embedding space to that thing and now you've solved the retrieval problem. So this is a really nice early application. And I think a lot of the stuff that I'm going to talk about in the early slides, you're going to see this thing come over and over again. You're going to see it get kind of reinvented with fancier models, but it's basically all the same stuff. So you can do cross-modal transfer where you have images and text, but you can also combine them together so that you get a multimodal word embedding. And so this just gives you a more accurate representation of how humans understand word meaning because when we think about the word moon or cat or something, we can go to Wikipedia and read that a cat is a small carnivorous mammal that people like to keep as pets, or we can just go and look at pictures of cats and now we understand what a cat is, right? And I would argue actually that for a lot of people, the picture of the cat is much closer to the meaning of the concept of cat. So some early work where people were trying to do this is from Bruni et al. where they did multimodal distributional semantics using this very elegant approach called bag of visual words. So who has heard of bag of visual words? Very few people. OK. So it's surprisingly simple. So I like it. It's nicely elegant. So you take a picture of the moon, in this case. I think you can see it in the back, too, right? So we use an algorithm like SIFT to find interesting key points, so sort of where the difference between a pixel and the pixels next to it is big-- those are the spots you want to be looking at. And for each of these key points, you get feature descriptors. So relatively small vectors, like 32 dimensional. It depends kind of on the implementation of this. And what you can do now with these feature descriptors is you can cluster them using k-means and then you assign every one of these points so you can count how often they occur, right? So in this picture of the moon, we have actually the count is-- oh yeah, so there are three red dots, right? So that's why the red dot one is three.
So what that gives you is an idea of the visual words, very similar to the original bag of words model that you hopefully have heard about maybe in the first lecture. So that's the visual equivalent of the textual thing. And so if you do this and you then concatenate or you apply SVD to fuse the information, what you get is a word embedding that is much more representative of human meaning, as reflected in the data sets that people used to care about at the time. So after that, there were a couple of people, me included, who tried to take these ideas and then really apply deep learning to them. So some of the very early versions of this use convolutional neural networks, and then you can transfer the features from your ConvNet and you take your word embeddings which you've seen in the first lecture, and then you can concatenate them, and now you have a multimodal word vector, or you can do something slightly fancier. So you've seen the skip-gram model. You can also try to do skip-gram predictions onto image features, right? So when you see a word like cat in some context like the cute little cat sat on the mat, then when you see cat you also want to predict cat pictures. So super easy ideas, but it turned out that this gives you much richer word representations. So that's kind of cool. But obviously, words are very limited. What we really care about is not words but sentences. So then people started really looking into sentence representations and how can we figure out how to get compositional understanding in these sentence representations and how do we align that with images. So the loss here is very similar to what we saw with words and pictures, but now we just have a sentence encoder, right? And so there's some really cool early papers from Andrej Karpathy, and Richard Socher also had some work here. So the basic idea is just that instead of having these word embeddings we now have an LSTM in these papers or some other kind of recurrent neural network, or in the case of this one, recursive neural network, and then we try to align the features together. And so these three or four papers are actually very important. This one by me is less important but it's still kind of interesting because we showed here this idea of grounded sentence representations. So if you actually just use this part here as a sentence encoder for NLP tasks, the ability to just predict pictures from it already gives you a really good sentence representation, right? So just by predicting pictures you can imagine what things look like and that gives you a really good meaning representation which you can then transfer to, I don't know, sentiment classification or something else. And then of course, once we have sentence encoders then we also have decoders. And so when the sequence-to-sequence architecture came out, which you've probably also heard about in this course, what you can do instead of having a text encoder for your source language if you're doing machine translation is you can plug in a ConvNet instead of an LSTM encoder, and now you can generate captions. So that's exactly what people did. We used to have all of these fancy diagrams in our papers then where we explained the LSTM and how that works. Probably people don't learn that anymore these days. They do? Yeah. [INAUDIBLE] Very good. They might make a comeback, I think, at some point. Transformers are going to go away. We'll see.
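The margin-based alignment loss that keeps recurring in these word- and sentence-level models can be written in a few lines. The sketch below is my own illustration in PyTorch, not code from any of the cited papers: the linear layers stand in for a real ConvNet and word-embedding or LSTM encoder, and only the image-to-text direction of the ranking loss is shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in encoders projecting into a shared 128-dim multimodal space.
image_encoder = nn.Linear(2048, 128)   # would be a ConvNet in the real systems
text_encoder = nn.Linear(300, 128)     # would be word embeddings / an LSTM

def margin_alignment_loss(image_feats, text_feats, margin=0.2):
    """Hinge loss: a matched image-text pair should score higher, by `margin`,
    than mismatched pairs formed from the rest of the batch."""
    img = F.normalize(image_encoder(image_feats), dim=-1)
    txt = F.normalize(text_encoder(text_feats), dim=-1)
    scores = img @ txt.t()                    # cosine similarities, batch x batch
    positives = scores.diag().unsqueeze(1)    # matched pairs sit on the diagonal
    cost = (margin + scores - positives).clamp(min=0)
    cost.fill_diagonal_(0)                    # don't penalize the positives themselves
    return cost.mean()

images = torch.randn(8, 2048)   # e.g. pooled ConvNet features
texts = torch.randn(8, 300)     # e.g. averaged word embeddings
print(margin_alignment_loss(images, texts))
```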
[LAUGHTER] And so one of the things that people figured out in machine translation very early on is that you can do alignment of words between your source language and your target language. And you can do the same thing actually with images. So if you want to align a word in your generated sequence with something in your picture, then you can use the same approach for that. And that approach, of course, is called attention. So you've learned a lot about attention probably in this course. And so yeah, that was one of the building blocks of these systems as well, where you can do very interesting things and really see that when it has to generate stop for the stop sign, it's really actually looking at the stop sign. So there's a really cool alignment going on there in these models. And so the final early model we should talk about a little bit is GANs. Who here has heard of GANs? OK. That's a lot more than bag of visual words. I guess that makes sense. So yeah, the basic idea of a GAN is really that you have this generator and discriminator and you want to have the generator generate images that the discriminator cannot distinguish. So it cannot distinguish fake and real images, right? And if you do that, you can actually condition that on a piece of text, and then you can generate images using some text prompts. So that's what the first versions before Stable Diffusion were doing-- things like this-- and it's all a natural progression to that model. So those were the early models. Do people have any burning questions about this or does this all make sense? All right. So let's do a bit of a deeper dive then, in particular, on features and fusion. So those are really the core building blocks for all of this multimodal stuff. But before we go there, maybe very briefly, if all of this multimodal stuff is cool and sort of useful and doesn't look that difficult, why aren't we all doing multimodal things? So why do we focus on specific modalities? And I think there are a couple of problems just to be aware of. So one is modalities can sometimes dominate, especially text is much more dominant than vision or audio in many use cases. So you can already just have a model that picks up on the text signal and basically learns to ignore the image completely, which actually happened embarrassingly for visual question answering. We'll get to that. So visual question answering-- you could do that without actually looking at the picture. The additional modalities can add a lot of noise, so it makes your machine-learning problem more difficult. You don't always have full coverage. So as I said, if you look at Facebook posts, sometimes you have text, sometimes you have pictures, sometimes you have both, but you don't have a guarantee that you always have both. So how do you deal with that? In many cases, we just really weren't ready, it was too complicated to implement stuff. And also just in general, how to design your model really to combine all the information is actually quite complicated. So in order to maybe drive the point home a little bit, so featurizing text. I guess we all know how to do that by now, especially sort of in the age of transformers and before in LSTMs, where we just have your batch by your sequence. So batch size by sequence length by embedding size, right? So it's always like a 3D tensor, and that's how you encode your textual information when you pump it through your neural net. And so with images, it's slightly trickier because you can just kind of look at the patches.
But then if you do convolutions, you're shifting over the image and then you're aggregating, right? And in many cases, you don't really want to be this uniform. You want to have something that actually looks at the things in the picture, right? So this is called region features, where you would use an object detector as a first step for processing your image, and then you would have a ConvNet backbone that encodes the features for that particular sub-image-- like these guys, like skateboard or something-- so it has its own vector representation, right? And then in terms of dense features, we now also have Vision Transformers. So we'll just very quickly go over that to make sure we're on the same page. So there are all these models, like YOLO is a really good one if you haven't heard of that yet. So we're at YOLOv7 now, I think, or 8, I don't know. So there's a new one coming out every other year or something. But the basic idea is that we get these bounding boxes for things in the images-- or actually segmentations, but the bounding boxes are what people tend to use-- and they have labels, right? So this is labeled like backpack or something. And so you can do this as a pre-processing step on your image to get a much richer representation of what is really in that image, which you can then pump into your system, as we'll see later. So then how do you encode the information that is in these little bounding boxes, or actually in the image itself in general? We just use a standard ConvNet for that. And so this probably feels super obvious now, but in 2014 when people were starting to discover this, it was really very surprising that you could just use off-the-shelf ConvNet features to really replace the entire computer vision pipeline. So people used to do all of this very fancy sophisticated stuff and people spent decades on trying to refine this and then it was all thrown away and replaced by a ConvNet that does all of that stuff for free. And so the cool thing you get there is that you can transfer very easily across different tasks. So you can have a very generic ConvNet and then use it for all kinds of very specialized things like spotting buildings in Paris, for example, or flowers, or other stuff. And then of course in the age of transformers, how far are we? We're already quite a ways in. And this is only the first transformer actually in the slide deck. So we're making good progress. So Vision Transformers are what we would use these days to encode the images, where you have these flattened patches and then you would do kind of the standard BERT architecture maybe as you would know it from this course, and then you do classification, right? So this is all a standard transformer. Everything is standard, except now your input here is not words or tokens, it's patches of an image. And then you classify that. All right. So then we have a bunch of features and now how do we combine the information, right? So let's say we have two vectors u and v. It sounds easy-- how could we combine them? It turns out that there are actually very many ways to combine them. So I don't think it's really useful to go over all the different ways here, but you can do very simple things. So obviously, inner product or similarity is what you would use if you want to do cross-modal things. So if you want to embed things in the same vector space.
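Before going further into the ways of combining two vectors, here is a tiny sketch of the patch flattening just mentioned for Vision Transformers: turning an image into a sequence of flattened patches, which is all the "tokenization" a ViT does. This is my own illustration; the patch size and dimensions are arbitrary choices.

```python
import torch

def image_to_patches(images, patch_size=16):
    """Split a batch of images into flattened, non-overlapping patches.
    images: (batch, channels, height, width) -> (batch, num_patches, patch_dim)"""
    b, c, h, w = images.shape
    # unfold cuts the height and width dimensions into patch_size chunks
    patches = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * patch_size * patch_size)
    return patches

imgs = torch.randn(4, 3, 224, 224)
patches = image_to_patches(imgs)
print(patches.shape)  # torch.Size([4, 196, 768]): 14 x 14 patches, each 3*16*16 = 768 numbers
# A ViT then linearly projects each patch and feeds the sequence to a standard transformer,
# just like token embeddings go into BERT.
```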
But you can do sort of fancier projections on top or different combinations that are linear, or you can do multiplicative things where you multiply the components element-wise or you do some sort of gating over the different features. You can do attention. You can do fancier bilinear things. You can do very fancy compact bilinear things. So there's really a wealth of literature on all the different ways you can combine two vectors. And so this is called multimodal fusion. And most of the literature on multimodality is essentially about this question-- what is the best way to do fusion? And that's it. So I think within that discussion, it's maybe useful to distinguish between different levels of fusion. So you can do it very early, where basically you make sure you have the different features and then-- in the modern sense of attention-- you just attend to everything in all the features from the beginning. You can first treat them separately and then combine them, or you can treat them as completely separate and then you only combine the final scores, right? So that's what we would call early fusion. And then my invention for calling the middle part would be sort of middle fusion, and then you have late fusion where you really just combine the scores or the logits, but you don't really have any interaction between the information from the different modalities. So you can do really fun stuff with multimodal fusion. So this is a paper I really like, FiLM, where you have this very special feature map, this sort of F here, and it gets modulated by a multiplicative factor, this gamma, and an additive sort of bias vector, this beta, and you have a different one for every layer of a ResNet that is conditioned on some encoding of the thing you're after. So in this case, are there more cubes than yellow things? So we have some vector representation for that. And we use that vector representation to modulate the ResNet blocks at every layer of the ConvNet. So you can really do very fun things where you're sort of modulating one network with the other one and really try to have them learn as much as possible from that. All right. So let's talk about late fusion then. So late fusion is what we would now call contrastive models. But the basic idea is that we have this similarity score. So we have these two-- we kind of process the modalities completely independently and then at the very end, we do some combination. And the most famous instance of that these days is CLIP. So who's heard of CLIP? OK. So CLIP from OpenAI. So it's, again, exactly the same contrastive loss that we've seen in all these early approaches-- it does kind of negative sampling, but then in batch. So you just have a batch. You have two things that are aligned. So like this is the first piece of text and the first image, they are aligned. So this is the right answer. And I just want to make sure that I rank this thing higher than all the alternatives, right, and I want to make sure I rank this thing higher than all the alternatives. So it's a very, very simple idea. Really nothing special about this architecture that was invented here, but what made this thing so cool was, first of all, it was transformers and it was transformers all the way. So your text encoder would be a transformer and your image encoder would be a ViT image encoder, so also a transformer. And it was trained on lots and lots of web data. So Alec Radford is really a genius at creating very high-quality data sets.
And he created I think 300 million image text pairs for this data set, trained a bigger model on it than people used to do, and then we got this amazing model out of it. And so moving away from the words there to the sort of text that you would see on the internet. So the caption for an image on the web is not going to say dog or cat, it's going to say a photo of a cat doing something, something. So that means that you can do zero-shot label predictions where you have a photo of the-- and then you need to figure out what the right label is for a given image using this kind of prompt. So you probably all know about prompting large language models, so you can prompt vision and language models in very much the same way and do zero-shot generalization. So if you want to read a really good paper, I would recommend that you read this paper. This is really one that's going to teach you how to write really good papers. It's thorough. It's really worth a very close read I think if you're interested in this field. And so I think when it came out, actually, on ImageNet itself, it didn't really outperform ResNet, right? So you might think, oh yeah, actually, it's not all that special. But what really made it special was that it generalized much better to these other data sets. So this ResNet thing here is pretty terrible at some of these kind of adversarial versions of ImageNet, and CLIP is super robust to that. So it's just a way better image encoder in general. So very, very quickly after CLIP, there was this paper from Google using ALIGN which was basically exactly the same idea. The field is not really that creative at all. It's the same idea, but then you just keep throwing more data and more compute at it and that often works much better. So that's what they found here, too. 1.8 billion image text pairs instead of 300 million gives you a better model. Surprise. So still very cool. And what is really cool, I think, is that there's this organization called LAION, where they've started this open-source collective to create really high-quality data sets. And so the LAION, the initial data set, was-- how many examples in the initial LAION? 400 million. 400 million. He knows. I know that he knows. So now there's a much bigger version of LAION that's even multilingual and it has 5 billion examples. So Stable Diffusion was trained on sort of the English subset of this thing. And that's one of the reasons that it's so awesome is because it's just seen a ton of data and that really makes your system a lot better. So if you're looking for the ultimate data set to play around with your own ideas, if you have enough compute, obviously, then you should really look at this data set. All right. Any questions about up until this point? No? All right. So then we'll move on from late fusion to kind of middle fusion, early fusion. And this really is the core of what I think a lot of people in the field right now or if you're interested in getting in this field or if you're going to go into industry and you're going to be using this stuff, this is what you should really understand. And again, the ideas sort of stack onto each other. So I've kind of sequenced the slides to give you an idea of how the scientists came up with the next step. And you can really see the architecture just get slightly more and more advanced but basically, a lot of it is just more data and more compute again. So who knows how BERT works? [LAUGHTER] Everybody should raise their hands in this. So yeah. 
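Before moving on to the BERT-style fusion models, here is a minimal sketch of the in-batch contrastive objective behind this kind of late fusion. It is my own illustration rather than OpenAI's CLIP code: the linear layers stand in for the actual text transformer and ViT image encoder, and the key idea is the symmetric cross-entropy over the batch-by-batch similarity matrix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoders standing in for CLIP's text transformer and ViT image encoder.
text_encoder = nn.Linear(512, 256)
image_encoder = nn.Linear(1024, 256)
temperature = 0.07   # illustrative value; CLIP actually learns this scale

def clip_style_loss(image_feats, text_feats):
    """In-batch contrastive loss: the i-th image should match the i-th caption
    and be ranked above every other caption in the batch, and vice versa."""
    img = F.normalize(image_encoder(image_feats), dim=-1)
    txt = F.normalize(text_encoder(text_feats), dim=-1)
    logits = img @ txt.t() / temperature             # batch x batch similarity matrix
    targets = torch.arange(logits.size(0))           # matched pairs sit on the diagonal
    loss_i2t = F.cross_entropy(logits, targets)      # image -> which caption?
    loss_t2i = F.cross_entropy(logits.t(), targets)  # caption -> which image?
    return (loss_i2t + loss_t2i) / 2

images = torch.randn(32, 1024)
captions = torch.randn(32, 512)
print(clip_style_loss(images, captions))
```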
So BERT is so canonical, I think everybody gets how BERT works. So I don't think we need a real refresher, but I think you can think-- and so the reason I have this slide is because I want you to think about if you have a BERT model and you have a bunch of images, how are you going to turn that BERT model into something multimodal? So there are a bunch of obvious things you could do given the kind of features I told you about and the fusion process. So how are you going to do that? Does anybody want to say something? If you're doing classification, you can take the models from there and then just concatenate it to whatever encoder, maybe an ANN or whatever you're training on the data concatenating and training. Yeah, exactly. So you can take the ConvNet features and the classifier token from BERT, concatenate them, and then classify for cat or something like that or whatever the thing is you're interested in. So that's one thing. You could also take the ConvNet features and give them to the BERT model in lots of different ways. We can use the region features. So I think a lot of people when BERT came out who were working in vision and language processing were thinking exactly about OK, so do we do middle fusion, late fusion? Do we do early fusion? How do we do the fusion? And so there were a lot of papers all coming out basically at around the same time where people were doing versions of this. So BERT was really the innovation and then everybody just plugged it into their own thing because of Hugging Face transformers and things like that. So the first thing is visual BERT. This was one of the very early ones where you have this image and people would do object detection on this. So you get like a hat and a racket and a shirt and things like that. So you can just really take these features and then plug them into your transformer model and then you try to recover the features. And so this really is probably the simplest way to do it. And so this is what we call a single-stream architecture where you have all of these concatenating the original input features and then putting them through the same transformer. What you can also do and that's something that this model called ViLBERT did is where you have two different streams. So you essentially have these two parallel transformers, but at every layer, you give them cross-attention, or co-attention as they call it. But it's basically like-- so you just make sure you have an attention map that spans both and then you just do your full normal transformer layer again. So this, you can train just like your regular BERT. So you have your masked language model here and here you do some equivalent of that. And then you also have your next sentence prediction, which you probably remember from your BERT lecture, but instead, here, we're saying, OK, is this image aligned with this piece of text or not? There's also LXMERT. I could go on forever. There are like 100 papers that came out that did this all at the same time. So LXMERT had a different cross-modal output encoder, a bunch of different ways of encoding the positional information. So you could say, OK, I just have a bunch of bounding boxes that are featurized but I don't care about where they are in the image. So it's just a bag of bounding boxes. Or you could say, I found it here like this is the particular top left and bottom right coordinate and that's what you featurize into your network. You can also do something even dumber. 
And I can say that because this is my paper-- [LAUGHTER] --where you just take the image itself, you put it through a ResNet, and then you do a little bit of pooling on the final feature maps and you just give those feature maps to BERT. And so you then need to distinguish between your text segment embeddings, right, and your vision segment embeddings. So this actually works surprisingly well. You don't have to do any additional training. You can just take BERT out of the box. Initially, you freeze it. You learn to project into BERT token space, then you unfreeze your ResNet, and then finally you unfreeze your BERT, and now you have a very good multimodal classifier on the problem you care about. So a lot of these other papers, they're doing what they call multimodal pretraining where first, you have a BERT model and a ResNet. So they're unimodal pretrained. And then you couple them together and then you have a multimodal sort of intermediary pretraining step before you fine-tune it on the problem you care about. And what we showed here is that you don't really need that actually in many cases. So that's a very strong baseline. You can also go to the pixel level completely. So that's what they did in this other paper called PixelBERT where they-- it's basically exactly MMBT. So the previous supervised one, but here they do the multimodal pretraining step and show that for VQA it helps a little bit. So there are many of these BERTs doing sort of visual things. People really tried everything. Here's another one called UNITER, where they added a bunch of different losses. We can really talk about this for a very long time. We're not going to do that. I'm just going to talk you through some of the more interesting ones. So this one I think is quite interesting, ViLT, because here this is really the first instance where we are completely gone from ConvNet features. So we don't do any pre-processing on the image, no region features, no backbone, then it featurizes the parts of the image we care about. We just have these patches of the image. So really in a grid. We flatten those patches. We just pumped them into the transformer straight away. So this really is sort of BERT and ViT together in one model and this worked really very well. So that's been the trend. So here's a nice very long list of all of these different models and what they do. And so really the distinctions are just in what is the text encoder that you use. So do you use BERT or something fancier or better, RoBERTa? What is your vision encoder? So in many cases, you have these region features. So you would do an R-CNN style thing, or you could just do a ResNet or a ViT. You have different kinds of fusion. So either single or dual stream, as we talked about. So visual BERT or ViLBERT. Different pretraining tasks. So masked language modeling, image text matching. There's a bunch of funkier ones you can do. And then finally, you can do multimodal pretraining on all of these different data sets that have aligned data. So you are probably wondering like, OK, so what is really the interesting difference between a lot of these? And so I have another recommended paper that if you're interested in this space you should really take a look at. This is also a really well-done paper, where they unmasked multimodal pretraining. So basically, they say if you take all of these little model inventions and you train these different models on exactly the same data in exactly the same way, it turns out that they're all basically the same. 
[LAUGHTER] So that's a lot of wasted effort on the part of the field because everybody is saying, oh, my model is better, but it's actually just because you trained it on different data and there's no real model innovation going on in a lot of these things. So I don't mean to sound discouraging or anything like that, but I think that's why this paper is really nice and really important is because it just shows us what really matters. So this is also work that I did myself called FLAVA with my team, where we wanted to take these ideas really to the limit. So a lot of the things that you've seen now, so the VisualBERTs and the ViLBERTs and things like that, they're all about multimodal questions. So how can we do visual question answering, something like that, where we just have these two modalities. We only care about problems that always involve these two modalities. And where we want to go, and this is the basic premise I think of foundation models in general, is that we have one model to rule them all. So this one model can consume data from all of these different modalities and it can synthesize across all of these different modalities and then do useful things with that information. So with FLAVA, that's exactly what we try to build. So we wanted to have one foundation model that is good at vision and language, and computer vision and natural language processing, and is jointly pretrained on all of these different data sources. So it's also trained on just CCNews, Common Crawl, and BookCorpus. So it's very good at things you would expect BERT to be good at. It's trained on ImageNet for image data, so it's good at the things that you would expect a basic image model to be good at. And then you have this PMD data set that we created out of publicly available image text pairs that we also trained it on. So this PMD data set is really just-- if you take all the data sets that were ever created that have image text pairs that are publicly available. So unfortunately, the CLIP data and the Google ALIGN data and all of these data sets, they haven't been open sourced. So this is before LAION. So now there's a good alternative to this. But so this PMD data set, if you combine all of these image text pairs, you get 70 million of them. So that's still a pretty decent size. And then you can take all of this data basically to solve all of these problems that we know and we care about in these different fields. So you can do multimodal reasoning, you can do language understanding, you can do visual recognition all with exactly the same model. And that's a very powerful idea. I think if you work at a company like Facebook, you don't want to have different models for all kinds of different things, you want to have one model that you can really use for everything-- that's going to really make your life a lot easier. So the exact architecture here is that on the one hand, we have this image encoder where we take the image, we encode it as patches, and we just do what we call masked image modeling, but it's basically masked language modeling just on the image tokens. And then on the other side, we have the masked language modeling on the language. So your regular sort of BERT thing. And then we have a multimodal part where all of this information gets combined. So we have a masked multimodal modeling loss term where you can also do image text matching. So this is like your BERT next sentence prediction thing. And then we also have a global contrastive loss, which is exactly like CLIP.
So if you do all of this stuff, it's just all transformers all the way down. It's sort of a very elegant way I think to combine a lot of this information. And when you do that, you get something that can really do a lot of things very well. So we're not going to talk about that table, it's just way too many numbers. So just trust me we were pretty thorough in generating the table here. [LAUGHTER] So over 35 different tasks if you compare FLAVA to all kinds of different ablations in terms of CLIP models, then this is just a much better way to get to this information. So I think this is a nice example of where we're probably going to go with the field in the near future. So the other trend that we see very obviously in the field right now is that everybody cares about generative models, right? So language models and image generative models. There's just a trend where we want to be generative. We want to move away from this contrastive discriminative stuff to the more interesting, more richer representations maybe that you get out of generating sequences or images. So this SimVLM paper was one of the first ones where they really had this separate decoder that was trying to generate or complete captions, which they showed gives you a lot richer representations. I think this is actually the current state of the art now. It's called CoCa. So a lot of these models. They all again look very similar, but in this case, now we're starting to really see these text decoders. So initially, with CLIP, I think that's also what they were trying to go for, like OpenAI being a company that really likes generative models, but they couldn't really get it to work. And I think it took us a while as a field to really figure out how to do this the right way. And so right now we're really in the age of language models, right? So one of the interesting things you can do with language models is just keep them frozen and then learn how to project into the language models. So the MMBT architecture I talked about where we had this BERT model, we kept it frozen, and we learn to project into the BERT token space. You can do exactly the same thing but then with a much fancier model or something like T5 even where you just have an encoder-decoder or some kind of generative part of this. You keep that thing frozen, and then you learn to project into the token space of that frozen language model, and then you can do lots of fun stuff it turns out. So what they show in this paper is that you then get few-shot learners. So all of the things you see with GPT-3 where you can just give it some in-context examples and it's going to figure out binding on the fly. So it says like this is a dax and this is a blicket. So what is this? And then it gives you the answer that it's a dax. So it really learns in context how you decide the feature mappings, which is really solving the grounding problem that a lot of this multimodal stuff started with. So I think that's very cool. Then probably one of the coolest papers right now or models right now that you might have heard of if you follow the field is Flamingo, out of DeepMind, where they take a Chinchilla language model. So this is really an optimal language model. And now you have this vision encoder that encodes multiple different images that you can then do reasoning over and then autocomplete. So what this gets you is just a much more powerful model because you can do your generative over lots of different images. So it's really like step-wise. You can see it. 
We started off with very simple transformers and now we're actually at something that is starting to get pretty complicated because we have these building blocks like a perceiver resampler where we have a bunch of different images that we featurized and now we need to compress the information because sometimes we have three images, sometimes we have five images so we want to make sure that we can compress it so that it's always ready for consumption by the next layer of the language model. So this paper again is a really good paper to read because they actually-- so this is not me. This is not my code. This comes from the actual paper. So they just have the diagram together with the code so that you can really understand what it's doing, which I think is really great. And so once you have your perceiver resampling step, what you then do is you do gated cross-attention. This is how you implement it. And so this gated cross-attention you do that before your frozen language model layer. So you really just have a frozen Chinchilla language model and you learn to modulate the information that goes into that language model. You propagate the gradients all the way back, you just don't update the language model. So you're really trying to figure out like, how am I going to design my signal so that my language model can do the most with it, right? How am I going to combine the information? So you'll notice that now we do it before the layer. And a lot of this other stuff you would do the attention after the layer, but here you do it before. So Karpathy, I think, more than 10 years ago had this image. It's Barack Obama setting his foot here on the scale to make somebody think they're a lot heavier than they really are. So this is obviously funny to us, but not to an AI system, I think, unless it really understands the scene. And so that's why Karpathy at the time said this would be a really good visual Turing test. If a system can figure this out, then it's actually really smart. And so obviously, it's been a bit of a challenge for everybody working in the field than to get something that actually works on this. And so Flamingo, as it turns out, kind of gets the joke. But yeah, so it's a bit unclear if it really gets the joke because if you read this conversation, it's sort of getting steered in the right direction, right? But at least we're making progress, let's put it that way. And then so in Flamingo, you still have a lot of moving parts, but you can really take this almost to the full extreme where you try to freeze almost everything. And you just want to learn this kind of mapping between your image encoder and your language model, or your image encoder and your encoder-decoder architecture, and all you really do is just the projection between the two, right? So there's this nice model called BLIP2, where they experiment with OPT for the language model and FlanT5 for the encoder-decoder architecture. And this just gives you amazing results. It gives you really complex captions and things like that without any real direct supervision on the captions itself, which is pretty impressive, I think. So that just shows you the power of language models in general. So here are some examples. So it can really do different things from captioning to reasoning to visual question answering to location detection. So you can have a long conversation with this system. This really is the future of where we're going, where we're going to have a ChatGPT but it's also going to be able to see the world in a way. 
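Going back to that gated cross-attention block for a moment, here is a rough, simplified sketch in the spirit of the paper's pseudocode. It is not the DeepMind implementation; the dimensions, the feed-forward shape, and the placement of normalization are all simplified assumptions.

    import torch
    import torch.nn as nn

    class GatedCrossAttention(nn.Module):
        def __init__(self, dim=512, heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            # tanh gates start at zero, so at initialization the block is a no-op
            # and the frozen language model behaves exactly as it did before.
            self.attn_gate = nn.Parameter(torch.zeros(1))
            self.ff_gate = nn.Parameter(torch.zeros(1))

        def forward(self, text, visual):
            # text: (batch, text_len, dim) language hidden states
            # visual: (batch, n_visual_tokens, dim) resampled image features
            attended, _ = self.attn(query=text, key=visual, value=visual)
            text = text + torch.tanh(self.attn_gate) * attended
            text = text + torch.tanh(self.ff_gate) * self.ff(text)
            return text   # this then feeds into the next frozen LM layer

    block = GatedCrossAttention()
    out = block(torch.randn(2, 12, 512), torch.randn(2, 64, 512))   # (2, 12, 512)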
And so I think an interesting thing. So you've probably heard of chain of thought prompting and things like that where you ask the language model like let's think step by step. And you can tell a vision and language model, generate a rationale for why something might be the case. So you generate a potential explanation for what your answer might be. And then after that, you ask it to answer the question. And it turns out that if you do that multimodal chain of thought prompting, then the system gets much better. And so this is the new state of the art on ScienceQA or benchmarks like that just because it learns to unpack the information, right? And so I think we're really as a field just starting to figure out what the potential is of this. And I think this paper is where they also show that multimodal chain of thought prompting really gets you pretty amazing results. And they show very nice results on Raven matrices and very complicated IQ tests sort of things that humans are supposed to be really good at but you have to be a pretty smart human to really be good at this and the system just nails it. So we're making super fast progress. We started off from a very simple BERT model that was able to look at some pictures and now we're getting to these very sophisticated foundation models. So that was my short history of multimodal foundation models. So how much time do I have left? So after 5:50. 25 minutes. All right. OK. Plenty of time. We have some questions. Yeah, please questions. Do we really do much pre-processing of images to these models anymore? So I noticed a lot of the images that just looked like they were boxes, like square images passed through, kind of no sense of shape in them. Yeah. So I think the history of computer vision has been very similar to the history of natural language processing where we thought we needed all of this structure and all of these different things. And it turns out you can just throw it all away and just have a big transformer over the patches. Sorry, yes. [LAUGHTER] It's CS231 in 1 minute. Save you time. [LAUGHTER] You mentioned a couple of times like models being frozen. What does that mean? Yeah. Sorry, I should have explained that better, maybe. So it just means that we are not updating the weights. So if we go to this area I think is a nice example. So we have frozen self-attention. So that just means that when we do a forward pass, we go all the way to whatever we want to predict. We get some gradients, we take them all the way down, but we only update the non-frozen layers, right? So here the gradients actually do get updated but these just never change. And so the reason you want to do that is because, otherwise, you're going to drift way too far. So then you're going to destroy all of the cool stuff your language model has learned because you're just going to focus on this small data set that you're training it on. So you want to preserve the abilities of the language model, but you want it to become good at the thing you care about. Other questions. In terms of multimodal fusion, is there a benefit to doing that earlier middle fusion as opposed to only doing the late fusion? Yeah. So we're going to talk about evaluation next. So it really depends on the task that you care about. And so I would say the earlier is always the better if you can afford it. And so CLIP is very efficient to train. It's very late fusion, right, at the very end. So there's no interaction between the different modalities. 
So that's really good if you want to be very efficient and if you want to be-- for training, it's much nicer. But if you want to have a richer understanding of the multimodal signal, then you want to do earlier fusion. So yeah, there's always a trade-off. So it seems like images are just a lot more data than text. So how much more difficult are these to train, and how much bigger does the image processing have to be compared to the language model? Yeah. So images are more complex in a way, but they're also higher bandwidth representations. So there's a lot of just pixels that our brains just abstract away. It's really about the scene that you're seeing and you're not really thinking too much about the pixels themselves. So like Yann LeCun likes to say that language is just a low bandwidth, a proxy for a language of thought, which is much richer and much higher bandwidth and he thinks probably visual, I'm not so sure. But so, yeah. I don't think that there's necessarily a difference between the scaling laws that you see in these systems, or at least we still have to figure that out. We'll talk about that towards the end as well. Can these models also have certain social and cultural biases just like the natural language inference? Oh yeah, they have terrible biases. Yeah. [LAUGHTER] So yeah. So some people are actually working on this who are in this very room. So these models can be very racist also in what they generate or the kind of predictions they make. So if you have an Asian basketball player standing like this with a basketball very obviously there, then the model will think that he's playing ping pong because he's Asian. [LAUGHTER] I'm not joking. [LAUGHTER] So these models-- yeah, just like all neural networks, right, this is really a big problem. And one of the most interesting problems that you should be working on if you're a student and you want to make a difference is how do we get these systems to be much better at these sorts of things. So in one of the examples you showed that the model interpret from the content of an image. So if we want to understand the content of a video, so what are challenges you might see along this path, and what improvements we can make towards this goal? Yeah. So you're asking about the attention mask sort of, right? So you can use the same idea for videos and you just look at the video. And so these systems are so good now, the object detectors are so good, you can really track objects kind of real-time as they go through your video, and so you can try to check how that aligns with your attention mask in your model. So videos I think are interesting, but they're also not really interesting because you can very often just subsample images and solve the images rather than having to deal with the complex video. But yeah. All right. Maybe one more question and then we'll go do some evaluation. So these multimodal models when you only provide-- let's say you only provide a single source of media, so say only text or vision, how does it perform in that case? Because obviously, it's more geared for multimodal cases. Yeah. So that's one of the giant shortcomings of a lot of these models is that they're really just built for multimodal stuff, and so what if I don't have an image, right? And so that's why we did FLAVA because we want to have one model that can do all of that stuff. And that's why in MMBT, so the supervised multimodal by transformer, we actually have an analysis of how robust is this model to missing images or missing text. 
So I think a lot of folks working on these early visual BERT models that were myopically focused on VQA, which is actually a great segue to what I want to talk about next. So it really depends on the task that you care about as I said. And so I think if I'm going to tell you about multimodality, I also have to tell you how you're going to check that the multimodal system is actually good at multimodal things. And so that's the topic of evaluation, which actually is a super important topic. And a lot of people they want to be cool and build big models, but I think it should be way cooler to do a proper evaluation of these models, especially if you're in academia because you only have limited GPUs anyway. So what can you do? [LAUGHTER] Sorry. I don't want to rub it in, but-- [LAUGHTER] So how do you check? Well, there's this amazing project. So ImageNet really changed the history of deep learning, I think. And this other data set CoCo, I think, also really changed, especially vision and language, but also I think vision, in general, where they have just a bunch of main sort of multimodal tasks. So these images are very richly annotated with all kinds of different things. So like the segmentation of the objects, the bounding boxes, the labels of the bounding boxes they come at different pixel granularities. It's a huge data set. It's very fine-grained annotated in terms of the categories that it has, and then you have five captions for each of these images. And so this really was the first data set that unlocked a lot of sort of vision and language processing at scale because you had your picture and you had your caption and now you need to figure out, OK, how do I give the right caption for this image? So that's image captioning. Or can I retrieve given some piece of text the right image or the image for the piece of text? So there's a bunch of very impactful data sets that do this stuff. We already talked about LAION, with CoCo really is the main one still I think that a lot of people use as the canonical instance of this data set category. And then the other thing that people really care about in vision and language processing is visual question answering. And so there really are a bunch of academic groups who are or have been so focused on this task that they didn't really care about anything else. And that's why you see a lot of models that are really optimized just for multimodal and nothing else. And you can see that reflected in the citation counts as of last night at 3:00 AM. So VQA just has way more citations than image captioning data sets even, right? And so what you do here is you just have an image and then people ask very simple questions. So annotators, they ask these simple questions, they give the answers, and now we want to be able to answer these questions with machines. And as I alluded to earlier, one of the embarrassing backstories of this data set was that the initial version of the data set was actually found to have images not really matter at all. So you could just look at the question and it could have something like, how many slices of pizza are there? Well, not in that particular case, but in almost all of the data set the right answer for how much or how many question was 2. So if you just predicted 2 to every how much or how many questions, you got 70% accuracy on the counting category. So careful data set or evaluation benchmark design is also really a skill and you really need to think about what you're doing. 
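Just to illustrate that language-prior failure mode, here is a tiny made-up example of a "blind" baseline that never looks at the image at all; the questions, answers, and the specific heuristic are invented for illustration, not taken from the real data set.

    # Toy illustration: a baseline that ignores the image entirely.
    questions = ["how many slices of pizza are there?", "what color is the bus?",
                 "how many dogs are in the photo?"]
    answers = ["2", "yellow", "2"]

    def blind_baseline(question):
        return "2" if question.startswith("how many") else "yellow"

    correct = sum(blind_baseline(q) == a for q, a in zip(questions, answers))
    print(correct / len(questions))   # embarrassingly high without ever seeing an image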
You can't just set some data aside and evaluate it on, you have to really think about what you're doing. And so there's VQA by Chris actually, which is also just I think a better-designed version of this data set maybe. So you might want to use that these days. There are also kind of very targeted data sets that really try to measure one particular thing. And I think one of the things we really want to get at with these models is what we would call compositionality. So we want to be able to really take the parts and reason about the whole and understand the relationships between the different concepts. So CLEVR was a very clever data set that was designed really to measure the compositionality, both on the language side and on the vision side. So you have to understand the relationships between all of these different objects in the images. So that's been a pretty impactful data set I think for really forcing people to think about compositionality. But a lot of these data sets really had big problems. So one of the problems is they were too easy. So VQA is plateauing out. We can talk about that a little bit, too. It wasn't really realistic, so you could solve VQA and that's probably going to make some people's lives better. You're all trying to process the means. I can see everybody. [LAUGHTER] OK. Let's get to the memes first then. So obviously, these memes are not actually in the data set. So I could put some really hateful memes about sort of Hitler or something which are in the data set but that would be less fun. So these are mean meme examples to demonstrate how the data set was constructed. And so one of the problems we had as I said like VQA, the V didn't really matter. What we want to have is the data set. If we care about multimodality specifically, it's like, how do we get a data set that you can only get right if you are good at multimodal reasoning? And otherwise, you're just going to screw it up. And so this is what we came up with. If you have a meme like this one, love the way you smell today, I mean, that's not very nice if you send this to your friend. [LAUGHTER] So it turns out that if you just swap out the background, now it's a very nice thing to say. And this one is, I don't know, maybe a bit weird if you like this, but-- [LAUGHTER] --there's nothing wrong with it, right? And so it's the same for this one here, like, look, how many people love you with the tumbleweed that's really sad. If you change just one word suddenly it's like a really nice thing to say. [LAUGHTER] So if you want to solve this, if you want to classify this correctly for the meanness, then you have to really understand multimodal reasoning. You have to understand the relationship between the image and the text in order to get to the right label. And so it was really constructed by design to do that. So how we did it exactly is we used some really highly trained annotators. And then one of the big problems with a lot of these data sets is that nobody really knows who owns the meme, for example. So somebody makes this meme now they technically own the copyright. And so when I made this data set, I was working at Facebook and they were very afraid of copyright things. So what we actually had to do is we had to pay people to make new memes. [LAUGHTER] So not from scratch. So we could show them the actual examples and then they had to try to find images that were corresponding to the original source image and try to recreate the meme but now with an image that we could buy from Getty. 
And so we gave a lot of money to Getty so that we could then release the data set to the public so that people could do actually research on this and understand for their multimodal models whether they're good or not. And so we really tried to make it so that we had these benign confounders. Sorry, I start the word with co-founders. So the confounder here is obviously that you have your original meme and then you have your confounder where you swap out one of the modalities and here you have the other one, right? So we had our annotators do that as well. And so this led to a really nice data set, I think, because it showed some of the intuitions that I think a lot of people in the field had, which is that multimodal pretraining doesn't really work. Is that an alarm? [LAUGHTER] So multimodal pretraining doesn't really work. And so all of this stuff that people have been doing with all their fancy visual BERT models actually turned out maybe to not really be that useful anyway. So maybe it got you one point extra from visual BERT to a different visual BERT, like less than a point just by doing that multimodal pretraining. So that means we still have to figure this stuff out. This data set is far from solved and we still have a long way to go despite all of these fancy models and a new paper coming out every week that does something new like we're not there yet. And I think that's encouraging, especially for you. You can go out and solve it. So what we did with this data set is we organized a competition. We had 100K in prize money to try to see what people could come up with. And so there was a lot of nice work coming out of that and we really managed to crank the numbers up by quite a lot. But the solutions were slightly disappointing. So I don't know if you've ever used Kaggle, but if you want to really win on Kaggle you just have to ensemble the hell out of all of the different models that are the current state of the art and then you're very likely to win, right? And so that's what happened here. There wasn't really the fundamental breakthrough we had maybe been hoping for. So that still needs to be built, I think. So this other data set I just want to briefly talk about. So the theme of this section is like if you make a data set, think about it very carefully, because you can really be very creative with this and really, really measure the things you're trying to get at. So this data set Winoground, we were trying to figure out, OK, how good is CLIP actually? So it looks really amazing and it's way better than things that were previously there, but does it understand compositional relationships in the same way that humans would understand it, or is it just sort of fitting onto the data distribution and it can be very good at the head of the distribution but is terrible at the tail? And you can probably already guess where this is going. So just to give you an illustration of what is in this data set, you would have some plants surrounding a light bulb or you would have a light bulb surrounding some plants. So notice that the words here are exactly the same words but in a different order. So the visual depiction of these words is very, very different. So if your contrastive model is actually good at understanding the visual semantic or the visual linguistic compositionality of these examples, then it can get it right. But again, if it's actually just overfitting on the data distribution that is seen and is biased toward what it sees often, then it doesn't really get it. 
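For reference, the Winoground scoring works roughly like this (my paraphrase of the metric, with made-up similarity numbers): the model only gets the group point if it prefers the right caption for each image and the right image for each caption.

    # s[i][j]: model similarity between image i and caption j; the matched pairs
    # are (image 0, caption 0) and (image 1, caption 1), where the two captions
    # use the same words in a different order.
    def winoground_group_correct(s):
        text_ok = s[0][0] > s[0][1] and s[1][1] > s[1][0]    # right caption for each image
        image_ok = s[0][0] > s[1][0] and s[1][1] > s[0][1]   # right image for each caption
        return text_ok and image_ok

    print(winoground_group_correct([[0.31, 0.29], [0.27, 0.30]]))  # True for these numbers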
And so one paper that we used as a source of inspiration for this work is this paper here, "Order Word Matters Pre-Training for Little." So we actually found that the order of words doesn't even matter that much for general pretraining very often, which is also kind of a scary thing, right? So this is deep learning for NLP. We think that language is really important, but these models can reason about language even if you shuffle all the words. And so that's probably not what we want to have. And so that doesn't tell you something about how great we are as researchers, it tells you something about how terrible our evaluation benchmarks are. And that's what we need to fix. So what we did with this data set, here are some other nice examples. There's a mug in some grass or there's some grass in a mug. These are very different pictures. And so for us, these are trivial. So what's the difference between a truck fire and a fire truck? It's pretty important, I think, to get that distinction right. So guess what? State-of-the-art models often perform below random chance. [LAUGHTER] So as I said, we still have a lot of work to do, which is good. And so when this paper came out, I think the reaction was really nice. So when DALL-E2 came out-- so you've probably heard of DALL-E2, right? So it's like Stable Diffusion but then before Stable Diffusion. And so this was really the first model that really showed just how impressive these generative models can be when they're creating images. So there's a mug in some grass. You do have to kind of cheat a little bit because you have to add "digital art" here. If you don't add that then it breaks down completely. [LAUGHTER] So it's sort of prompt hacking, I think, or sort of tuning on the test set, but OK. So this is pretty good. So it's definitely better than I think a lot of people would have expected even a couple of years ago. But it's not perfect because people on the internet like to take more pictures of spoons than forks. So if you say there are fewer spoons than forks or there are fewer forks than spoons, it just really likes spoons more. [LAUGHTER] And so maybe it's like The Matrix or something, I don't know. Spoons are just nicer. So again, what you can see here is that these models really are just reflections of the data that they're trained on. So models are getting better, but if you've looked at Stable Diffusion, it still can't count fingers and things like that. So again, there's still a lot of cool work to be done. Any questions on evaluation? No? OK. So let's talk about other modalities then because-- so we've really just been focused on images, and images are great. There are lots of images on the internet. And so that makes it an obvious thing to focus on. It's also the case, I think, that if you look at our brain, vision is a very dominant modality, right? So how we understand the world is very vision-driven. But it doesn't have to be the case. So there are all these other interesting problems that involve different modalities. And so the most obvious one is just speech or audio. So after seeing comes hearing. And really we could do another lecture just like this just on speech and audio and there's lots of interesting stuff to talk about. Obviously, we don't have time, but I'll give you another nice example of how amazing Alec Radford is at creating data sets. So there's this Whisper model that came out of OpenAI not too long ago, which was trained on 680,000 hours of multilingual multitask speech data. So speech with transcriptions.
And they trained this very fancy thing on there, which actually is not very fancy at all, it's just the log mel spectrogram. So how you represent the audio signal. And then you feed that into a big transformer. So this is your encoder self-attention here, and then you have your decoder where you have your cross-attention, and then you just generate the sequence. So this is encoder-decoder basic transformer model but your input is one dimensional convolutions over the log mel spectrogram. And so there's lots of papers that do very similar things. There's models like Wav2Vec that try to turn the wave signal into vectors or you can discretize it in lots of different ways. So there's a wealth of literature. Then I think one of the funny observations actually is that you can just reduce audio to vision anyway, right? So that's what you could argue this log mel spectrogram does. So not to toot my own horn, but in 2017, I did this paper where we showed that you can just take a real audio sample, turn it into a spectrogram, really just a spectrogram. So what does the spectrum of the audio file look like, feed that to a regular ConvNet like an AlexNet even, and then that gives you amazing auditory features. So now you can use this to distinguish between violins or guitars and things like that. So maybe you can just reduce all of this to vision. So one question maybe you could ask is can we also reduce language to vision, or vision to language? So that's what people are thinking about. So we talked about video. There was a question about video. So a lot of these ideas also extend pretty directly to video but now you just have more data. So like Flamingo already had a bunch of different images in it. You can do Flamingo over videos. Probably, a lot of the images are pretty useless for what you're trying to do with this video model. So they're too similar. It doesn't really add all that much information. So you want to subsample the frames so that you get the most useful information out of your video. And so there's a bunch of approaches that take the keyframes and then you just do a standard joint vision and language transformer encoder thing on top of that. So this is becoming hopefully by now a very familiar recipe. And so there's this-- so MERLOT is a nice architecture that does this and then they came up with MERLOT Reserve, kind of a silly name, where they also added audio to this model. So this is now a trimodal model. And so we're going towards this foundation model that can consume all of these different modalities all in one go and that's really like a clear trend in the field. Another very interesting direction I think where-- in the field, we were very excited about this for a while, but I think it's gone now because it's too difficult to create lots of high-quality data in this setting. But what you can do is you can have simulated environments. So this is a paper from DeepMind from 2017 where they had this agent walk around in the maze and then it could have natural language instructions. It could also generalize to dax and blicks and different sort of groundings and assignments that you could do in that environment. So this is a super interesting direction I think in the long term because this is how humans learn language, right? We walk around in the world. We interact with our environments. We have all of these different perceptual observations. We synthesize them in our brain. We manipulate objects. 
We change our own viewpoint and that's how we learn everything we know about the world. And so our language is very intricately connected to that world and how we observe it. So I think that might make a comeback at some point in the future. You can also do other stuff. So especially with this kind of conditioning on text that we're seeing a lot of. So DALL-E2 and Stable Diffusion and all of these different things, and the original GAN we talked about at the beginning. You can do the same thing but now you're generating 3D point clouds. So this is a 3D corgi, using a corgi. And so this prompt can probably become much more complex over time and you can do AutoCAD design and just say give me a house and it's just going to design the whole house for you. So you can just tweak the prompt and things like that. That's all coming or even already here in many cases. So the final modality I just briefly wanted to talk about is olfactory embeddings. [LAUGHTER] So olfaction means smell, if you didn't know. So it turns out-- so my PhD thesis was about grounding semantics in different perceptual modalities. So a lot of my work started in vision, and then it's like, OK, now audio is the obvious next one, right? So you can learn the meaning of violin and then maybe you can learn what a violin looks like and what it is and what it sounds like and that's going to give you a richer representation. But for a lot of these words, what's actually very primitive to their meaning is what they smell like because in our brains that's really one of the core areas and one of the oldest areas in your brain. So what you can try to do if you want to complete all of your perceptual modalities is you can try to build olfactory embeddings. So that was kind of a joke paper I did, but the funny thing is it actually worked. [LAUGHTER] So there's a catalog, the Sigma-Aldrich fine flavors and fragrances catalog, where you can look up words like melon and pineapple, and then it's going to give you all of the chemical compounds that produce this smell or taste. And so if you do that, then you can count the occurrences and then you can do SVD or something like that on it to get it to be a bit more of a real embedding model. So now you get smell embeddings, smell vectors. And then you can compute similarity judgments between these smells. So it turns out apple smells like pear, and the chocolate and cocoa and sweet and coffee are sort of related. So you get these clusters of different smells just based off of their chemical compounds. So this bag of chemical compounds model gives you a very rich representation. And so if you look at all of the words that are concrete enough to have smell, so if you have a word democracy in there, that doesn't really smell like anything, right? [LAUGHTER] So you ignore democracy, you just focus on the things that smell or that could smell, I guess. So the really interesting thing to me is that this is much more correlated with human similarity judgments than the linguistic vectors we had at the time. So for a word like apple, you can just get a word vector you've learned in your first lecture. And so you can do skip-gram and things like that. But that thing is not going to be as correlated with human similarity judgments as this bag of chemical compounds model. So that's pretty interesting. So even something like smell where maybe we think this doesn't really matter. If you really want to understand how humans understand language, then maybe you want to include this in your foundation model, too. 
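As a toy illustration of that bag-of-chemical-compounds idea, with made-up counts purely to show the mechanics: count how often each compound is listed for each word, smooth the counts with an SVD, and compare the resulting vectors.

    import numpy as np

    words = ["apple", "pear", "chocolate", "coffee"]
    # Rows are words, columns are chemical compounds; entries are (invented) counts of
    # how often a compound appears under that word in the fragrance catalog.
    counts = np.array([[4., 3., 0., 1.],
                       [3., 4., 0., 0.],
                       [0., 0., 5., 2.],
                       [0., 1., 3., 5.]])

    U, S, Vt = np.linalg.svd(counts, full_matrices=False)
    emb = U[:, :2] * S[:2]                    # low-rank "smell embeddings"

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cos(emb[0], emb[1]))   # apple vs pear: high
    print(cos(emb[0], emb[2]))   # apple vs chocolate: much lower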
But I would start with other modalities. [LAUGHTER] All right. About time. OK. Yeah, sorry. So where to next? I think I've already said most of this actually. So one foundation model is going to rule them all. So, I mean, there will be many of these but a lot of them are going to have very similar traits, I think. We're going to be looking at scaling laws and trying to understand really what is the relationship between the different modalities, which one do we want more of, that sort of stuff. We're going to have retrieval augmentation. This thing is going to be really huge if you've heard of RAG, or if you haven't, you should look it up. So all of these parts of these models can also be multimodal. We need way better evaluation and better measurement. We already talked about that, too. And that's all I had. Thank you. [APPLAUSE]
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2021_Lecture_7_Translation_Seq2Seq_Attention.txt
Hello, everyone, and welcome back into week four. So for week four, it's going to come in two halves. So today, I'm going to talk about machine translation related topics. And then in the second half of the week, we take a little bit of a break from learning more and more neural network topics, and talk about final projects, but also some practical tips for building neural network systems. So for today's lecture, this is an important, content-full lecture. So first of all, I'm going to introduce a new task, machine translation. And it turns out that that task is a major use case for a new architectural technique in deep learning that I want to teach you about, which is sequence to sequence models. And so we'll spend a lot of time on those. And then there's a crucial way that's been developed to improve sequence to sequence models, which is the idea of attention. And so that's what I'll talk about in the final part of the class. I'm just checking everyone's keeping up with what's happening. So first of all, assignment 3 is due today. So hopefully you've all got your neural dependency parsers parsing text well. At the same time, assignment 4 is out today. And really today's lecture is the primary content for what you'll be using for building your assignment 4 systems. Switching it up a little: for assignment 4, we give you a mighty two extra days. So you get nine days for it. And it's due on Thursday. On the other hand, do please be aware that assignment 4 is bigger and harder than the previous assignments. So do make sure you get started on it early. And then, as I mentioned, Thursday I'll turn to final projects. OK. So let's get straight into this with machine translation. So very quickly, I wanted to tell you a little bit about where we were and what we did before we get to neural machine translation. And so let's do the prehistory of machine translation. So machine translation is the task of translating a sentence x from one language, which is called the source language, to another language, the target language, forming a sentence y. So we start off with a source language sentence x, "L'homme est né libre, et partout il est dans les fers," and then we translate it and we get out the translation, "Man is born free, but everywhere he is in chains." OK. So there's our machine translation. OK. So in the early 1950s, there started to be work on machine translation. And so it's actually a thing about computer science: if you find things that have "machine" in the name, most of them are old things. And this really kind of came about in the US context in the context of the Cold War. So there was this desire to keep tabs on what the Russians were doing. And people had the idea that because some of the earliest computers had been so successful at doing code breaking during the Second World War, then maybe we could set early computers to work during the Cold War to do translation. And hopefully this will play and you'll be able to hear it. Here's a little video clip showing some of the earliest work in machine translation from 1954. They hadn't reckoned with ambiguity when they set out to use computers to translate languages. A $500,000 simple calculator, most versatile electronic brain known, translates Russian into English. Instead of mathematical wizardry, a sentence in Russian, it could be-- One of the first non-numerical applications of computers, it was hyped as the solution to the Cold War obsession of keeping tabs on what the Russians were doing. Claims were made that the computer would replace most human translators.
Then of course, you're just in the experimental stage. When you go in for full scale production, what will the capacity be? We should be able to do, with a modern commercial computer, about one to two million words an hour. And this will be quite an adequate speed to cope with the whole output of the Soviet Union in just a few hours of computer time a week. When do you hope to be able to achieve this speed? If our experiments go well, then perhaps within five years or so. And finally, Mr. McDaniel, does this mean the end of human translators? I say yes for translators of scientific and technical material. But as regards to poetry and novels, no, I don't think we'll ever replace the translators of that type of material. Mr. McDaniel, thank you very much. But despite the hype it ran into deep trouble. Yeah. So the experiments did not go well. And so in retrospect, it's not very surprising that the early work did not work out very well. I mean, this was right at the beginning of the computer age in the 1950s. It was also just the beginning of people starting to understand the science of human languages, the field of linguistics. So really people had not much understanding of either side of what was happening. So what you had was people trying to write systems on really incredibly primitive computers, right? It's probably the case that now if you have a USB-C power brick, it has more computational capacity inside it than the computers that they were using to translate. And so effectively, what you were getting were very simple rule based systems and word lookup. So it was sort of like a dictionary: look up a word and get its translation. But that just didn't work well, because human languages are much more complex than that. Often words have many meanings and different senses, as we've sort of discussed a bit. Often there are idioms. You need to understand the grammar to rewrite the sentences. So for all sorts of reasons, it didn't work well. And this idea was largely canned. In particular, there was a famous US government report in the mid 1960s, the ALPAC report, which basically concluded this wasn't working. Oops. OK. Work then did revive in AI on doing rule based methods of machine translation in the 90s. But when things really came alive was once you got into the mid 90s, when we were in the period of statistical NLP that we've seen in other places in the course. And then the idea began: can we start with just data about translation, i.e. sentences and their translations, and learn a probabilistic model that can predict the translations of fresh sentences? So suppose we're translating French into English. So what we want to do is build a probabilistic model such that, given a French sentence, we can say, what's the probability of different English translations? And then we'll choose the most likely translation. It was then found felicitous to break this down into two components by just reversing this with Bayes' rule. So if instead we had a probability over English sentences, p of y, and then a probability of a French sentence given an English sentence, people were able to make more progress. And it's not immediately obvious as to why this should help, because this is sort of just a trivial rewrite with Bayes' rule. But it allowed the problem to be separated into two parts, which proved to be more tractable.
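Written out in plain notation, the decomposition being described is: the best translation is the y maximizing P(y | x), and by Bayes' rule that equals the argmax over y of P(x | y) P(y) / P(x), which is just the argmax over y of P(x | y) P(y), since P(x) is fixed for a given source sentence and so doesn't change which y wins.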
So on the left hand side, you effectively had a translation model where you could just give a probability of words or phrases being translated between the two languages without having to bother about the structural word order of the languages. And then on the right hand, you saw precisely what we spent a long time with last week, which is this is just a probabilistic language model. So if we have a very good model of what good fluent English sentences sound like, which we can build just from monolingual data, we can then get it to make sure we're producing sentences that sound good while the translation model hopefully puts the right words into them. So how do we learn the translation model since we haven't covered that? So the starting point was to get a large amount of parallel data, which is human translated sentences. At this point, it's mandatory that I show a picture of the Rosetta Stone which is the famous original piece of parallel data that allowed the decoding of Egyptian hieroglyphs because it had the same piece of text in different languages. In the modern world, there are fortunately for people who build natural language processing systems quite a few places, where parallel data is produced in large quantities. So the European Union produces a huge amount of parallel text across European languages. The French. Sorry. Not the French. The Canadian Parliament conveniently produces parallel text between French and English, and even a limited amount in Inuktitut, Canadian Eskimo. And then the Hong Kong parliament produces English and Chinese. So there's a fair availability from different sources. And we can use that to build models. So how do we do it though? All we have is these sentences. And it's not quite obvious how to build a probabilistic model out of those. Well, as before, what we want to do is break this problem down. So in this case, what we're going to do is introduce an extra variable, which is an alignment variable. So a is the alignment variable, which is going to give a word level or sometimes phrase level correspondence between parts of the source sentence and the target sentence. So this is an example of an alignment. And so if we could induce this alignment between the two sentences, then we can have probabilities of pieces of how likely a word or a short phrase is translated in a particular way. And in general, alignment is working out the correspondence between words that is capturing the grammatical differences between languages. So words will occur in different orders in different languages depending on whether it's a language that puts on the subject before the verb, or the subject after the verb, or the verb before both the subject and the object. And the alignments will also capture something about differences about the ways that work languages do things. So what we find is that we get every possibility of how words can align between languages. So you can have words that don't get translated at all in the other language. So in French, you put a definite article "the" before country names like Japon. So when that gets translated to English, you just get Japan. So there's no translation of the "the". So it just goes away. On the other hand, you can get many to one translations, where one French word gets translated as several English words. So for the last French word, it's been translated as Aboriginal people as multiple words. You can get the reverse, where you can have several French words that get translated as one English word. 
So "mis en application" is getting translated as "implemented." And you can get even more complicated ones. So here we sort of have four English words being translated as two French words. But they don't really break down and translate each other well. I mean, these things don't only happen across languages. They also happen within the language when you have different ways of saying the same thing. So another way you might have expressed "the poor don't have any money" is to say "the poor are moneyless." And that's much more similar to how the French is being rendered here. And so even English to English, you have the same kind of alignment problem. So in probabilistic machine translation, or statistical machine translation as it's more commonly known, what we wanted to do is learn these alignments. And there's a bunch of sources of information you could use. If you start with parallel sentences, you can see how often words and phrases co-occur in parallel sentences. You can look at their positions in the sentence and figure out what are good alignments. But alignments are a categorical thing. They're not probabilistic, and so they are latent variables. And so you need to use special learning algorithms like the expectation maximization algorithm for learning about latent variables. In the olden days of CS224N, before we started doing it all with deep learning, we spent tons of CS224N dealing with latent variable algorithms. But these days, we don't cover that at all. And you're going to have to go off and see CS228 if you want to know more about that. And we're not really expecting you to understand the details here. But I did then want to say a bit more about how decoding was done in a statistical machine translation system. And so what we wanted to do is to say we had a translation model and a language model, and we want to pick out the most likely y, that is, the translation of the sentence. And what kind of process could we use to do that? Well, the naive thing is to say, well, let's just enumerate every possible y and calculate its probability. But we can't possibly do that, because the number of translation sentences in the target language is exponential in the length of the sentence. So that's way too expensive. So we need to have some way to break it down more. Well, we had a simple way for language models: we just generated words one at a time and laid out the sentence. And so that seems a reasonable thing to do. But here we need to deal with the fact that things occur in different orders in source languages and in translations. And so we do want to break it into pieces with an independence assumption like the language model. But then we want a way of breaking things apart and exploring it in what's called a decoding process. So this is the way it was done. So we start with a source sentence. So this is a German sentence. And as is standard in German, you're getting this second-position verb. So that's probably not in the right position for where the English translation is going to be. So we might need to rearrange the words. So what we have, based on the translation model, is words or phrases that are reasonably likely translations of each German word, or sometimes a German phrase. So these are effectively the LEGO pieces out of which we're going to want to create the translation. And so then inside that, making use of this data, we're going to generate the translation piece by piece, kind of like we did with our neural language models. So we're going to start with an empty translation.
And then we're going to say, well, we want to use one of these LEGO pieces. And so we could explore different possible ones. So there's a search process. But one of the possible pieces is we could translate "er" with "he", or we could start the sentence with "are" translating the second word. So we could explore various likely possibilities. And if we're guided by our language model, it's probably much more likely to start the sentence with he than it is to start the sentence with "are" though "are" is not impossible.. OK. And then the other thing we're doing with these little blotches of black up at the top, we're sort of recording which German words we've translated. And so we explore forward in the translation process. And we could decide that we could translate next the second word goes, or we could translate the negation here, and translate that as does not. When we explore various continuations. And in the process, I'll go through in more detail later when we do the neural equivalent. We sort of do this search where we explore likely translations and prune. And eventually, we've translated the whole of the input sentence. And I've worked out a fairly likely translation. He does not go home. And that's what we use as the translation. OK. So in the period from about 1997 to around 2013, statistical machine translation was a huge research field. The best systems were extremely complex. And they had hundreds of details that I certainly haven't mentioned here. The systems have lots of separately designed and built components. So I mentioned language model and the translation model. But they had lots of other components for reordering models, and inflection models, and other things. There was lots of feature engineering. Typically, the models also made use of lots of extra resources. And they were lots of human effort to maintain. But nevertheless, they were already fairly successful. So Google Translate launched in the mid 2000s. And people thought wow, this is amazing. You could start to get sort of semi-decent automatic translations for different web pages. But that was chugging along well enough. And then we got to 2014. And really with enormous suddenness, people then worked out ways of doing machine translation using a large neural network. And these large neural networks proved to be just extremely successful, and largely blew away everything that preceded it. So for the next big part of the lecture, what I'd like to do is tell you something about neural machine translation. Neural machine translation, well, it means you're using a neural network to do machine translation. But in practice, it's meant slightly more than that. It has meant that we're going to build one very large neural network, which completely does translation end to end. So we're going to have a large neural network, we're going to feed in the source sentence into the input. And what's going to come out of the output of the neural network is the translation of the sentence. We're going to train that model end to end on parallel sentences. And it's the entire system rather than being lots of separate components as in an old fashioned machine translation system. And we'll see that in a bit. So these neural network architectures are called sequence to sequence models or commonly abbreviated seq2seq. And they involve two neural networks. Here it says two RNNs. The version I'm presenting now has two RNNs. But more generally, they involve two neural networks. 
There's one neural network that is going to encode the source sentence. So if we have a source sentence here, we are going to encode that sentence. And we know about a way that we can do that. So using the kind of LSTMs that we saw last class, we can start at the beginning and go through a sentence and update the hidden state each time. And that will give us a representation of the content of the source sentence. So that's the first sequence model, which encodes the source sentence. And we'll use the idea that the final hidden state of the encoder RNN is going to, in essence, represent the source sentence. And we're going to feed it in directly as the initial hidden state for the decoder RNN. So then on the other side of the picture, we have our decoder RNN. And it's a language model that's going to generate a target sentence conditioned on the final hidden state of the encoder RNN. So we're going to start with the input of a start symbol. We're going to feed in the hidden state from the encoder RNN. And now this second, green RNN has completely separate parameters, I might just emphasize. But we do the same kind of LSTM computations and generate a first word of the sentence, "he." And so then, doing LSTM generation just like last class, we copy that down as the next input. We run the next step of the LSTM, generate another word here, copy it down, and chug along. And we've translated the sentence, right? So this is showing the test time behavior when we're generating a new sentence. For the training time behavior, when we have parallel sentences, we're still using the same kind of sequence to sequence model. But we're doing it with the decoder part just like training a language model, where we're wanting to do teacher forcing and predict each word that's actually found in the target language sentence. Sequence to sequence models have been an incredibly powerful, widely used workhorse in neural networks for NLP. So although historically, machine translation was the first big use of them, and it's sort of the canonical use, they're used everywhere else as well. So you can do many other NLP tasks with them. So you can do summarization. You can think of text summarization as translating a long text into a short text. But you can use them for other things that are in no way a translation whatsoever. So they're commonly used for neural dialogue systems. So the encoder will encode the previous two utterances, say. And then you will use the decoder to generate a next utterance. Some other uses are even freakier but have proven to be quite successful. So if you have any way of representing the parse of a sentence as a string, and if you sort of think a little, it's fairly obvious how you can turn the parse of a sentence into a string by just making use of extra syntax like parentheses, or putting in explicit words that say left-arc, right-arc, shift, like the transition system that you used for assignment 3. Well, then we could say, let's use this architecture: feed the input sentence to the encoder and let it output the transition sequence of our dependency parser. And somewhat surprisingly, that actually works well as another way to build a dependency parser, or other kinds of parsers. These models have also been applied not just to natural languages, but to other kinds of languages, including music, and also programming language code. So you can train a seq2seq system where it reads in pseudocode in natural language, and it generates out Python code.
And if you have a good enough one, it can do the assignment for you. So this central new idea here with our sequence to sequence models is we have an example of conditional language models. So previously, the main thing we were doing was just to start at the beginning of the sentence and generate a sentence based on nothing. But here we have something that is going to determine or partially determine that is going to condition what we should produce. So we have a source sentence. And that's going to strongly determine what is a good translation. And so to achieve that, what we're going to do is have some way of transferring information about the source sentence from the encoder to trigger what the decoder should do. And the two standard ways of doing that are you either feed in a hidden state as the initial hidden state to the decoder, or sometimes you will feed something in as the initial input to the decoder. And so in neural machine translation we are directly calculating this conditional model probability of target language sentence given source language sentence. And so at each step, as we break down the word by word generation, that we're conditioning not only on previous words of the target language, but also each time on our source language sentence x. Because of this, we actually know a ton more about what our sentence that we generate should be. So if you look at the perplexities of these kind of conditional language models, you will find them like the numbers I showed last time. They usually have almost freakily low perplexities, that you will have models with perplexities that are something like 4 or even less, sometimes 2.5 because you get a lot of information about what words you should be generating. OK. So then we have the same questions as we had for language models in general. How to train a neural machine translation system and then how to use it at runtime? So let's go through both of those in a bit more detail. So the first step is we get a large parallel corpus. So we run off to the European Union, for example. And we grab a lot of parallel English French data from the European parliament proceedings. So then once we have our parallel sentences, what we're going to do is take batches of source sentences and target sentences. We'll encode the source sentence with our encoder LSTM. We'll feed its final hidden state into a target LSTM. And this one, we are now then going to train word by word by comparing what it predicts is the most likely word to be produced, versus what the actual first word, and then the actual second word is. And to the extent that we get it wrong, we're going to suffer some loss. So this is going to be the negative log probability of generating the correct next word "he" and so on along the sentence. And so in the same way that we saw last time for language models, we can work out our overall loss for the sentence doing this teacher forcing style, generate one word at a time, calculate a loss relative to the word that you should have produced. And so that loss then gives us information that we can backpropagate through the entire network. And the crucial thing about these sequence to sequence models that has made them extremely successful in practice is that the entire thing is optimized as a single system end to end. So starting with our final loss, we backpropagate it right through the system. 
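As a minimal, end-to-end illustration of what was just described, here is a toy PyTorch sketch of the whole pipeline: an encoder LSTM whose final state initializes a decoder LSTM, trained with a teacher-forcing cross-entropy loss. The vocabulary sizes, dimensions, single layer, and random data are placeholders, not a real NMT system.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Seq2Seq(nn.Module):
        def __init__(self, src_vocab=5000, tgt_vocab=5000, dim=256):
            super().__init__()
            self.src_embed = nn.Embedding(src_vocab, dim)
            self.tgt_embed = nn.Embedding(tgt_vocab, dim)
            self.encoder = nn.LSTM(dim, dim, batch_first=True)
            self.decoder = nn.LSTM(dim, dim, batch_first=True)   # completely separate parameters
            self.out = nn.Linear(dim, tgt_vocab)

        def forward(self, src_ids, tgt_prefix_ids):
            # Encode the source; its final states condition the decoder.
            _, (h, c) = self.encoder(self.src_embed(src_ids))
            dec_states, _ = self.decoder(self.tgt_embed(tgt_prefix_ids), (h, c))
            return self.out(dec_states)        # per-step scores over the target vocabulary

    model = Seq2Seq()
    src = torch.randint(0, 5000, (2, 7))       # a batch of two fake source sentences
    tgt = torch.randint(0, 5000, (2, 9))       # their fake gold translations

    # Teacher forcing: feed in the gold prefix, score the prediction of each next gold word.
    logits = model(src, tgt[:, :-1])
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
    loss.backward()                            # gradients reach the decoder *and* the encoder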
So this moment is a good moment for me to return to the three slides that I skipped when I was running out of time at the end of last time, which is to mention multilayer RNNs. So the RNNs that we've looked at so far are already deep in one dimension, in that they unroll horizontally over many time steps. But they've been shallow, in that there's just been a single layer of recurrent structure above our sentences. We can also make them deep in the other dimension by applying multiple RNNs on top of each other. And this gives us a multilayer RNN, often also called a stacked RNN. And having a multilayer RNN allows the network to compute more complex representations. So simply put, the lower RNNs tend to compute lower level features, and the higher RNNs should compute higher level features. And just like in other neural networks, whether it's feed forward networks or the kind of networks you see in vision systems, you get much greater power and success by stacking multiple layers of recurrent neural networks, right? You might think that, oh, there are two things I could do. I could have a single LSTM with a hidden state of dimension 2000, or I could have four layers of LSTMs with a hidden state of 500 each. And it shouldn't make any difference because I've got roughly the same number of parameters. But that's not true. In practice, it does make a big difference, and multilayer or stacked RNNs are more powerful. Can I ask you, there's a good student question here? What would lower level versus higher level features mean in this context? Sure. Yeah. So I mean, in some sense, these are somewhat flimsy terms. The meaning isn't precise. But typically, what that's meaning is that lower level features means knowing sort of more basic things about words and phrases. So that commonly might be things like what part of speech is this word, or are these words the name of a person, or the name of a company? Whereas higher level features refer to things that are at a higher semantic level. So knowing more about the overall structure of a sentence, knowing something about what it means, whether a phrase has positive or negative connotations, what its semantics are when you put together several words into an idiomatic phrase, roughly the higher level kinds of things. OK. Jump ahead. OK. So when we build one of these end to end neural machine translation systems, if we want them to work well, single-layer LSTM encoder-decoder neural machine translation systems just don't work well. But you can build something that is no more complex than the model that I've just explained now that does work pretty well, by making it a multi-layer stacked LSTM neural machine translation system. So the picture looks like this. So we've got this multilayer LSTM that's going through the source sentence. And so now, at each point in time, we calculate a new hidden representation that, rather than stopping there, we sort of feed as the input into another layer of LSTM, and we calculate in the standard way its new hidden representation. And the output of that, we feed into a third layer of LSTM. And so we run that right along. And so our representation of the source sentence from our encoder is then this stack of three hidden layers, which we then feed in as the initial hidden state for generating translations, or for training the model by comparing the losses.
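In PyTorch this stacking is just the num_layers argument on the LSTM module. The sketch below illustrates the rough comparison made above (one wide layer versus a stack); the specific sizes are only for illustration.

```python
import torch.nn as nn

# One wide recurrent layer versus a stack of four narrower ones.
wide_lstm    = nn.LSTM(input_size=256, hidden_size=2000, num_layers=1, batch_first=True)
stacked_lstm = nn.LSTM(input_size=256, hidden_size=500,  num_layers=4, batch_first=True)

# In the stacked case, each layer's output is fed as the input to the next layer
# at every time step, and the returned h_n has shape (num_layers, batch, hidden).
```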
So this is what the picture of an LSTM encoder-decoder neural machine translation system really looks like. So in particular, to give you some idea of that, a 2017 paper by Denny Britz and others found that for the encoder RNN, it worked best if it had two to four layers, and four layers was best for the decoder RNN. And the details here, like for a lot of neural nets, depend so much on what you're doing, and how much data you have, and things like that. But as rules of thumb to have in your head, it's almost invariably the case that having a two layer LSTM works a lot better than having a one layer LSTM. After that, things become much less clear. It's not so infrequent that if you try three layers, it's a fraction better than two. But not really. And if you try four layers, it's actually getting worse again. It depends on how much data, et cetera, you have. At any rate, it's normally very hard with the model architecture that I just showed back here to get better results with more than four layers of LSTM. Normally, to do deeper LSTM models and get even better results, you have to be adding extra skip connections of the kind that I talked about at the very end of the last class. Next week, John is going to talk about transformer based networks. In contrast, for fairly fundamental reasons, they're typically much deeper. But we'll leave discussing them until we get on further. So that was how we train the model. So let's just go a bit more through what the possibilities are for decoding and explore a more complex form of decoding than we've looked at. The simplest way to decode is the one that we've presented so far. So we have our LSTM, we start, generate a hidden state. It has a probability distribution over words, and you choose the most probable one, the argmax. And you say "he," and you copy it down and you repeat over. So doing this is referred to as greedy decoding: taking the most probable word on each step. And it's sort of the obvious thing to do, and doesn't seem like it could be a bad thing to do. But it turns out that it actually can be a fairly problematic thing to do. And the idea of that is that with greedy decoding, you're taking locally what seems the best choice, and then you're stuck with it. And you have no way to undo decisions. These examples have been using this sentence about "he hit me with a pie," translating from French to English. So if you start off, then you say, OK, "il," the first word in the translation, should be he. That looks good. And then you say, well, hit, I'll generate hit. Then somehow the model thinks that the most likely next word after hit is "a." And there are lots of reasons it could think so, because after hit, most commonly, there's a direct object noun: he hit a car, he hit a roadblock, right? So that sounds pretty likely. But once you've generated it, there's no way to go backwards. And so you just have to keep on going from there and you may not be able to generate the translation you want. At best you can generate, he hit a pie, or something. So we'd like to be able to explore a bit more in generating our translations.
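Before moving on, here is roughly what that greedy decoding loop looks like, written against the hypothetical SimpleSeq2Seq sketch from earlier; bos_id and eos_id are assumed start and end token indices.

```python
import torch

def greedy_decode(model, src_ids, bos_id, eos_id, max_len=50):
    """Pick the single most probable word at every step (greedy decoding)."""
    _, state = model.encoder(model.src_emb(src_ids))   # condition on the source
    ys = [bos_id]
    for _ in range(max_len):
        inp = model.tgt_emb(torch.tensor([[ys[-1]]]))
        out, state = model.decoder(inp, state)
        next_id = model.out(out[:, -1]).argmax(dim=-1).item()  # argmax = greedy
        ys.append(next_id)
        if next_id == eos_id:      # once a word is chosen, there is no undoing it
            break
    return ys
```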
And well, what could we do? I sort of mentioned this before when looking at the statistical MT models. Overall, what we'd like to do is find translations that maximize the probability of y given x, at least if we know what the length of that translation is. We can do that as a product of generating a word at a time. And so to have a full model, we also have to have a probability distribution over how long the translation length would be. So we could say this is the model, and let's generate and score all possible sequences y using this model. But that then requires generating an exponential number of translations, and it's far, far too expensive. So beyond greedy decoding, the most important method that is used, and that you'll see in lots of places, is something called beam search decoding. And this isn't a method specific to machine translation: neural machine translation, well, any kind of machine translation, is one place where it's commonly used, but you find it in lots of other places, including all other kinds of sequence to sequence models. It's not the only other decoding method. When we get on to the language generation class, we'll see a couple more. But this is sort of the next one that you should know about. So beam search's idea is that you're going to keep multiple hypotheses to make it more likely that you'll find a good generation while keeping the search tractable. So what we do is choose a beam size, and for neural MT, the beam size is normally fairly small, something like 5 to 10. And at each step of the decoder, we're going to keep track of the k most probable partial translations, so initial subsequences of what we're generating, which we call hypotheses. So a hypothesis, which is then sort of the prefix of a translation, has a score which is its log probability up to what's been generated so far. So we can generate that in the typical way using our conditional language model. As written, all of the scores are negative, and so the least negative one, i.e., the highest probability one, is the best one. So what we want to do is search for high probability hypotheses. So this is a heuristic method. It's not guaranteed to find the highest probability decoding. But at least, it gives you more of a shot than simply doing greedy decoding. So let's go through an example to see how it works. So in this case, so I can fit it on a slide, the size of our beam is just 2, though normally it would actually be a bit bigger than that. And the blue numbers are the scores of the prefixes. So these are the log probabilities of a prefix. So we start off with our start symbol. And we're going to say, OK, what are the two most likely words to generate first according to our language model? And so maybe the first two most likely words are he and I, and there are their log probabilities. Then what we do next is, for each of these k hypotheses, we find what are likely words to follow them. In particular, we find what are the k most likely words to follow each of those. So we might generate he hit, he struck, I was, I got. OK. So at this point, it sort of looks like we're heading down what will turn into an exponential size tree structure again. But what we do now is we work out the scores of each of these partial hypotheses. So we have four partial hypotheses: he hit, he struck, I was, I got. And we can do that by taking the previous score that we have for the partial hypothesis and adding on the log probability of generating the next word, here "hit" after "he." So this gives the scores for each hypothesis. And then we can ask which two of those partial hypotheses,
because our beam size k equals 2, have the highest score? And so they are "I was" and "he hit." So we keep those two and ignore the rest. And so then for those two, we are going to generate k hypotheses for the most likely following word: he hit a, he hit me, I was hit, I was struck. And again, now, we want to find the k most likely hypotheses out of this full set. And so that's going to be he struck me and he hit a. So we keep just those ones. And then for each of those, we generate the k most likely next words: tart, pie, with, on. And then again, we filter back down to size k by saying, OK, the two most likely things here are pie or with. So we continue working on those: generate things, find the two most likely, generate things, find the two most likely. And at this point, we would generate end of string and say, OK, we've got a complete hypothesis: he struck me with a pie. And we could then trace back through the tree to obtain the full hypothesis for this sentence. So that's most of the algorithm. There's one more detail, which is the stopping criterion. So in greedy decoding, we usually decode until the model produces an end token, and when it produces the end token, we say we are done. In beam search decoding, different hypotheses may produce end tokens on different time steps. And so we don't want to stop as soon as one path through the search tree has generated end, because it could turn out there's a different path through the search tree which will still prove to be better. So what we do is sort of put it aside as a complete hypothesis and continue exploring other hypotheses via our beam search. And so usually, we will then either stop when we've hit a cutoff length, or when we've completed n complete hypotheses. And then we'll look through the hypotheses that we've completed and say which is the best one of those, and that's the one we'll use. OK. So at that point, we have our list of completed hypotheses, and we want to select the top one with the highest score. Well, that's exactly what we've been computing. Each one has a probability that we've worked out. But it turns out that we might not want to use that quite so naively, because there turns out to be a kind of systematic problem, which is, not as a theorem but in general, that longer hypotheses have lower scores. So if you think about this as probabilities of successively generating each word, basically at each step you're multiplying by another chance-of-generating-the-next-word probability, and commonly those might be 10 to the minus 3, 10 to the minus 2. So just from the length of the sentence, your probabilities are getting much lower the longer that they go on, in a way that appears to be unfair, since although in some sense extremely long sentences aren't as likely as short ones, they're not less likely by that much. A lot of the time we produce long sentences. So for example, in a newspaper, the median length of sentences is over 20. So you wouldn't want to be having a decoding model when translating news articles that says, huh, just generate two word sentences, they're just way higher probability according to my language model. So the commonest way of dealing with that is that we normalize by length. So if we're working in log probabilities, that means dividing through by the length of the sentence, and then you have a per word log probability score. And you can argue that this isn't quite right in some theoretical sense, but in practice it works pretty well and it's very commonly used.
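The sketch below is one way to write that beam search loop, again against the hypothetical SimpleSeq2Seq model from earlier. It is a simplification of real implementations (no batching, and the beam can shrink as hypotheses finish), but it shows the core idea: expand each hypothesis, prune back to the k best, set aside finished hypotheses, and length-normalize the final scores.

```python
import torch

def beam_search(model, src_ids, bos_id, eos_id, k=5, max_len=50):
    """Keep the k highest-scoring partial translations at each step."""
    _, state = model.encoder(model.src_emb(src_ids))
    beams = [([bos_id], 0.0, state)]       # (tokens, sum of log-probs, LSTM state)
    finished = []
    for _ in range(max_len):
        candidates = []
        for tokens, score, st in beams:
            inp = model.tgt_emb(torch.tensor([[tokens[-1]]]))
            out, new_st = model.decoder(inp, st)
            log_probs = model.out(out[:, -1]).log_softmax(dim=-1).squeeze(0)
            top_lp, top_id = log_probs.topk(k)          # k continuations per hypothesis
            for lp, idx in zip(top_lp.tolist(), top_id.tolist()):
                candidates.append((tokens + [idx], score + lp, new_st))
        # Prune back down to the k best partial hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for tokens, score, st in candidates[:k]:
            if tokens[-1] == eos_id:
                finished.append((tokens, score))        # set aside, keep exploring
            else:
                beams.append((tokens, score, st))
        if not beams:
            break
    # Length-normalize so longer translations aren't unfairly penalized.
    pool = finished or [(tokens, score) for tokens, score, _ in beams]
    return max(pool, key=lambda f: f[1] / len(f[0]))
```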
OK. So, neural machine translation has proven to be much, much better. I'll show you a couple of statistics about that in a moment. It has many advantages. It gives better performance. The translations are better. In particular, they're more fluent, because neural language models produce much more fluent sentences. But also, they make much better use of context, because neural language models, including conditional neural language models, give us a very good way of conditioning on a lot of context. In particular, we can just run a long encoder and condition on the previous sentence, or we can translate words well in context by making use of neural context. Neural models better understand phrase similarities and phrases that mean approximately the same thing. And then the technique of optimizing all parameters of the model end to end in a single large neural network has just proved to be a really powerful idea. So previously, a lot of the time, people were building separate components and tuning them individually, which just meant that they weren't actually optimal when put into a much bigger system. So really a hugely powerful guiding idea in neural network land is that if you can sort of build one huge network and just optimize the entire thing end to end, that will give you much better performance than component-wise systems. We'll come back to the costs of that later in the course. The models are also actually great in other ways. They actually require much less human effort to build. There's no feature engineering. There are, in general, no language-specific components. You're using the same method for all language pairs. Of course, it's rare for things to be perfect in every way. So neural machine translation systems also have some disadvantages compared to the older statistical machine translation systems. They're less interpretable. It's harder to see why they're doing what they're doing, where before you could actually look at phrase tables and they were useful. So they're hard to debug. They also tend to be sort of difficult to control. So compared to anything like writing rules, you can't really give much of a specification, as if you'd like to say, I'd like my translations to be more casual, or something like that. It's hard to know what they'll generate, so there are various safety concerns. I'll show a few examples of that in just a minute. But first, before doing that, quickly, how do we evaluate machine translation? The best way to evaluate machine translation is to show a human being who's fluent in the source and target languages the sentences, and get them to give a judgment on how good a translation it is. But that's expensive to do, and might not even be possible if you don't have the right human beings around. So a lot of work was put into finding automatic methods of scoring translations that were good enough. And the most famous method of doing that is what's called BLEU. And the way you do BLEU is you have a human translation, or several human translations, of the source sentence, and you're comparing a machine generated translation to those pre-given human written translations. And you score them for similarity by calculating n-gram precisions, i.e., words that overlap between the computer and human written translations, and likewise bigrams, trigrams, and 4-grams, and then working out a geometric average of the overlaps of n-grams, plus there's a penalty for too-short system translations.
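As a quick illustration of what that computation looks like, here is one way to get a BLEU-style score for a single sentence using NLTK; this is not the exact scorer used in the assignment, and the tokenized example sentences are made up, but it shows the ingredients: n-gram precisions combined geometrically, with a brevity penalty, usually reported on a 0 to 100 scale.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference  = ["he", "hit", "me", "with", "a", "pie"]       # human translation(s)
hypothesis = ["he", "struck", "me", "with", "a", "pie"]    # system output

# Geometric mean of 1- to 4-gram precisions plus a brevity penalty;
# NLTK returns a value in [0, 1], conventionally scaled to [0, 100].
score = sentence_bleu([reference], hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(round(100 * score, 1))
```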
So BLEU has proven to be a really useful measure, but it's an imperfect measure. Commonly, there are many valid ways to translate a sentence, and so there's some luck as to whether the human written translations you have happen to correspond to what might be a good translation from the system. There's more to say about the details of BLEU and how it's implemented. You're going to see all of that during assignment 4, because you will be building neural machine translation systems and evaluating them with a BLEU algorithm, and there are full details about BLEU in the assignment handout. But at the end of the day, BLEU gives a score between 0 and 100, where your score is 100 if you are exactly producing one of the human written translations, and 0 if there's not even a single unigram that overlaps between the two. With that rather brief intro, I wanted to show you sort of what happened in machine translation. So machine translation with statistical models, the phrase-based statistical machine translation that I showed at the beginning of the class, had been going on since the mid 2000s decade. And it had produced sort of semi-good results of the kind that were in Google Translate in those days. But by the time you entered the 2010s, basically progress in statistical machine translation had stalled, and you were getting barely any increase over time. And most of the increase you were getting over time was simply because you were training your models on more data. Someone asked what the y-axis here is: this y-axis is the BLEU score that I told you about on the previous slide. In those years, around the early 2010s, the big hope that most people in the machine translation field had was, well, if we build a more complex kind of machine translation model that knows about the syntactic structure of languages, that makes use of tools like dependency parsers, we'll be able to build much better translations. And so those are the purple systems here, which I haven't described at all. But as the years went by, it was pretty obvious that that barely seemed to help. And so then in the mid 2010s, in 2014, there was the first modern attempt to build a neural machine translation system, an encoder-decoder model. And by the time it was sort of evaluated in bake-offs in 2015, it wasn't as good as what had been built up over the preceding decade. But it was already getting pretty good. And what was found was that these new models just really opened up a whole new pathway to start building much, much better machine translation systems. And since then, things have just sort of taken off, and year by year, neural machine translation systems are getting much better and far better than anything we had preceding that. So for at least the early part of the application of deep learning to natural language processing, neural machine translation was the big success story. In the last few years, when we've had models like GPT-2 and GPT-3, and other huge neural models like BERT improving web search, it's a bit more complex. But this was the first area where there was a neural network which was hugely better than what had preceded it, and was actually solving a practical problem that lots of people in the world need solved. And it was stunning, the speed at which success was achieved. So 2014 saw the first of what I call here fringe research attempts to build a neural machine translation system.
Meaning that three or four people who were working on neural network models thought, oh, why don't we see if we can use one of these to learn to translate sentences, and these weren't really people with a background in machine translation at all. But success was achieved so quickly that within two years' time, Google had switched to using neural machine translation for most languages. And a couple of years after that, essentially anybody who does machine translation is now deploying live neural machine translation systems and getting much, much better results. So that was sort of just an amazing technological transition, in that for the preceding decade, the big statistical machine translation systems like the previous generation of Google Translate had literally been built up by hundreds of engineers over the years. But a comparatively small group of deep learning people, in a few months with a small amount of code (and hopefully, you'll even get a sense of this doing assignment 4), were able to build neural machine translation systems that proved to work much better. Does that mean that machine translation is solved? No. There are still lots of difficulties which people continue to work on very actively, and you can see more about it in the Skynet Today article that's linked at the bottom. There are lots of problems with out of vocabulary words. There are domain mismatches between the training and test data: so it might be trained mainly on newswire data, but you want to translate people's Facebook messages. There are still problems of maintaining context over longer text. We'd like to translate languages for which we don't have much data, and these methods work by far the best when we have huge amounts of parallel data. Even our best multilayer LSTMs aren't that great at capturing sentence meaning. There are particular problems such as interpreting what pronouns refer to, or, in languages like Chinese or Japanese where there's often no pronoun present but there is an implied reference to some person, working out how to translate that. For languages that have lots of inflectional forms of nouns, verbs, and adjectives, these systems often get them wrong. So there's still tons of stuff to do. So here are just some quick funny examples of the kind of things that go wrong, right? So if you ask it to translate paper jam, Google Translate decides that this is a kind of jam just like raspberry jam and strawberry jam, and so this becomes a jam of paper. There are problems of agreement and choice. Many languages don't distinguish gender, and so the sentences are neutral between masculine or feminine; Malay or Turkish are two well known languages of that sort. But what happens when that gets translated into English by Google Translate is that the English language model just kicks in and applies stereotypical biases. And so these gender neutral sentences get translated into, she works as a nurse, he works as a programmer. So if you want to help solve this problem, all of you can help by using singular they in all contexts when you're putting material online, and that could then change the distribution of what's generated. And people also work on modeling improvements to try and avoid this. Here's one more example that's kind of funny, which people noticed a couple of years ago: if you choose one of the rarer languages that Google will translate, such as Somali, and you just write in some rubbish like ag ag ag ag.
Freakily, it produced out of nowhere prophetic and biblical text: "as the name of the Lord was written in the Hebrew language, it was written in the language of the Hebrew nation," which makes no sense at all. Well, we're about to see a bit more about why this happens. But that was a bit worrying. As far as I can see, this problem is now fixed in 2021; I couldn't actually get Google Translate to generate examples like this anymore. So there are lots of ways to keep on doing research. NMT certainly is a flagship task for NLP and deep learning, and it was a place where many of the innovations of deep learning NLP were pioneered, and people continue to work hard on it and have found many, many improvements. And actually, for the last bit of the class, in a minute I'm going to present one huge improvement, which is so important that it's really come to dominate the whole of the recent field of neural networks for NLP. And that's the idea of attention. But before I get onto attention, I want to spend three minutes on our assignment 4. So for assignment 4 this year, we've got a new version of the assignment, which we hope will be interesting. But it's also a real challenge. So for assignment 4 this year, we've decided to do Cherokee-English machine translation. So Cherokee is an endangered Native American language that has about 2,000 fluent speakers. It's an extremely low resource language. There just isn't much written Cherokee data available, period. And particularly, there's not a lot of parallel sentences between Cherokee and English. And here's the answer to Google's freaky prophetic translations: for languages for which there isn't much parallel data available, commonly the biggest place where you can get parallel data is from Bible translations. So you can have your own personal choice, wherever it is over the map, as to where you stand with respect to religion. But the fact of the matter is, if you work on indigenous languages, what you very, very quickly find is that a lot of the work that's done on collecting data on indigenous languages, and a lot of the material that is available in written form for many indigenous languages, is Bible translations. Yeah. OK. So this is what Cherokee looks like. And so you can see that the writing system has a mixture of things that look like English letters and then all sorts of letters that don't. And so here's the initial bit of a story: long ago there were seven boys who used to spend all their time down by the townhouse. So this is a piece of parallel data that we can learn from. So the Cherokee writing system has 85 letters. And the reason why it has so many letters is that each of these letters actually represents a syllable. So many languages of the world have a strict consonant-vowel syllable structure, so you have words like the ones right up here for Cherokee, right? And another language like that is Hawaiian. And so each of the letters represents a combination of a consonant and a vowel, and that's the set of those: you then have 17 by 5, which gives you 85 letters. Yeah, so for being able to do this assignment, big thanks to the people from the University of North Carolina, Chapel Hill, who've provided the resources we're using for this assignment. Although you can do quite a lot of languages on Google Translate, Cherokee is not a language that Google offers on Google Translate. So we can see how far we can get.
But we have to be modest in our expectations, because it's hard to build a very good MT system with only a fairly limited amount of data. So we'll see how far we can get. There is a flip side, which is, for you students doing the assignment, the advantage of having not too much data is that your models will train relatively quickly. So we'll actually have less trouble than we did last year with people's models taking hours to train as the assignment deadline closed in. There are a couple more words about Cherokee, so we have some idea what we're talking about. So the Cherokee originally lived in western North Carolina and eastern Tennessee. They then sort of got shunted southwest from that. And then in particular, for those of you who went to American high schools and paid attention, you might remember discussion of the Trail of Tears, when a lot of the Native Americans from the southeast of the US got forcibly shoved a long way further west. And so most Cherokee now live in Oklahoma; there are some that are in North Carolina. The writing system that I showed on the previous slide was invented by a Cherokee man, Sequoyah; that's a drawing of him there. And that was actually kind of an incredible thing. So he started off illiterate and worked out how to produce a writing system that would be good for Cherokee. And given that it has this consonant-vowel structure, he chose a syllabary, which turned out to be a good choice. So here's a neat historical fact: in the 1830s and 1840s, the percentage of Cherokee that were literate in Cherokee written like this was actually higher than the percentage of white people in the southeastern United States who were literate at that point in time. OK. Before time disappears, oops, time has almost disappeared, so we'll have to do a bit more of this next time. That'll be OK, right? So the final idea that's really important for sequence to sequence models is the idea of attention. And so we had this model of doing sequence to sequence models, such as for neural machine translation. And the problem with this architecture is that we have this one hidden state which has to encode all the information about the source sentence. So it acts as a kind of information bottleneck, and that's all the information that the generation is conditioned on. Well, I did already mention one idea last time of how to get more information, where I said, look, maybe you could kind of average all of the vectors of the source to get a sentence representation. But that method turns out to be better for things like sentiment analysis, and not so good for machine translation, where the order of words is very important to preserve. So it seems like we would do better if, somehow, we could get more information from the source sentence while we're generating the translation. And in some sense, this just corresponds to what a human translator does, right? If you're a human translator, you read the sentence that you're meant to translate, and you maybe start translating a few words, but then you look back at the source sentence to see what else was in it and translate some more words. So very quickly after the first neural machine translation systems, people came up with the idea of maybe we could build a better neural MT system that did that. And that's the idea of attention.
So the core idea is that on each step of the decoder, we're going to use a direct link between the encoder and the decoder that will allow us to focus on a particular word or words in the source sequence and use it to help us generate what words come next. I'll just go through now showing you the pictures of what attention does, and then at the start of next time we'll go through the equations in more detail. So we use our encoder just as before and generate our representations, feed in our conditioning as before, and say we're starting our translation. But at this point, we take this hidden representation and say, I'm going to use this hidden representation to look back at the source to get information directly from it. So what I will do is I will compare the hidden state of the decoder with the hidden state of the encoder at each position and generate an attention score, which is a kind of similarity score, like a dot product. And then based on those attention scores, I'm going to calculate a probability distribution, by using a softmax as usual, to say which of these encoder states is most like my decoder state. And so we'll be training the model here to be saying, well, probably you should translate the first word of the sentence first, so that's where the attention should be placed. So then based on this attention distribution, which is a probability distribution coming out of the softmax, we're going to generate a new attention output. And so this attention output is going to be an average of the hidden states of the encoder model; that is, a weighted average based on our attention distribution. And so we then kind of take that attention output, combine it with the hidden state of the decoder RNN, and together the two of them are then going to be used to predict, via a softmax, what word to generate first, and we hope to generate he. And then at that point, we sort of chug along and keep doing the same kind of computations at each position. There's a little side note here that says sometimes we take the attention output from the previous step and also feed it into the decoder along with the usual decoder input. So we're taking this attention output and actually feeding it back in to the hidden state calculation. And that can sometimes improve performance. And we actually have that trick in the assignment 4 system, and you can try it out. OK. So we generate along and generate our whole sentence in this manner. And that's proven to be a very effective way of getting more information from the source sentence more flexibly, to allow us to generate a good translation. I'll stop here for now, and at the start of next time I'll finish this off by going through the actual equations for how attention works.
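For reference before next time, here is a rough sketch of the dot-product attention step just described. The function name and tensor shapes are illustrative assumptions, not the assignment's exact code: scores compare the decoder state to every encoder state, a softmax turns them into the attention distribution, and the attention output is the corresponding weighted average of encoder states.

```python
import torch
import torch.nn.functional as F

def dot_product_attention(dec_hidden, enc_hiddens):
    """dec_hidden: (batch, hid); enc_hiddens: (batch, src_len, hid).
    Returns the attention output and the attention distribution."""
    # Attention scores: similarity between the decoder state and each encoder state.
    scores = torch.bmm(enc_hiddens, dec_hidden.unsqueeze(2)).squeeze(2)   # (batch, src_len)
    # Softmax turns the scores into a probability distribution over source positions.
    alpha = F.softmax(scores, dim=-1)
    # Attention output: weighted average of the encoder hidden states.
    attn_out = torch.bmm(alpha.unsqueeze(1), enc_hiddens).squeeze(1)      # (batch, hid)
    return attn_out, alpha

# The attention output is then combined (e.g. concatenated) with the decoder
# hidden state before the softmax that predicts the next word.
```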
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_2023_Hugging_Face_Tutorial_Eric_Frankel.txt
SPEAKER 1: Hi, everyone. Welcome to the 224N Hugging Face Transformers tutorial. So this tutorial is just going to be about using the Hugging Face library. It's really useful and it's a super effective way of being able to use some off-the-shelf NLP models, specifically, models that are kind of transformer-based. And being able to use those for either your final project, your custom final project, or something like that. Just using it in the future. So these are-- it's a really helpful package to learn. And it interfaces really well with PyTorch in particular too. OK, so first things first is in case there's anything else that you are missing from this kind of tutorial, the Hugging Face documentation is really good. They also have lots of tutorials and walkthroughs as well as other kind of like notebooks that you can play around with as well. So if you're ever wondering about something else, that's a really good place to look. OK, so in the Colab, the first thing we're going to do that I already did, but can maybe run again, is just installing the Transformers Python package and then the data sets Python package. So this corresponds to the Hugging Face Transformers and data sets. And so those are really helpful. The Transformers is where we'll get a lot of these kind of pre-trained models from. And the data sets will give us some helpful data sets that we can potentially use for various tasks. So, in this case, sentiment analysis. OK, and so we'll use a bit of a helper function for helping us understand what encoding is-- what encodings are actually happening as well. So we'll run this just to kind of kick things off and import a few more things. OK, so first, what we'll do is this is generally kind of like the step-by-step for how to use something off of Hugging Face. So first what we'll do is we'll find some model from the Hugging Face Hub here. And note that there's a ton of different models that you're able to use. There's BERT, there's GPT-2, there's t5-small, which is another language model from Google. So there are a bunch of these different models that are pre-trained and all of these weights are up here in Hugging Face that are freely available for you guys to download. So if there's a particular model you're interested in, you can probably find a version of it here. You can also see different types of models on the side as well that-- for a specific task. So if we wanted to do something like zero shot classification, there are a couple models that are specifically good at doing that particular task. So based off of what task you're looking for, there's probably a Hugging Face model for it that's available online for you to download. OK, so that's what we'll do first is we'll go ahead and find a model in the Hugging Face Hub. And then, whatever you want to do, in this case, we'll do sentiment analysis. And then, there are two things that we need next. The first is a tokenizer for actually splitting your input text into tokens that your model can use and the actual model itself. And so the tokenizer, again, converts this to some vocabulary IDs. These discrete IDs that your model can actually take in. And the model will produce some prediction based off of that. OK, so first, what we can do is, again, import this auto tokenizer and this AutoModel from-- for sequence classification. So what this will do, initially, is download some of the key things that we need so that we can actually initialize these. So what do each of these do? 
So first, the tokenizer, this auto tokenizer, is from some pre-trained tokenizer that has already been used. So in general, there's a corresponding tokenizer for every model that you want to try and use. In this case, it's like SiEBERT, so like something around Sentiment in RoBERTa. And then, the second is you can import this model for sequence classification as well from something pre-trained on the model hub again. So again, this corresponds to Sentiment, RoBERTa, Large English. And if we want, we can even find this over here. We can find it as I think English. Yeah, large English. So again, this is something we can easily find. You just copy this string up here and then you can import that. OK, we've downloaded all the things that we need, some kind of binary files as well. And then, now, we can go ahead and actually use some of these inputs, right? So this gives you some set of an input, right? This input string, I'm excited to learn about Hugging Face Transformers. We'll get some tokenized inputs here from the actual tokenized things here after we pass it through the tokenizer. And then, lastly, we'll get some notion of the model output that we get, right? So this is kind of some logits here over whatever classification that we have. So in this case, good or bad. And then, some corresponding prediction. And we'll walk through what this kind of looks like in just a second as well in a little more depth. But this is broadly how we can actually use these together. We'll tokenize some input. And then, we'll pass these inputs through the model. So we'll talk about tokenizers first. So tokenizers are used for, basically, just pre-processing the inputs that you get for any model. And it takes some raw string to like-- essentially a mapping to some number or ID that the model can take in and actually understand. So tokenizers are either kind of like-- are specific to the model that you want to use, or you can use the auto tokenizer that will kind of conveniently import whatever corresponding tokenizer you need for that model type. So that's kind of like the helpfulness of the auto tokenizer. It'll kind of make that selection for you and make sure that you get the correct tokenizer for whatever model you're using. So the question is, does it, make sure that everything is mapped to the correct index that the model is trained on? The answer is yes. So that's why the auto tokenizer is helpful. So there are two types of tokenizers. There's the Python tokenizer. And there's also a tokenizer fast. The tokenizer fast is written in Rust. In general, if you do the auto tokenizer, it will just default to the fast one. There's not really a huge difference here. It's just about the inference time for getting the model outputs. Yeah, so the question is the tokenizer creates dictionaries of the model inputs. So it's more like-- I think the way to think about a tokenizer is that dictionary almost, right? So you want to kind of translate almost or have this mapping from the tokens that you can get from this string. And then, map that into some inputs that the model will actually use. So we'll see an example of that in just a second. So for example, we can kind of call the tokenizer in any way that we would for a typical PyTorch model. But we're just going to call it on a string. So here, we have our input string is Hugging Face Transformers is great. We pass that into the tokenizer, almost like it's like a function, right? And then, we'll get out some tokenization. So this gives us a set of input IDs. 
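Putting those two pieces together, a minimal version of this step looks roughly like the following; the checkpoint name is the SiEBERT sentiment model mentioned above as it appears on the Hub, and any other sequence-classification checkpoint would work the same way.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "siebert/sentiment-roberta-large-english"   # copied from the Hub page
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I'm excited to learn about Hugging Face Transformers!",
                   return_tensors="pt")
print(inputs["input_ids"])       # vocabulary IDs the model consumes
print(inputs["attention_mask"])  # 1 for real tokens, 0 for padding
outputs = model(**inputs)
print(outputs.logits)            # one score per sentiment class
```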
So to answer the earlier question, these are basically the numbers that each of these tokens represent, right? So that the model can actually use them. And then, a corresponding attention mask for the particular transformer, OK? So there are a couple ways of accessing the actual tokenized input IDs. You can treat it like a dictionary, so hence, kind of thinking about it almost in that dictionary form. It's also just like a property of the output that you get. So there are two ways of accessing this in a pretty Pythonic way. OK, so what we can see as well is that we can look at the particular-- the actual tokenization process almost. And so this can maybe give some insight into what happens at each step, right? So our initial input string is going to be Hugging Face Transformers is great. OK, the next step is that we actually want to tokenize these individual words that are passed in. So here, this is the kind of output of this tokenization step, right? We get these individual split tokens. We'll convert them to IDs here. And then, we'll add any special tokens that our model might need for actually performing inference on this. So there are a couple of steps that happen underneath when you use a tokenizer; it does a few things at once. One thing to note is that for fast tokenizers as well, there is another option that you're able to get to. So you have, essentially, this input string, you have the number of tokens that you get, and you might have some notion of the special token mask as well. So using char_to_word is going to give you the word that a particular character in the input belongs to. So here, this is just giving you additional options that you can use with the fast tokenizer for understanding how the tokens relate to the input string. OK, so there are different ways of using the outputs of these tokenizers too. So one is that you can pass this in, and if you indicate that you want it to return a tensor, you can also return a PyTorch tensor. So that's great in case you need a PyTorch tensor, which you probably generally want. You can also pass multiple sentences into the tokenizer, and then pad them however you need. So for here, for example, we can use the pad token as being this kind of like pad bracket almost, and the token ID it's given corresponds to 0. So this is just going to add padding to whatever input that you give. So if you need your inputs to be the same length for a particular type of model, right, this will add those padding tokens, and then, correspondingly, gives you zeros in the attention mask where you actually need them. OK, and so the way to do that here is you basically set padding to be true. You can also set truncation to be true as well. And so if you ever have any other kind of features of the tokenizer that you're interested in, again, you can check the Hugging Face documentation, which is pretty thorough about what each of these things does. Yeah, so the question is about looking at the hash hash, at least, and whether that means that we should have a space before or not. So here, in this case-- yeah, so in this case, we probably don't want the space before, right? Just because we have the "Hugging--" I don't know, "Hugging" is all one word in this case. Generally, for the tokenizers, the output that they give is still pretty consistent in terms of how the tokenization process works.
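The short sketch below spells out those individual steps and the padding option. It is a simplified illustration; the checkpoint used here ("distilbert-base-uncased") is only an example, and in practice you would use whichever tokenizer matches your model.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
text = "Hugging Face Transformers is great!"

# The steps the tokenizer runs for you under the hood:
tokens = tokenizer.tokenize(text)                     # split into subword tokens
ids = tokenizer.convert_tokens_to_ids(tokens)         # map tokens to vocabulary IDs
with_special = tokenizer.build_inputs_with_special_tokens(ids)  # add [CLS], [SEP], etc.

# Padding a batch so every example has the same length:
batch = tokenizer(["A short sentence.", "A slightly longer second sentence."],
                  padding=True, truncation=True, return_tensors="pt")
print(batch["input_ids"])        # padded IDs
print(batch["attention_mask"])   # zeros over the padding positions
```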
So there might be these instances where it might be contrary to what you might expect for how something is tokenized. In general, the tokenization generally works fine. So in most cases, the direct output that you get from the Hugging Face tokenizer is sufficient. OK, awesome. So one last thing past the adding additional padding is that you can also kind of decode an entire batch at one given time. So if we look again, we have our tokenizer. We'll additionally have this method called a batch decode. So if we have the model inputs that we get up here, this is the output of passing these sentences or these strings into the tokenizer. We can go ahead and just pass these input IDs that correspond to that into the batch decode and it'll give us this decoding that corresponds to all the padding we added in. Each of the particular kind of words and strings. And if you want to ignore all of the presence of these padding tokens or anything like that, you can also pass that in as skipping the special tokens. Gotcha. So this gives like-- this is a pretty high level overview of the-- how you would want to use tokenizers, I guess, in using Hugging Face. So now we can talk about maybe how to use the Hugging Face models themselves. So again, this is pretty similar to what we're seeing for something like initializing a tokenizer. You just choose the specific model type for your model. And then, and you can use that or the specific kind of AutoModel class. Where, again, this AutoModel kind of takes almost the initialization process. It takes care of it for you in a pretty easy way without really any too much overhead. So additionally, so for the pre-trained Transformers that we have, they generally have the same underlying architecture. But you'll have different kind of heads associated with each Transformer. So attention heads so you might have to train if you're doing some sequence classification or just some other task. So Hugging Face will do this for you. And so, for this, we'll walk through an example of how to do this for sentiment analysis. So if there's a specific context like sequence classification we want to use, we can use like this-- the very specific kind of like class Hugging Face provides, so DistilBERT for sequence classification. Alternatively, if we were doing it using DistilBERT in a masked language model setting, we use DistilBERT for masked LM. And then, lastly, if we're just doing it purely for the representations that we get out of DistilBERT, we can just use the baseline model. So the key thing here, or key takeaway, is that there are some task specific classes that we can use from Hugging Face to initialize. So AutoModel, again, is similar to the auto tokenizer. So for this, it's just going to kind of load by default that specific model. And so, in this case, it's going to be just the basic weights that you need for that. OK, so here, we'll have basically three different types of models that we can look at. One is like an encoder type model, which is BERT. A decoder type model, like GPT-2 that's performing these-- generating some text potentially. And encoder decoder, models so BART or T5, in this case. So again, if you go back to kind of the Hugging Face Hub, there's a whole sort of different types of models that you could potentially use. And if we look in the documentation as well, so here, we can understand some notion of the different types of classes that we might want to use. So there's some notion of the auto tokenizer, different auto models for different types of tasks. 
So here, again, if you have any kind of specific use cases that you're looking for, then you can check the documentation. Here, again, if you use an AutoModel from pre-trained, you'll just create a model that's an instance of that BERT model. In this case, BERT model for the BERT-base case. We can go ahead and start. One last thing to note is that, again, the particular choice of your model matches up with kind of the type of architecture that you have to use, right? These different types of models can perform specific tasks. So you're not going to be able to load or use BERT, for instance, or DistilBERT as a sequence to sequence model, for instance, which requires the encoder and decoder because DistilBERT only consists of an encoder. So there's a bit of a limitation on how you can exactly use these, but it's, basically, based on the model architecture itself. OK, awesome. So let's go ahead and get started here. So similarly here, we can import to AutoModel for sequence classification. So again, this is-- we're going to perform some classification task and we'll import this AutoModel here so that we don't have to reference, again, something like DistilBERT for sequence classification. We'll be able to load it automatically and it'll be all set. Alternatively, we can do DistilBERT for sequence classification here. And that specifically will require DistilBERT to be the input there. OK, so these are two different ways of basically getting the same model here, one using the AutoModel, one using just explicitly DistilBERT. Cool. And here, because it's classification, we need to specify the number of labels or the number of classes that we're actually going to classify for each of the input sentences. OK, so here, we'll get some-- like a Warning here if you are following along and you print this out because some of the sequence classification parameters aren't trained yet. And so we'll go ahead and take care of that. So here, similarly, we'll walk through how to actually train some of these models. So the first is, how do you actually pass any of the inputs that you get from a tokenizer into the model, OK? Well, if we get some model inputs from the tokenizer up here and we pass this into the model by specifying that the input IDs are the input IDs from the model inputs. And likewise, we want to emphasize or we can show here and specifically pass in that the attention mask is going to correspond to the attention mask that we gave from these outputs of the tokenizer, OK? So this is option one where you can specifically identify which property goes to what. The second option is using kind of a Pythonic hack almost, which is where you can directly pass in the model inputs. And so this will, basically, unpack almost the keys of the model inputs here. So the model input keys, so the input IDs, correspond to this. The attention mask corresponds to the attention mask argument. So when we use this star star kind of syntax, this will go ahead and unpack our dictionary and, basically, map the arguments to something of the same keys. So this is an alternative way of passing it into the model. Both are going to be the same. OK, so now, what we can do is we can actually print out what the model outputs look like. So again, these are the inputs, the token IDs and the attention mask. And then, second, we'll get the actual model outputs. So here, notice that the outputs are given by these logits here. 
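A minimal sketch of that whole flow, using the DistilBERT example from above, might look like the following; "distilbert-base-uncased" is just the standard base checkpoint, and the newly added two-label classification head is randomly initialized, hence the warning mentioned.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
clf = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)          # 2 classes for sentiment

model_inputs = tokenizer("Hugging Face Transformers is great!", return_tensors="pt")

# Option 1: name the arguments explicitly.
out = clf(input_ids=model_inputs["input_ids"],
          attention_mask=model_inputs["attention_mask"])
# Option 2: ** unpacks the dict so its keys become keyword arguments.
out = clf(**model_inputs)

probs = torch.softmax(out.logits, dim=-1)   # distribution over the two labels
print(out.logits, probs)
```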
There's two of them we passed in one example, and there's two potential classes that we're trying to classify, OK? And then, lastly, we have a course-- the corresponding distribution over the labels here, right? Since this is going to be binary classification. Yes, it's a little bit weird that you're going to have the two classes for the binary classification task. And you could, basically, just choose to classify one class or not. But we do this just, basically, because of how Hugging Face models are set up. And so, additionally, these are-- the models that we load in from Hugging Face are basically just PyTorch modules. So these are the actual models. And we can use them in the same way that we've been using models before. So that means things like loss.backward or something like that actually will do this back propagation step corresponding to the loss of your inputs that you pass in. So it's really easy to train these guys as long as you have a label for your data. You can calculate your loss using the PyTorch cross entropy function. You get some loss back. And then, you can go ahead and back propagate it. You can actually even get the parameters as well in the model that you're-- would probably get updated from this. So this is just some big tensor of the actual embedding weights that you have. OK, we also have a pretty easy way for Hugging Face itself to be able to calculate the loss that we get. So again, if we tokenize some input string, we get our model inputs. We have two labels, positive and negative. And then, give some kind of corresponding label that we assign to the model inputs and we pass this in. We can see here that the actual model outputs that are given by Hugging Face includes this loss here, right? So it'll include the loss corresponding to that input anyways. So it's a really easy way of actually calculating the loss just natively in Hugging Face without having to call any additional things from a PyTorch library. And then, lastly, we can actually even use-- if we have kind of like these two labels here, again, for positive or negative, what we can do is just take the model outputs, look at the logits, and see which one is like the biggest again. We'll pass that and take the argmax. So that will give the index that's largest. And then, that's the output label that the model is actually predicting. So again, it gives a really easy way of being able to do this sort of classification, getting the loss, getting what the actual labels are just from within Hugging Face. OK, awesome. So well, last thing as well is that we can also look inside the model in a pretty cool way and also seeing what the attention weights the model actually puts-- the attention weights the model actually has. So this is helpful if you're trying to understand what's going on inside of some NLP model. And so, for here, we can do, again, where we're importing our model from some pre-trained model weights in the Hugging Face Hub. We want to output attention. Set output attentions to true and output hidden states to true. So these are going to be the key arguments that we can use for actually investigating what's going on inside the model at each point in time. Again, we'll set the model to be in eval mode. And lastly, we'll go ahead and tokenize our input string again. We don't really care about any of the gradients here. Again, so we don't actually want to backpropagate anything here. And finally, pass in the model inputs. 
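Here is a rough, self-contained sketch of those last few points: passing labels so Hugging Face computes the loss, backpropagating like any PyTorch module, taking the argmax as the prediction, and then reloading the model with output_attentions and output_hidden_states for inspection. The label value and checkpoint name are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
clf = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

inputs = tok("Hugging Face Transformers is great!", return_tensors="pt")
labels = torch.tensor([1])                    # pretend 1 = positive for this sketch

out = clf(**inputs, labels=labels)            # passing labels makes HF compute the loss
print(out.loss)                               # cross-entropy over the two classes
out.loss.backward()                           # backprop, exactly like any PyTorch module
pred = out.logits.argmax(dim=-1).item()       # index of the predicted label

# To look inside the model, reload it asking for attentions and hidden states,
# put it in eval mode, and run a forward pass without gradients.
probe = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2,
    output_attentions=True, output_hidden_states=True)
probe.eval()
with torch.no_grad():
    probe_out = probe(**inputs)
print(len(probe_out.hidden_states), probe_out.attentions[0].shape)
```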
So now, what we're able to do is when we print out the model hidden states, so now this is a new kind of property in the output dictionary that we get. We can look at what these actually look like here. And so this is a massive output. So you can actually look at the hidden state size per layer, right? And so this kind of gives a notion of what we're going to be looking like-- looking at, what the shape of this is, at each given layer in our model, as well as the attention head size per layer. So this gives you the kind of shape of what you're looking at. And then, if we actually look at the model output itself, we'll get all of these different hidden states basically, right? So we have tons and tons of these different hidden states. We'll have the last hidden state here. So the model output is pretty robust for showing you what the hidden state looks like as well as what attention weights actually look like here. So in case you're trying to analyze a particular model, this is a really helpful way of doing that. So what model.eval does is it-- sorry, question is, what does the .eval do? What it does is it basically sets your-- and this is true for any PyTorch module or model-- is it sets it into quote unquote "eval mode". So again, for this, we're not really trying to calculate any of the gradients or anything like that that might correspond to correspond to some data that we pass in or try and update our model in any way. We just care about evaluating it on that particular data point. So for that, then, it's helpful to set the model into eval mode, essentially, to help make sure that kind of disables some of that stuff that you'd use during training time. So it just makes it a little more efficient. Yeah, the question is, it's already pre-trained, so can you go ahead and evaluate it? Yeah, you can. So yeah, this is just the raw pre-trained model with no fine tuning. So the question is, how do you interpret these shapes, basically, for the attention head size and then the hidden state size? Yeah, the key thing here is you'll probably want to look at the shape given on the side. It'll correspond to the layer that you're actually kind of like looking at. So here, when we call-- we looked at the shape here. We're specifically looking at the first one in this list, right? So this will give us the first hidden layer. The second gives us a notion of the batch that we're looking at. And then, the last is like-- so this is like some tensor, right? 768 dimensional, I don't know, representation that corresponds there. And then, for the attention head size, it corresponds to the actual query word and the keyword for these last two here. Yeah, so but for this, we would expect this kind of initial index here, the one to be bigger if we printed out all of the layers. But we're just looking at the first one here. So we can also do this for actually being able to get some notion of how these different-- how this actually looks and plot out these axes as well. So again, if we take this same kind of model input, which again, is this Hugging Face Transformers is great, we're actually trying to see what do these representations look on a per layer basis. So what we can do here is, basically, we're looking at-- for each layer that we have in our model, and again, this is purely from the model output attentions, or the actual outputs of the model. 
So for each layer, and then for each head, we can analyze what these representations look like--in particular, what the attention weights are across each of the tokens that we have. This is a good way of understanding what your model is actually attending to within each layer. If we look at the figure--maybe zoom in a bit, since the labels are a little cut off--the different rows on the y-axis correspond to the different layers within the model, and the x-axis corresponds to the different attention heads. So for each head, at each layer, we get a sense of how the attention is distributed--what's being attended to--for each of the tokens. The question is, what's the color key? Yellow is higher magnitude, higher value, and darker is closer to 0--very navy is basically zero. So now we can walk through what a fine-tuning task looks like. In a project, you're probably going to want to fine-tune a model, and we'll go ahead and walk through an example of that here. We can also use the data sets that we can get from Hugging Face--it doesn't just have models, it has really nice data sets--and load those in as well. Here we're going to look at the IMDb data set, which again is for sentiment analysis. We'll look at only the first 50 tokens or so, and this helper function is what we'll use for truncating. Then, for actually making the data set, we can use the DatasetDict class from Hugging Face, which gives us a smaller data set for train as well as whatever we specify for validation. For our mini data set, for the purpose of this demonstration, we'll make train and val both from the IMDb train split: we'll shuffle it a bit, select 128 examples for training and the next 32 for validation, and then truncate those inputs--again, just to make sure we're efficient and can actually run this on a CPU. OK, so next we can see what this looks like. It's basically a dictionary--almost a wrapper class--holding your train data set and your validation data set. In particular, we can look at what the first 10 entries look like. We specify train, we want the first 10 entries, and the output is a dictionary as well, which is pretty cool. The first key gives the first 10 text examples--the actual movie reviews--as a list.
And then the second key gives the labels corresponding to each of these--whether it's positive or negative. Here, 1 is going to be a positive review and 0 is negative, so it makes it really easy to use this for something like sentiment analysis. OK, so next we prepare the data set and put it into batches of 16. What does this look like? We can call the map function that this small data set dictionary has, and pass in a lambda function for what we want to do--here, for each example, tokenize the text. This is basically saying how we want to preprocess the data: we're extracting the tokens, the input IDs that we'll pass into the model, and we're adding padding and truncation as well. We'll do this in a batch, with a batch size of 16. Hopefully that makes sense. Next, we do a little more modification of the data set: we remove the column that corresponds to the raw text, and we rename the column label to labels--so if you see this, it was called label, and we're just calling it labels. We remove the text column because we don't really need it anymore; we've already preprocessed our data into the input IDs we need. Lastly, we set the format to Torch so we can just pass this into our PyTorch model. The question is, what is labels? The label here, in the context of sentiment analysis, is just positive or negative; we're only renaming the column. OK, so now we can see what this looks like, again looking at just the first two entries of the train set. We have the two labels corresponding to each of the reviews and the input IDs for each of the reviews, and we also get the attention mask. It's basically taking what you get out of the tokenizer and adding it back into the data set, so it's really easy to pass in. The question is about padding--we truncated, which makes things easy, but how is padding applied? You can either manually set some high truncation limit, like we did, or you can just set padding to be true, in which case padding is added based on the longest sequence you have. The follow-up question is whether you pad all of the texts evenly. It depends on the size of the data set you're loading in: if you're looking at particular batches at a time, you can just pad within that particular batch--you don't need to load the entire data set into memory and pad all of it the same way. It's fine to do it within batches. The question was, how are the input IDs added? Yes, it's basically done automatically. We had to manually remove the text column ourselves, but if you recall, the output of the tokenizer is basically just the input IDs and the attention mask, and it's smart enough to aggregate those together. OK, the last thing we're going to do is split these into loaders--so we have this prepared data set now.
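Putting those dataset steps together, a sketch might look like this, assuming the Hugging Face datasets library and the tokenizer from the earlier sketch; the seed and exact sizes just mirror the walkthrough above:

from datasets import load_dataset, DatasetDict

imdb = load_dataset("imdb")

def truncate(example):
    # keep roughly the first 50 tokens of each review so it runs quickly on CPU
    return {"text": " ".join(example["text"].split()[:50]), "label": example["label"]}

small = DatasetDict(
    train=imdb["train"].shuffle(seed=0).select(range(128)).map(truncate),
    val=imdb["train"].shuffle(seed=0).select(range(128, 160)).map(truncate),
)

# Tokenize in batches of 16, pad/truncate, then tidy up the columns.
tokenized = small.map(
    lambda ex: tokenizer(ex["text"], padding=True, truncation=True),
    batched=True, batch_size=16,
)
tokenized = tokenized.remove_columns(["text"]).rename_column("label", "labels")
tokenized.set_format("torch")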
It looks great. We're just going to import a typical, normal PyTorch DataLoader and load each of the data sets that we just made, specifying the batch size to be 16. OK, so that's fine and great. And now, for training the model, it's basically exactly the same as what we would do in typical PyTorch: you still compute the loss, you can backpropagate the loss, and everything. So it's really up to your own design how you do the training. There are only a few asterisks. One is that you can import specific optimizer types from the Transformers package: you can use Adam with weight decay (AdamW), and you can get a linear schedule for the learning rate, which will decrease the learning rate over the course of training. Again, it's basically up to your choice. But if you look at the structure of this code: we load the model for classification, we set a number of epochs and however many training steps we actually want to do, we initialize our optimizer and get some learning rate schedule, and from there it's basically the same thing as for a typical PyTorch model. We set the model to train mode, we pass in all the batches from the DataLoader, and then backpropagate, step the optimizer, and everything like that. So it's pretty similar to what we're used to seeing. Awesome--that will go do its thing at some point. OK, so that's one potential option: if you really like PyTorch, you can just go ahead and do that, and it's really nice and easy. The second option is that Hugging Face actually has a Trainer class that can handle most of these things for you. If we do the same thing here--this will actually run once our model is done training--we create our data set in the same way as before, and now what we need is an import of a TrainingArguments class. This is basically a dictionary of all the things we want to use when we train our model, plus this additional Trainer class, which handles the training for us and wraps around it in that way. OK, I think we're missing a directory, but it's pretty straightforward for how you want to train here. There are two key arguments. The first is the training arguments, which specify a number of things: where you want to log, the batch size per device during training--in this case we're just using one GPU, but potentially you could use multiple--the batch size during evaluation, how long you want to train, how you want to evaluate (here, on an epoch level), what the learning rate is, and so on. If you check the documentation, there are a bunch of different arguments you can give--warmup steps, warmup ratio, weight decay, so many things. Again, it's basically a dictionary, so feel free to look at the different arguments you can pass in. But there are a couple of key ones, and this basically mimics the same arguments that we used before in our explicit PyTorch method. Similarly, what we do is just pass this into the Trainer.
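For concreteness, a sketch of that Trainer-based setup might look like the following. The argument names follow the transformers API, but the specific values, the output directory name, and the metric function are placeholders I'm assuming, and model, tokenizer, and tokenized refer to the earlier sketches:

import numpy as np
from transformers import TrainingArguments, Trainer

args = TrainingArguments(
    output_dir="model_checkpoints",        # where checkpoints get saved
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    evaluation_strategy="epoch",           # evaluate at the end of every epoch
    learning_rate=2e-5,
)

def compute_metrics(eval_pred):
    # the Trainer hands this function the logits and the ground-truth labels
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": (preds == labels).mean()}

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["val"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

The compute_metrics function here is the hook described next: at evaluation time the Trainer passes it the predictions, which you split into logits and ground-truth labels.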
And that will take care of basically everything for us. That whole training loop we did before is condensed into this one class that actually does the training. We pass in the model, the arguments, the train data set, the eval data set, what tokenizer we want to use, and then some function for computing metrics. Here we pass in a function that takes eval predictions as input. Basically, the predictions from the trainer are passed into this function, and we can split them into the logits and the ground-truth labels. From there, we can calculate any additional metrics we want, like accuracy, F1 score, recall, or whatever you want. OK, so this is an alternative way of formulating that training loop. The last thing here is that we can also have a callback if we want to do things during the training process--say, after every epoch you want to evaluate your model on the validation set, or just dump some output. That's what a callback is for. Here, this is just a logging callback: it logs information about the training process itself. It's not super important, but in case you're looking to do any sort of callback during training, it's an easy way to add it in. The second is early stopping: as it sounds, early stopping will stop your training early if the model isn't learning anything and a bunch of epochs are going by, so you don't waste compute time and you can see the results more easily. The question is, is there a good choice for the patience value? It depends on the model architecture--it's pretty much up to your discretion. OK, awesome. And the last thing we do is just call trainer.train. If you recall, this is just the instantiation of the Trainer class; we call trainer.train and it just goes. So now it's training, which is great. It gives a nice estimate of how long things are taking, what's going on, and what arguments we actually passed in. That will run, and hopefully it trains relatively quickly--it'll take about two minutes. We can also evaluate the model pretty easily: we just call trainer.predict on whatever data set we're interested in--here, the tokenized validation data set. OK, hopefully that pops out soon. And lastly, anything we saved to our model checkpoints--this run is continuing to save to the folder we specified--means that if we ever want to load our model again from the weights we've saved, we just pass in the name of the checkpoint, the relative path to it. So we have some checkpoint-8 here; we pass in the path to that folder, load it back in, tokenize, and it's the same as before. There are a few additional appendices for how to do different tasks as well: an appendix on generation, how to define a custom data set, and how to pipeline different kinds of tasks together.
That last one is about using a pretrained model through the pipeline interface, which makes it really easy to run different types of tasks, like masked language modeling. But feel free to look through those on your own time. And yeah, thanks a bunch.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_2023_Lecture_9_Pretraining.txt
Hello, welcome to CS224N. Today we'll be talking about pretraining, which is another exciting topic on the road to modern natural language processing. OK, how is everyone doing? Thumbs up, thumbs side, thumbs down. Wow, no response bias there--all thumbs up. Oh, the side. Nice, I like that honesty. That's good. Well, OK, so we're now in week 5, and this lecture, the Transformers lecture, and, to a lesser extent, Thursday's lecture on natural language generation will be the sum of lectures for the assignment you have to do. Assignment 5 is coming out on Thursday, and the topics covered in this lecture, in self-attention and Transformers, and, again, a little bit of natural language generation will be tested in assignment 5. Then, for the rest of the course, we'll go through some really fascinating topics in modern natural language processing that should be useful for your final projects and future jobs and interviews and intellectual curiosity. But today's lecture is significantly less technical in detail than last Thursday's on self-attention and Transformers; it should give you an idea of the world of pretraining and how it helps define natural language processing today. So, a reminder about assignment 5. Your project proposals are also due next Tuesday--please do get those in on time so that we can give you prompt feedback about them. And yeah, let's jump into it. OK. So what we're going to start with today is a bit of a technical detail on word structure and how we model the input sequence of words that we get. When we were teaching word2vec and all the methods we've talked about so far, we assumed a finite vocabulary. So you have a vocabulary V that you define by looking at some data and deciding what the words in that data are. You have some words like "hat" and "learn," and you have an embedding--it's in red because you've learned it properly. Actually, let's replace "hat" and "learn" with "pizza" and "tasty"; those are better. And that's all well and good: you see these words in your model, and you have an embedding that's been learned on your data, so you know what to do when you see them. But then you see some variations--maybe you see "taaaasty," or maybe a typo like "laern," or maybe novel items, words that you as a human can understand as a combination--this is called derivational morphology--of a word like "transformer" and "-ify," which means take this noun and give me back a verb that means to make more like that noun. To "transformerify" NLP might mean to make NLP more like using Transformers, and so on. And each of these maybe didn't show up in your training corpus. Language is always doing this: people are always coming up with new words, there are new domains, and young people are always making new words. It's great. But it's a problem for your model, because you've defined this finite vocabulary, and there's no mapping in that vocabulary for any of these things, even though their meanings should be relatively well defined based on the data you've seen so far--it's just that the strings of characters that define them aren't quite what you've seen. So what do you do? Well, maybe you map them to this universal unknown token, this UNK, right?
So it's like, oh, I see something, I don't know what, I've never seen it before--I'm going to say it's always represented by the same token, UNK. That's been done in the past, and it's sort of bad, right, because it's totally losing tons of information. But you need to map it to something. And so this is a clear problem. In English it's a problem; in many of the world's languages it's a substantially larger problem. English has a relatively simple word structure--there are a couple of conjugations for each verb, like eat, eats, eaten, ate. But in a language with much more complex morphology, or word structure, you'll have a considerably more complex set of things that you could see in the world. So here is a conjugation table for a Swahili verb, and it has over 300 conjugations. If I define a vocabulary where every unique string of characters maps to its own word, then every one of the 300 conjugations would get an independent vector under my model, which makes no sense, because the 300 conjugations obviously have a lot in common and differ by meaningful extents. So you don't want to do this: I'd have to have a huge vocabulary if I wanted all conjugations to show up, and that's a mistake for efficiency reasons and for learning reasons. Any questions so far? Cool. OK. And so what we end up doing is looking at subword structure--subword modeling. What we're going to do is say: I'm not going to even try to define what the set of all words is; I'm going to define my vocabulary to include parts of words. So I'm going to split words into sequences of known subwords. There's a simple algorithm for this where you start with all characters, plus maybe an end-of-word symbol. If I only had a vocabulary of all characters for a finite data set, then no matter what word I saw in the future, as long as I had seen all possible characters, I could take the word and say, I don't know what this word is, I'm going to split it into all of its individual characters. So you won't have the UNK problem; you can represent any word. And then you're going to find common adjacent characters and say, OK, "a" and "b" co-occur next to each other quite a bit, so I'm going to add a new word to my vocabulary--now it's all characters plus this new subword "ab." Likewise, I'm going to replace the character pair with the new subword and repeat, until you've added a lot of vocabulary items through this process of seeing what things tend to co-occur next to each other. What you'll end up with is a vocabulary of very commonly co-occurring substrings by which you can build up words. This was originally developed for machine translation but has since been used considerably in pretty much all modern language models. So now we have "hat" and "learn": in our subword vocabulary, "hat" and "learn" showed up enough that they are their own individual words. That's good--simple common words show up as single words in your vocabulary, just like you would like them to. But now the elongated "taaaasty" maybe gets split into "taa," and then--in some cases this ## means don't add a space next--"aaa," and then "sty." So I've actually taken one thing that seems like a word, and in my vocabulary it's now split into three subword tokens.
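As a toy illustration of that merge-based algorithm, here is a simplified sketch I'm adding--not the exact procedure any particular tokenizer uses; real implementations weight pairs by word frequency and run over far larger corpora:

from collections import Counter

def learn_merges(words, num_merges):
    # each word starts as a list of characters plus an end-of-word marker
    corpus = [list(w) + ["</w>"] for w in words]
    merges = []
    for _ in range(num_merges):
        # count every adjacent pair of symbols in the corpus
        pairs = Counter()
        for w in corpus:
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]   # most frequent adjacent pair
        merges.append((a, b))
        # replace every occurrence of that pair with the merged symbol
        new_corpus = []
        for w in corpus:
            out, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and (w[i], w[i + 1]) == (a, b):
                    out.append(a + b)
                    i += 2
                else:
                    out.append(w[i])
                    i += 1
            new_corpus.append(out)
        corpus = new_corpus
    return merges

print(learn_merges(["tasty", "taaaasty", "hat", "that"], num_merges=5))

Running this kind of procedure on a real corpus, for tens of thousands of merges, is what produces the subword vocabularies being described here.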
So when I pass this to my transformer or my recurrent neural network, the recurrent neural network would take "taa" as just a single element, do the RNN update, then take "aaa," do the RNN update, and then "sty." So it could learn to process constructions like this--and maybe I can even add more a's in the middle and have it do something similar--instead of just seeing the entire word "tasty" and not knowing what it means. (Is that feedback? How loud is that feedback? Are we good? OK, I think we're fixed. Great.) And the same with "transformerify": maybe "transformer" is its own word, and then "-ify." And so you can see that you have three learned embeddings instead of one useless UNK embedding. So this is just wildly useful, and variants of this algorithm are used pretty much everywhere in modern NLP. Questions? The question is, if we have three embeddings for "taaaasty," do we just add them together? When we're actually processing the sequence, I'd see something like "I learned about the taa aaa sty"--they really are totally separate tokens. But if I then wanted to say, what's my representation of this thing, it depends on what you want to do: sometimes you average the contextual representations of the three, or maybe look at the last one. At that point it's unclear what to do, but everything works OK. The question is, how do you know where to split? You know where to split based on the algorithm I specified earlier for learning the vocabulary: you've learned this vocabulary by combining commonly co-occurring adjacent strings of letters--"a" and "b" co-occurred a lot, so now I've got a new word, "ab." Then, when you're actually walking through and tokenizing, you try to split as little as possible: you split words into the maximal subwords that take up the most characters. There are algorithms for this. There are many ways you could split something up, and you try to find approximately the best way to split it into the fewest words. The question is, do people make use of punctuation in the character set? Yes, absolutely. From this point on, just assume that the text given to these models is as unprocessed as possible. You take it, you try to make it clean-looking text where you've removed HTML tags, maybe, if it's from the internet or whatever; but beyond that, you process it as little as possible, so that it reflects as well as possible what people might actually be using this for. Earlier in the course, with word2vec, we might have thought, oh, we don't want word2vec vectors of punctuation or something like that. Now everything is as close as possible to the text you'd get from people trying to use your system. So yes: in practice, punctuation, and "..." might be its own word, and maybe a sequence of hyphens, because people make big bars across tables. The next question is, if something is one whole word versus multiple embeddings, does the system treat those any differently?
The question is, does the system treat words that are themselves whole words any differently from words that are pieces? No--the system has no idea. They're all just indices into your embedding vocabulary matrix, so they're all treated equally. What about really long words that are relatively common--if you're building up from characters, what happens then? In practice, the statistics speak really well for themselves: if a long word is very common, it will end up in the vocabulary, and if it's not very common, it won't. There are algorithms other than this one that do slightly better in various ways, but the intuition--that you figure out the common co-occurring substrings, almost independent of length--is the right intuition to have. And you can actually just look at the learned vocabularies of a lot of these models, and you see some long words, just because they showed up a lot. I'm curious, how does it weigh the frequency of a subword against its length? Say "-ify" shows up at the very end--it could be really common. It tries to split into the smallest number of pieces, but what if it could split into three pieces and one of them was super common? So the question is, if "transformer" is a subword in my vocabulary, and "if" is a subword, and "y" is a subword, and "-ify," as a three-letter piece, is also a subword, how does it choose to take "-ify," which maybe isn't very common, rather than splitting into more subwords? It's just a choice. We choose to take the smallest number of subwords, because that tends to be more of the bottleneck than having a bunch of very common, very short subwords. Sequence length is a big problem in transformers, and this seems to be what works--although trying multiple splits of a sequence and running the transformer on all of them to see which works better is something people have done. But having fewer, bigger subwords tends to be the best idea. I'm going to start moving on, though; feel free to ask me more questions about this afterwards. OK, so let's talk about pretraining in the context of the course so far. At the very beginning of the course, we gave you this quote, "You shall know a word by the company it keeps." This was the thesis of the distributional hypothesis: the meaning of a word is defined by, or at least reflected by, the words it tends to co-occur around. And we implemented this via word2vec. The same person who made that quote had a separate quote, actually earlier, that continues this notion of meaning as defined by context. It says something along the lines of: since the word shows up in context when we actually use it, when we speak to each other, the meaning of the word should be defined in the context that it actually shows up in. The complete meaning of a word is always contextual, and no study of meaning apart from a complete context can be taken seriously. So the big difference here is this: at word2vec training time, if I have the word "record," R-E-C-O-R-D, I get one vector--one vector for "record" the string.
And it has to learn, from the contexts it shows up in, that sometimes this can mean "record" the verb and sometimes "record" the noun--but I only have one vector to represent it. So when I use the word2vec embedding of "record," it has this mixture of the meanings of both of its senses; it doesn't get to specialize and say, this part means the verb and this part means the noun. And so word2vec is just going to fail there. I can build better representations of language through contextual representations, using things like the recurrent neural networks or transformers that we used before to build up contextual meaning. What we had before were pretrained word embeddings, and then a big box on top of them, like a transformer or an LSTM, that was not pretrained. So you learn your word embeddings via context, and then you have a task, like sentiment analysis or machine translation or parsing or whatever; you initialize all the parameters of that network randomly, and you train it to predict your label. The big difference in today's work is that we're going to try to pretrain all the parameters. I have my big transformer, and instead of just pretraining my word embeddings with word2vec, I'm going to train all of the parameters of the network, trying to teach it much more about language that I can use in my downstream tasks. So now the labeled data that I have for, say, machine translation might need to be smaller--I might not need as much of it, because I've already trained much more of the network than I otherwise would have if I had just gotten word2vec embeddings. OK. So here, I've pretrained this entire structure--the word embeddings, the transformer on top, everything--via methods that we'll talk about today. And what does this give you? It gives you very strong representations of language: the meanings of "record" and "record" will be different in the contextual representations, which know where the word is in the sequence and what words co-occur with it in this specific input, whereas word2vec has only one representation for "record," independent of where it shows up. It also gives you strong parameter initializations for NLP models. In all of your homework so far, you've built a natural language processing system from scratch--how do I initialize this weight matrix? And we always say, small, normally distributed noise, little values close to zero. Here we're going to say: just like we were going to use the word2vec embeddings, and those encoded structure, I'm going to start my machine translation system from a parameter initialization that's given to me via pretraining. And it's also going to give us probability distributions over language that we can use to generate and otherwise--we'll talk about this. OK? So whole models are going to be pretrained. All of the pretraining is effectively going to be centered around this idea of reconstructing the input. You have an input, a sequence of text that some human has generated, and the hypothesis is that by masking out part of it and tasking a neural network with reconstructing the original input, that neural network has to learn a lot about language, about the world, in order to do a good job of reconstructing the input. So this is now a supervised learning problem, just like machine translation.
I've taken this sentence that just existed--"Stanford University is located in--" say, Palo Alto, California, or Stanford, California, I guess--and by removing this part of the sentence, I've made a label for myself. The input is this broken, masked sentence, and the label is "Stanford" or "Palo Alto." So if I give this example to a network and ask it to predict the missing piece, then as it's doing its gradient step on this input, it's going to encode information about the co-occurrence between this context--"University is located in--"--and "Palo Alto." By tasking it with this, it might learn, say, where Stanford is. What else might it learn? Well, it can learn things about syntax. "I put--" blank "--fork down on the table." There's only a certain set of words that could go here: "I put the fork down on the table," "I put a fork down on the table." These are syntactic constraints, so the context shows me what kinds of words can appear in what kinds of contexts. "The woman walked across the street, checking for traffic over--" blank "--shoulder." Any ideas as to what could go here? "Her," right? So this is coreference between the entity being discussed in the world, this woman, and her shoulder. This is a linguistic concept: the word "her" here is a coreferent of "a woman"--it refers to the same entity in the discourse. And so the network might be able to learn things about what kinds of entities are doing what, where. It can learn things about semantics. If I have "I went to the ocean to see the fish, turtles, seals, and--" blank, then the word in the blank should be a member of the class that I'm thinking of, as the person writing this sentence, of stuff that I see when I go to the ocean along with these other things. So in order to do this prediction task, maybe I'll learn about the semantics of aquatic creatures. OK. What else could I learn? I've got "Overall, the value I got from the two hours watching it was the sum total of the popcorn and drink. The movie was--" blank. What kind of task could I be learning from this sort of prediction problem? Sentiment, exactly. This is just naturalistic text that somebody wrote, but by saying "the movie was bad," I'm learning about the latent sentiment of the person who wrote this, what they were feeling about the movie at the time. So maybe if I see a new review later on, I can just paste in the review, say "the movie was--" blank, and if the model generates "bad" or "good," it could be implicitly solving the task of sentiment analysis. Here's another one: "Iroh went to the kitchen to make some tea. Standing next to Iroh, Zuko pondered his destiny. Zuko left the--" blank. In this scenario, we've got a world that has implicitly been designed by the person creating this text: there are physical locations in the discourse, like the kitchen; Iroh is in the kitchen; Zuko is next to Iroh, so Zuko must be in the kitchen. So what could Zuko leave but the kitchen? In terms of latent notions of embodiment and physical location, the way people talk about someone being next to something and then leaving could tell you a bit about how the world works, even. And here's a sequence: "I was thinking about the sequence that goes 1, 1, 2, 3, 5, 8, 13, 21--" blank. This is a pretty tough one, right?
This is the Fibonacci sequence. Whether a model, by looking at a bunch of numbers from the Fibonacci sequence, can learn to predict the next one in general--that's a question you should be thinking about throughout the lecture. OK, any questions on these examples of what you might learn from predicting the context? OK, cool. So a very simple way to think about pretraining is that pretraining is language modeling. We saw language modeling earlier in the course, and now we're just going to say: instead of using my language model just to provide probabilities over the next word, I'm going to train it on that task--actually model the distribution p_theta of the word at position t given all the previous words. And there's a ton of data for this, right? There's just an amazing amount of data for this in a lot of languages, especially English. There's very little data for this in most of the world's languages, which is a separate problem. But you can pretrain just through language modeling. So I'm going to do the teacher-forcing thing: I have "Iroh," I predict "goes"; I have "goes," I predict "to." I'm going to train my LSTM or my transformer to do this task, and then I'm just going to keep all the weights--save all the network parameters. Then, once I have these parameters, instead of generating from my language model, I'm just going to use them as an initialization for my parameters. So I have this pretraining, fine-tuning paradigm--two steps. A large portion of you, in your final projects, will be using the pretraining, fine-tuning paradigm, where someone has done the pretraining for you. In step one, you have a ton of text, and you learn very general things about the distribution of words and the latent things that tells you about the world and about language. In step two, you've got some task, maybe sentiment analysis, and maybe not very many labels--a little bit of labeled data. You adapt the pretrained model to the task you care about by doing further gradient steps on this task: you give it "the movie was--" and predict "happy" or "sad," and you continue to update the parameters based on the initialization from the pretraining. And this just works exceptionally well--unbelievably well--compared to training from scratch, intuitively because you've taken a lot of the burden of learning about language, learning about the world, off of the data you've labeled for sentiment analysis, and you're giving that to the much more general task of language modeling. The question is: you said we don't have much data in other languages--what do you mean by data? Is it just text in that language, labeled in some way? It's literally just text, no annotations, because you don't need annotations to do language-model pretraining. The existence of a sequence of words that someone has written provides you with all these pairs of input and output: input "Iroh," output "goes"; input "Iroh goes--" output "to." Those are all labels that you've constructed just from the input existing.
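Just to make that concrete, here's a tiny illustration--my own toy example, not from the slides--of how raw text alone yields (context, next-word) training pairs:

text = "Iroh goes to make tasty tea"
words = text.split()

pairs = [(words[:i], words[i]) for i in range(1, len(words))]
for context, target in pairs:
    print(context, "->", target)
# ['Iroh'] -> goes
# ['Iroh', 'goes'] -> to
# ... and so on, one training example per position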
But in most languages--there are about 7,000-ish languages on Earth--even on the entire internet, most of them don't have the billions of words that you might want to train these systems on. The question is, if you're pretraining the entire thing, do you still learn one vector representation per word? You learn one vector representation that is the noncontextual input vector: you have your embedding matrix, which is vocabulary size by model dimensionality, so "Iroh" has one vector and "goes" has one vector. But the transformer you're learning on top of it takes in the sequence so far and gives each word a vector that's dependent on the context. Still, at the input, you only have one embedding per word. The next question is, what metric do you use to evaluate a pretrained model? It's supposed to be general, but there are application-specific metrics--which one do you use? We'll get into a lot of that in the rest of the lecture. While you're training it, you can use simple metrics that correlate with what you want but aren't actually what you want, like the quality of the probabilities: you can evaluate the perplexity of your language model, just as you would when you cared about language modeling, and it turns out that better perplexity correlates with all the stuff that's much harder to evaluate--lots and lots of different tasks. Also, the natural language processing community has built very large benchmark suites of varied tasks to try to get at some notion of generality, although that's very, very difficult--ill-defined, even. So when you develop a new pretraining method, what you often do is pick a whole bunch of evaluations and show that you do better on all of them--that's your argument for generality. OK. So why should this pretraining, fine-tuning, two-part paradigm help? This is still an open area of research, but the intuitions are all you're going to take from this course. Pretraining provides some starting parameters, theta hat, from taking the minimum, over all possible settings of your parameters, of the pretraining loss. Then the fine-tuning process takes your data for fine-tuning--you've got some labels--and tries to approximate the minimum of the fine-tuning task's loss through gradient descent. But you start at theta hat: you start gradient descent at the parameters your pretraining process gave you. And if you could actually solve this min exactly, it sort of feels like the starting point shouldn't matter--but it really, really, really does. The process of gradient descent maybe sticks relatively close to theta hat during fine-tuning: you start at theta hat, and then you walk downhill with gradient descent until you hit a valley, and that valley ends up being really good, because it's close to the pretraining parameters, which were really good for a lot of things.
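Written out, the two stages being described look roughly like this (my notation, following the verbal description above):

% Pretraining gives starting parameters \hat{\theta} by (approximately)
% minimizing the pretraining loss over a huge unlabeled corpus:
\hat{\theta} \approx \arg\min_{\theta} \; \mathcal{L}_{\text{pretrain}}(\theta)

% Fine-tuning then approximately minimizes the task loss, but crucially the
% gradient descent is initialized at \hat{\theta}:
\theta^{*} \approx \arg\min_{\theta} \; \mathcal{L}_{\text{finetune}}(\theta),
\qquad \text{starting gradient descent from } \theta = \hat{\theta}.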
This is a cool place where practice and theory are meeting: optimization people want to understand why this is so useful, and NLP people just want to build better systems. So yeah, maybe the stuff around theta hat tends to generalize well. If you want to work on this kind of thing, you should talk about it. The question is: if stochastic gradient descent sticks relatively close, what if we were to use a different optimizer--how would that change the results? If we use any common variant of gradient descent--any first-order method like Adam, which we use in this course, or AdaGrad--they all have these very, very similar properties. Other types of optimization we just tend not to use, so who knows. Next question: it's still unclear why pretraining plus fine-tuning works better than just fine-tuning while making the model bigger--adding more layers, more data, et cetera. The simple answer is that you have orders of magnitude more data that's unlabeled--just text that you found--than you do carefully labeled data in the task you care about, because that's expensive to get: it has to be examples of your movie reviews or whatever that someone has labeled carefully. So on the internet you have something like at least 5 trillion, maybe 10 trillion words of this, and maybe a million words of your labeled data over here--the scale is just way off. But there's also an intuition that learning to do a very, very simple thing, like sentiment analysis, is not going to get you a generally capable system in a wide range of settings, compared to language modeling. How do I put it? Even if you have a lot of labeled data of the kind of movie reviews people are writing today, maybe tomorrow they'll start writing slightly different kinds of movie reviews, and your system won't perform as well. Whereas if you pretrained on a really diverse set of text from a wide range of sources and people, it might be more adaptable to seeing stuff that doesn't quite look like the training data you showed it, even if you showed it a ton of training data. So one of the big takeaways of pretraining is that you get this huge variety of text from the internet--and you have to be very careful about what kind of text you're showing it and what kind you're not, because the internet is full of awful text as well. But some of that generality just comes from how hard this problem is and how much data you can show it. Question: if the pretrained model is trained on so much data, how do you then train it so that it considers the stuff you're fine-tuning it on as more important, more salient to the task it's trying to do, rather than just one in a billion articles of data? So the question is, given that the amount of data on the pretraining side is orders of magnitude more than on the fine-tuning side, how do you get across to the model that the fine-tuning task is what you actually care about? It's about the fact that I did the pretraining first, and then I do the fine-tuning second.
So I've gotten my parameter initialization from pretraining--I've set it somewhere--and then I fine-tune: I move to where the parameters do well for this task afterwards. And, well, the model might just forget a lot about how to do the pretraining task, because now I'm only asking it to do the fine-tuning task at this point. I should move on, I think, but we're going to keep talking about this in much more detail with more concrete elements. So, OK. Let's talk about model pretraining--oh, wait, that did not advance the slides. Nice, OK. Let's talk about model pretraining three ways. In our Transformers lecture on Tuesday, we talked about encoders, encoder-decoders, and decoders. We'll do decoders last because, actually, many of the largest models being used today are all decoders, and so we'll have a bit more to say about them. So let's recall these three. Encoders get bidirectional context: you have a single sequence, and you're able to see the whole thing, like an encoder in machine translation. Encoder-decoders have one portion of the network that gets bidirectional context--that's like the source sentence of my machine translation system--and then they're paired with a decoder that gets unidirectional context, with that informational masking where it can't see the future, so that I can do things like language modeling, or generate the next token of my translation, whatever. You can think of it as: I've got my source sentence here and my partial translation here, and I'm decoding out the translation. And then decoder-only models are things like language models, which we've seen a lot of so far. There's pretraining for all three large classes of models, and how you pretrain them, and then how you use them, depends on the properties and the proclivities of the specific architecture. So let's look at encoders first. We've looked at language modeling quite a bit, but we can't do language modeling with an encoder, because encoders get bidirectional context: if I'm down here at "I" and I want to predict the next word, it's a trivial task, because somewhere in the middle I was able to look at the next word--I could just look at it, see what it is, and copy it over. There's nothing hard about learning to predict the next word there. So when I'm pretraining an encoder, I have to be a little bit more clever. In practice, what I do is something like this: I take the input, and I modify it somewhat--I mask out words, like I did in the examples at the beginning of class. So "I blank to the blank." And then I have the network build contextual representations, so that this vector representation of the blank sees the entire context around it, and then I predict the word "went" here, and the word "store" there. Any questions? You can see how this is doing something quite a bit like language modeling, but with bidirectional context: I've removed the network's information about the words that go in the blanks, and I'm training it to reconstruct them. And I only have loss terms--I only ask it to actually do the prediction, compute the loss, and backpropagate the gradients--for the words that I've masked out.
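A minimal sketch of that training step in PyTorch--this is my own stand-in illustration, with a tiny randomly initialized encoder and made-up token ids, not BERT's actual recipe (which, as we'll see next, adds a few extra twists):

import torch
import torch.nn as nn

vocab_size, hidden = 1000, 64
mask_id = 0                                    # assumed id for a [MASK] token
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True), num_layers=2
)
embed = nn.Embedding(vocab_size, hidden)
predict = nn.Linear(hidden, vocab_size)        # linear map from hidden size to vocabulary

x = torch.randint(1, vocab_size, (8, 16))      # a batch of token ids (stand-in data)
is_masked = torch.rand(x.shape) < 0.15         # choose ~15% of positions to corrupt
x_tilde = torch.where(is_masked, torch.full_like(x, mask_id), x)   # corrupted input

hidden_states = encoder(embed(x_tilde))        # bidirectional contextual vectors
logits = predict(hidden_states)

# Cross-entropy only at the masked positions; every other position is ignored.
targets = torch.where(is_masked, x, torch.full_like(x, -100))
loss = nn.functional.cross_entropy(
    logits.view(-1, vocab_size), targets.view(-1), ignore_index=-100
)
loss.backward()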
And you can think of this as: instead of learning the probability of x, where x is a sentence or a document, this is learning the probability of x, the real document, given x tilde, which is the corrupted document with some of the information missing. So we get this sequence of vectors, one per word, which is the output of my encoder in blue. And then for the words that I want to predict, y_i, the probability is proportional to my embedding matrix times my representation--just a linear transformation of that last thing, this A h plus b in the red portion--and I do the prediction and train the entire network on it. Question: the words that we mask out--do we just select them randomly, or is there something to it? Mostly randomly. We'll talk about a slightly smarter scheme in a couple of slides, but yeah, mostly randomly. Question: what was that last part on the bottom, the x tilde, the masked version? I'm defining x tilde to be this input part--the masked version of the sentence, with some words missing--and then defining a probability distribution that's the probability of a sequence conditioned on the input being that corrupted, masked sequence. OK. So this brings us to a very, very popular NLP model that you need to know about. It's called BERT, and it was the first one to popularize this masked language modeling objective. They released the weights of this pretrained transformer, which they pretrained via something that looks a lot like masked language modeling. You can download them and use them via code released by the company Hugging Face, which we have continued to bring up. Many of you will use a model like BERT in your final project, because it's such a useful builder of representations of language in context. So let's talk a little bit about the details of masked language modeling in BERT. First, we take 15% of the subword tokens--remember, all of our inputs now are subword tokens; I've made them all look like words, but each of these tokens could just be some portion, some subword--and we do a couple of things with them. Sometimes I just mask out the word and predict the true word. Sometimes I replace the word with a random sample of another word from my vocabulary and predict the real word that was supposed to go there. And sometimes I don't change the word at all and still predict it. The intuition is the following: if I only had to build good representations, in the middle of this network, for words that are masked out, then when I actually use the model at test time on some real review to do sentiment analysis on, there are never going to be any mask tokens, so maybe the model won't do a very good job--it only learned to deal with the masked tokens. So I give it sequences where sometimes the real word is the one that needs to be predicted, and sometimes you have to detect whether a word is wrong.
The idea is that now, when I give it a sentence that doesn't have any masks, it actually does a good job of representing all the words in context, because it had this chance of being asked to predict anything at any time. The folks at Google who were defining this had a separate, additional task that is interesting to think about. This was their BERT model from their paper: they had position embeddings, just as we saw in the Transformers lecture, and token embeddings, just as we saw in the Transformers lecture, but also this thing called a segment embedding, with two possible segments, segment A and segment B. They had this additional task where they would get a big chunk of text for segment A and a big chunk of text for segment B, and then they would ask the model: is segment B a real continuation of segment A--was it the text that actually came next, or did I just pick this big segment randomly from somewhere else? The idea was that this should teach the network some notion of long-distance coherence, about the connection between a bunch of text over here and a bunch of text over there. It turns out it's not really necessary, but it's an interesting idea, and somewhat similar things have continued to have some influence since then. But again, you should get this intuition that we're trying to come up with hard problems for the network to solve, such that by solving them it has to learn a lot about language, and we're defining those problems by making simple transformations of, or removing information from, text that just happened to occur. Questions? Yeah, the plus signs--do we concatenate the vectors, or do we do element-wise addition? We do element-wise addition. You could have concatenated them; however, one of the big conventions of all of these networks is that you always have exactly the same number of dimensions everywhere, at every layer of the network. It just makes everything very simple, so saying everything's the same dimension and then doing addition ends up being simpler. Next question: why was the next-sentence prediction not necessary--what's the intuition for that? One thing it does that's a negative is that now the effective context length for a lot of your examples is halved. One of the things that's useful about pretraining, seemingly, is that you get to build representations of very long sequences of text. This example is very short, but in practice segment A was going to be something like 250 words and segment B 250 words, and in the paper that let us know this wasn't necessary, they always had a long segment of 500 words. It seemed to be useful to always have this very long context, because longer contexts help give you more information about the role that each word plays in that specific context. If I just see "record," it's hard to know what it's supposed to mean, but if I see a thousand words around it, it's much clearer what its role in that context is. So cutting the effective context size is one answer. Another thing is that this task is actually much more difficult: a much more recent paper, which I don't have in the slides, has shown that these models are really, really bad at the next-sentence prediction task.
So it could be that maybe it just was too hard at the time, and so it wasn't useful, because the model was failing to do it at all. I'll give the link for that paper later. Question: can you explain again why we need to do next-sentence prediction--what about just masking and predicting? That's the thing: you seem not to need to do next-sentence prediction. But as a matter of research history, it was thought that this was useful. The idea was that it required you to develop this sort of pairwise reasoning--do these two segments of text interact, how do they interact, are they related?--this longer-distance notion. And many NLP tasks are defined on pairs of things, so they thought that might be useful, and they published it with this. Then someone else came through and published a new model that didn't do that, and it did better. So there were intuitions as to why it could work; it just didn't. Question: so was BERT doing masking, or was it doing both? It was doing both--the next-sentence prediction training as well as the masking training, all at the same time. You had to have a separate predictor head on top of BERT, a separate classification thing. One detail there is that there's this special token at the beginning of every sequence in BERT, that CLS, and you can define a predictor on top of that fake word's embedding that says whether the next sentence is real or not. OK, I'm going to move on. And so this gets at the question we had earlier about how you evaluate these things. There are a lot of different NLP tasks out there--gosh. When people were writing these papers, they would look at a ton of different evaluations that had been compiled as a set of things that were still hard for the systems of the day. Are you detecting paraphrases--are two Quora questions actually the same question? That turns out to be hard. Can you do sentiment analysis on this hard data set? Can you tell whether sentences are linguistically acceptable--are they grammatical or not? Are two sequences similar semantically--do they mean vaguely the same thing? And we'll talk a bit about natural language inference later, but that's the task of deciding things like: if I say "I saw the dog," that does not necessarily mean "I saw the little dog," but saying "I saw the little dog" does mean "I saw the dog." And the difference between the pre-pretraining days--this row here, before you had substantial amounts of pretraining--and BERT was striking; the field was taken aback in a way that's hard to describe. Before, there were very carefully crafted architectures for each individual task: everyone was designing their own neural network and doing things they thought were clever in how to define all the connections and the weights and whatever for their task independently. Everyone was doing a different thing for each one of these tasks, roughly. All of that was blown out of the water by just building a big transformer, teaching it to predict the missing words a whole bunch, and then fine-tuning it on each of these tasks. So this was just a sea change in the field; people were amazed. It's a little bit less flashy than ChatGPT, I'll admit.
But it's really part of the story that gets us to it. OK, questions. So to get stuff out of the-- during the encoder pretraining stage, the encoder usually outputs some hidden values. How do we correlate those to words that we are trying to test against? So the question is, the encoder output is a bunch of hidden values, and how do we actually correlate those values to stuff that we want to predict? I'm going to move on to the next slide here to bring up this example here, right? So the encoder gives us, for each input word token, a vector that represents the token in context. And the question is, how do we get these representations and turn them into answers for the tasks that we care about? And the answer comes back to something like this, maybe. I'm not sure. So when we were doing the pretraining, we had the transformer that was giving us our representations. And we had this little last layer here, this little affine transformation that moved us from the encoder's hidden state size to the vocabulary to do our prediction. And we just remove this last prediction layer here. And let's say we want to do something that is classifying the sentiment of the sentence. We just pick, arbitrarily, maybe the last word in the sentence. And we stick a linear classifier on top and map it to positive or negative, and then fine-tune the whole thing. OK. So yeah, the BERT model came in two different sizes. One was 110 million parameters. One was 340 million. Keep that sort of in the back of your head, sort of percolating, as we talk about models with many, many more parameters later on. It was trained on 800 billion words plus-- that is definitely wrong. Maybe 25 million words, but on the order of less than a billion words of text. Quite a bit still. And it was trained on what was considered at the time to be a whole lot of compute. It was Google doing this, and they released it, and we were like, oh, who has that kind of compute but Google? Although nowadays it's not considered to be very much. But fine-tuning is practical and common on a single GPU. So you could take the BERT model that they spent a lot of time training and fine-tune it yourself on your task, even on a very small GPU. OK. So one question is, well, this seems really great. Why don't we just use this for everything? Yeah. And the answer is, well, what is the pretraining objective? What's the structure of the pretrained model good for? BERT is really good for filling in the blanks, but it's much less naturally used for actually generating text. So I wouldn't want to use BERT to generate a summary of something because it's not really built for it. It doesn't have a natural notion of predicting the next word, given all the words that came before it. So maybe I want to use BERT if I want a good representation of, say, a document, to classify it, give it one of a set of topic labels, just say it's toxic or non-toxic or whatever. But I wouldn't want to use it to generate a whole sequence. OK, some extensions of BERT-- so we had a question earlier of whether you just mask things out randomly. One thing that seems to work better is to mask out whole contiguous spans, because the difficulty of the one-subword problem is much easier than it would otherwise be-- this is part of the word "irresistibly," and you can tell very easily based on the subwords that came before it. Whereas if I mask a much longer span, it's a trade-off.
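Stepping back for a second to the fine-tuning recipe described a moment ago-- drop the vocabulary prediction layer, put a small classifier on one of the encoder's output vectors, and train everything-- here is a minimal PyTorch-style sketch. The encoder object, its output shape, and the class name are hypothetical stand-ins, not the actual BERT code.

import torch.nn as nn

class SentimentClassifier(nn.Module):
    def __init__(self, pretrained_encoder, hidden_size=768, num_labels=2):
        super().__init__()
        self.encoder = pretrained_encoder                      # pretrained transformer, assumed given
        self.classifier = nn.Linear(hidden_size, num_labels)   # new, randomly initialized head

    def forward(self, token_ids):
        hidden = self.encoder(token_ids)    # assume shape (batch, seq_len, hidden_size)
        pooled = hidden[:, 0, :]            # e.g. the first ([CLS]) position; the last word also works
        return self.classifier(pooled)      # logits for positive / negative

Fine-tuning then just means minimizing a cross-entropy loss on these logits and letting the gradients flow into every encoder parameter (or only a few of them, for the lightweight fine-tuning discussed next).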
So masking a longer span might be a harder problem, and it ends up being better to do this span-based masking than random masking, and that might be because subwords make very simple prediction problems when you mask out just one subword of a word versus all the subwords of a word. So this ends up doing much better. There's also a paper called the RoBERTa paper, which showed that the next-sentence prediction wasn't necessary. They also showed that they really should have trained it on a lot more text. So RoBERTa is a drop-in replacement for BERT. So if you're thinking of using BERT, just use RoBERTa. It's better. And it gave us this intuition that we really don't know a whole lot about the best practices for training these things. You sort of train it for as long as you're willing to, and things do good stuff and whatever. But it's very difficult to do sort of iteration on these models because they're big. It's expensive to train them. Another thing that you should know for your final projects and the world ahead is this notion of fine-tuning all parameters of the network versus just a couple of them. So what we've talked about so far is you pre-train all the parameters, and then you fine-tune all of them as well. So all the parameter values change. An alternative, which you call parameter-efficient or lightweight fine-tuning, is that you choose little bits of parameters, or you choose some very smart way of keeping most of the parameters fixed and only fine-tuning others. And the intuition is that these pretrained parameters were really good, and you want to make the minimal change from the pretrained model to the model that does what you want, so that you keep some of the generality, some of the goodness of the pretraining. So one way that this is done is called prefix tuning-- prompt tuning is very similar-- where you actually freeze all the parameters of the network. So I've pretrained my network here, and I never change any of the parameter values. Instead, I make a bunch of fake sort of pseudo-word vectors that I prepend to the very beginning of the sequence, and I train just them. It's sort of unintuitive. These would have been like inputs to the network, but I'm specifying them as parameters, and I'm training everything to do my sentiment analysis task just by changing the values of these sort of fake words. And this is nice because I get to keep all the good pretrained parameters and then just specify the sort of diff that ends up generalizing better. This is a very open field of research. But this is also cheaper because I don't have to compute the gradients, or I don't have to store the gradients and all the optimizer states with respect to all these parameters. I'm only training a very small number of parameters. Yeah. Does it make any difference to put these state parameters [INAUDIBLE] as if [INAUDIBLE]? It doesn't make any difference to put these at the end or the beginning. In a decoder, you have to put them at the beginning because otherwise, you don't see them before you process the whole sequence. Yes. Can we just attach new layers and only train the new layers [INAUDIBLE]? The question is, can we just attach new layers at the sort of top of this and only train those? Absolutely. This works a bit better. Another thing that works well-- sorry, we're running out of time-- is taking each weight matrix. So I have a bunch of weight matrices in my transformer, and I freeze the weight matrix and learn a very low-rank little diff.
And I set the weight matrix's value to be the original value plus my very low-rank diff from the original one. And this ends up being a very similarly useful technique. And the overall idea here is that, again, I'm learning way fewer parameters than I did via pretraining and freezing most of the pretraining parameters. OK, encoder-decoders. So for encoder-decoders, we could do something like language modeling. I've got my input sequence here, encoder output sequence here. And I could say this part is my prefix for having bidirectional context. And I could then predict all the words that are in the latter half of the sequence, just like a language model, and that would work fine. And so this is something that you could do, right? You take a long text, split it into two, give half of it to the encoder, and then generate the second half with the decoder. But in practice, what works much better is this notion of span corruption. Span corruption is going to show up in your assignment 5. And the idea here is a lot like BERT but in a sort of generative sense where I'm going to mask out a bunch of words in the input "Thank you--" mask token one, "me to your party--" mask token two, "week." And then, at the output, I generate the mask token, and then what was supposed to be there, but the mask token replaced it. So "Thank you," then predict "--for inviting--" at the output, "--me to your party last week." And what this does is that it allows you to have bidirectional context. I get to see the whole sequence, except I can generate the parts that were missing. So this feels a little bit like BERT. You mask out parts of the input, but you actually generate the output as a sequence like you would in language modeling. So this might be good for something like machine translation, where I have an input that I want bidirectional context in, but then I want to generate an output and I want to pre-train the whole thing. So this was shown to work better than language modeling at the scales that these folks at Google were able to test back in 2018. This is still quite popular. Yeah, there's a lot of numbers. It works better than the other stuff. I'm not going to worry about it. There's a fascinating property of these models also. So T5 was the model that was originally introduced with Salient Span Masking. And you can think of-- at pretraining time, you saw a bunch of things like "Franklin D. Roosevelt was born in--" blank, and you generated out the blank. And there's this task called Open-Domain Question Answering, which has a bunch of trivia questions, like "When was Franklin D. Roosevelt born?" And you're supposed to generate out the answer as a string, just from your parameters. So you did a bunch of pretraining, you saw a bunch of text, and then you're supposed to generate these answers. And what's fascinating is that this sort of Salient Span Masking method allowed you to pre-train and then fine-tune on some examples of questions, trivia questions. And then when you test it on new trivia questions, it would-- the model would implicitly extract from its pretraining data somehow the answer to that new question that it never saw explicitly at fine-tuning time. So it learned this sort of implicit retrieval sometimes, less than 50% of the time or whatever, but much more than random chance, yeah. And that's sort of fascinating. So you've learned to access this sort of latent knowledge that you've stored up by pretraining. And so, yeah, you just pass it the text "When was Roosevelt born?" 
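As an aside, here is a minimal sketch of how a span-corruption training pair could be constructed, in the spirit of the objective just described. The sentinel format and the hard-coded span positions are simplified assumptions for illustration, not the exact T5 preprocessing.

def span_corrupt(tokens, spans):
    """spans: list of (start, end) index pairs to remove from the input."""
    inputs, targets = [], []
    prev_end = 0
    for sentinel_id, (start, end) in enumerate(spans):
        inputs += tokens[prev_end:start] + [f"<extra_id_{sentinel_id}>"]
        targets += [f"<extra_id_{sentinel_id}>"] + tokens[start:end]
        prev_end = end
    inputs += tokens[prev_end:]
    return inputs, targets

toks = "Thank you for inviting me to your party last week".split()
# Mask out "for inviting" and "last":
print(span_corrupt(toks, [(2, 4), (8, 9)]))
# inputs:  Thank you <extra_id_0> me to your party <extra_id_1> week
# targets: <extra_id_0> for inviting <extra_id_1> last

The encoder sees the corrupted input with bidirectional context, and the decoder generates the target sequence of sentinels plus the missing spans, just like language modeling.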
And coming back to the Roosevelt example-- it would pass out an answer. And one thing to know is that the answers always look very fluent. They always look very reasonable, but they're frequently wrong. And that's still true of things like ChatGPT. Yeah, OK. So that's encoder-decoder models. Next up, we've got decoders. And we'll spend a long time on decoders. So this is just our normal language model. So we get a sequence of hidden states from our decoder. The words can only look at themselves and not the future. And then, I predict the next word in the sentence. And then, here again, I can do sentiment analysis-- maybe take the last state, the last word, and then predict "happy" or "sad" based on that last embedding, back-propagate the gradients through the whole network, train the whole thing, or do some kind of lightweight or parameter-efficient fine-tuning like we mentioned earlier. So this is pretraining a decoder. And I can just pre-train it on language modeling. So again, you might want to do this if you are wanting to generate text, generate things. You can use this like you use an encoder-decoder. But in practice, as we'll see, a lot of the sort of biggest, most powerful pretrained models tend to be decoder only. It's not really clear exactly why, except they seem a little bit simpler than encoder-decoders. And you get to share all the parameters in one big network for the decoder, whereas with an encoder-decoder, you have to split them, sort of some into the encoder, some into the decoder. So for the rest of this lecture, we'll talk only about decoders. And in modern times, the biggest networks do tend to be decoders. So we're coming all the way back again to 2018, and the GPT model from OpenAI was a big success. It had 117 million parameters. It had 768-dimensional hidden states, and it had this vocabulary that was 40,000-ish words that was defined via a method like what we showed at the beginning of class, trained on BooksCorpus. And actually, the name GPT never actually showed up in the original paper. It's unclear what exactly it's supposed to refer to. But this model was a precursor to all the things that you're hearing about nowadays. If you move forward-- oh, yeah, let's see here. So if we wanted to do something like natural language inference, right, which says take these pairs of sentences, "The man is in the doorway," "The person is near the door," and say that these mean-- that one entails the other, the sort of premise entails the hypothesis, that I can believe the hypothesis if I believe the premise-- I just sort of concatenate them together. So give it maybe a start token, pass in one sentence, pass in some delimiter token, pass in the other. And then, predict yes/no, entailment or not entailment. Fine-tuning GPT on this worked really well. And then BERT came after GPT. BERT did a bit better. It had bidirectional context. But GPT did sort of an excellent job. And then came GPT-2, where they focused more on the generative abilities of the network, right? So we looked at now a much larger network. We've gone from 117 million to 1.5 billion. And given some sort of prompt, it could generate, at the time, a quite surprisingly coherent continuation to the prompt. So it's telling this sort of story about scientists and unicorns here. And this size of model is still small enough that you can use it on a small GPU and fine-tune it and whatever. And its capability of generating long coherent texts was just sort of exceptional at the time.
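Since everything from here on is decoder-only, here is a minimal sketch of the pretraining objective these models use: shift the tokens by one so that each position's target is the next word. The toy token ids are made up.

def shifted_lm_batch(token_ids):
    # Standard trick for training a decoder: feed tokens[:-1] and predict tokens[1:].
    # The causal attention mask inside the transformer ensures position t only sees
    # positions <= t, so each target really is "next word given all words so far".
    return token_ids[:-1], token_ids[1:]

ids = [5, 17, 42, 8, 99]          # toy token ids
inputs, targets = shifted_lm_batch(ids)
print(inputs, targets)            # [5, 17, 42, 8] [17, 42, 8, 99]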
GPT-2 was also trained on more data, although I don't-- something like 9 billion words of text. And then, after GPT-2, we come to GPT-3. We're walking through these models. And then, we come up with a different way of interacting with the models. So we've interacted with pretrained models in two ways so far. We've sampled from the distribution that they define-- we generated text via a machine translation system or whatever-- or we fine-tune them on a task that we care about, and then we take their predictions. But GPT-3 seems to have an interesting new ability. It's much larger, and it can do some tasks without any sort of fine-tuning whatsoever. GPT-3 is much larger than GPT-2. So we went from GPT at 100-ish million parameters, to GPT-2 at 1.5 billion, to GPT-3 at 175 billion-- much larger, trained on 300 billion words of text. And this notion that it could figure out patterns in the example that it's currently seeing and continue the pattern is called in-context learning. So you've got the word "thanks." And I pass in this little arrow and say, OK, "thanks" goes to "merci," and then "hello" goes to "bonjour." And then, you give it all of these examples and ask it what "otter" should go to. And it's learned to continue the pattern and say that this is the translation of "otter." So now, remember, this is a single input that I've given to my model. And I haven't said, oh, do translation, or fine-tuned it on translation or whatever. I've just passed in the input, given it some examples. And then it is able, to some extent, to do this seemingly complex task. That's in-context learning. And here are more examples. Maybe you give it examples of addition, and then it can do some simple addition afterward. You give it-- in this case, this is sort of rewriting typos. It can figure out how to rewrite typos, or do in-context learning for machine translation. And this was the start of this idea that there were these emergent properties that showed up in much larger models. And it wasn't clear when looking at the smaller models that you'd get this qualitatively new behavior out of them. It's not obvious from just the language modeling signal-- GPT-3 is just trained on that decoder-only, predict-the-next-word objective-- that it would, as a result of that training, learn to perform seemingly quite complex things as a function of its context. Yeah, OK. One or two questions about that. This should be quite surprising, I think. So far, we've talked about good representations, contextual representations, meanings of words, and context. This is some very, very high-level pattern matching. It's coming up with patterns in just the input data, that one sequence of text that you've passed it so far, and it's able to identify how to complete the pattern. And you should think, what kinds of things can this solve? What are its capabilities? What are its limitations? This ends up being an open area of research. What are the kinds of problems that it maybe saw in the training data? Like, maybe GPT-3 saw a ton of pairs of words, right? It saw a bunch of dictionaries, bilingual dictionaries, in its training data. So it learned to do something like this. Or is it doing something much more general, where it's really learning the task in context? The actual story, we're not totally sure. It's something in the middle. It seems like it has to be tied to your training data in ways that we don't quite understand.
But there's also a non-trivial ability to learn new, sort of at least, types of patterns just from the context. So this is a very interesting thing to work on. Now, we've talked a lot about the size of these models so far. And as models have gotten larger, they've always gotten better. We trained them on more data, right? So GPT-3 was trained on 300 billion words of text, and it was 175 billion parameters. And at that scale, it costs a lot of money to build these things. And it's very unclear whether you're getting the best use out of your money, is bigger really what you should have been doing in terms of the number of parameters? So the cost of training one of these is roughly you take the number of parameters, you multiply it by the number of tokens that you're going to train it on, the number of words. And some folks at DeepMind, I forgot the citation on this-- some folks at DeepMind realized through some experimentation that actually GPT-3 was just comically oversized. So Chinchilla, the model they trained, is less than half the size and works better, but they just trained it on way more data. And this is an interesting of trade-off about how do you best spend your compute. You can't do this more than a handful of times, even if you're Google. So open questions there as well. Another sort of way of interacting with these networks that has come out recently is called Chain of Thought. So the prefix, right, we saw in the in-context learning slide that the prefix can help sort of specify what task you're trying to solve right now. And it can do even more. So here's standard sort of prompting. We have a prefix of examples of questions and answers. So you have a question and then an example answer. So that's your prompt that's specifying the task. And then you have a new question, and you're having the model generate an answer, and it generates it wrong. And Chain-of-Thought Prompting says, well, how about in the example? In the demonstration we give, we give the question, and then we give this sort of decomposition of steps towards how to get an answer, right? So I'm actually writing this out as part of the input. I'm giving annotations as a human to say, oh, to solve this sort of word problem, here's how you could think it through-ish. And then I give it a new question. And the model says, oh, I know what I'm supposed to do. I'm supposed to first generate a sequence of steps, of intermediate steps, and then next say the answer is-- and then say what the answer is. And it turns out, and this should again be very surprising, that the model can tend to generate plausible sequences of steps and then much more frequently generates the correct answer after doing so relative to trying to generate the answer by itself. So you can think of this as a scratchpad. You can think of this as increasing the amount of computation that you're putting into trying to solve the problem. You writing out your thoughts. As I generate each word of this continuation here, I'm able to condition on all the past words so far. And so maybe it just, yeah, allows the network to decompose the problem into smaller, simpler problems, which is more able to solve each. No one's really sure why this works exactly, either. At this point, with networks that are this large, their emergent properties are both very powerful and exceptionally hard to understand and very hard you should think to trust because it's unclear what its capabilities are, and what its limitations are, where it will fail. 
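Mechanically, both in-context learning and chain-of-thought prompting are nothing more than building a longer text prefix and asking the model to continue it. A minimal sketch of the prompt construction; the generate() call at the end stands in for whatever sampling function your language model exposes and is a hypothetical name, not a real API.

def few_shot_prompt(examples, query):
    # In-context learning: show input -> output pairs, then ask for the next output.
    lines = [f"{x} -> {y}" for x, y in examples]
    return "\n".join(lines + [f"{query} ->"])

def chain_of_thought_prompt(question, worked_example):
    # Chain of thought: the demonstration includes the intermediate reasoning steps,
    # so the model learns to produce steps before "The answer is ...".
    demo_q, demo_steps, demo_answer = worked_example
    return (f"Q: {demo_q}\nA: {demo_steps} The answer is {demo_answer}.\n"
            f"Q: {question}\nA:")

print(few_shot_prompt([("thanks", "merci"), ("hello", "bonjour")], "otter"))
# completion = generate(few_shot_prompt(...))   # hypothetical model call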
So why do we think pretraining is teaching? Gosh, a wide range of things, even beyond what I've written in this slide, which I mostly wrote two years ago. So it can teach you trivia, and syntax, and co-reference, and maybe some lexical semantics, and sentiment and some reasoning, way more reasoning than we would have thought even three years ago. And yet they also learn and exacerbate racism and sexism, all manner of biases. There will be more on this later. The generality of this is really, I think, what's taken many people aback. And so, increasingly, these objects are not just studied for the sake of using them but studied for the sake of understanding anything about how they work and how they fail. Yeah. Any questions? Has anyone tried benchmarking GPT for programming tasks like how accurately it does, et cetera? Yeah, the question is, has anyone tried benchmarking GPT for programming tasks? Has anyone seen how well it does? Yes, so there's definitely examples of people using GPT-3 for simple programming things and then the modern state-of-the-art competitive programming bots are all based on ideas from language modeling. And I think they're all also based on pretrained language models themselves. If you just take all of these ideas and apply it to GitHub, then you get some very interesting emergent behaviors relating to code fallout. And so yeah, I think all of the best systems use this more or less. So lots of benchmarking there for sure. Is this the basis for what GitHub Copilot is trying to do? The question is, is this the basis? Is that what we just mentioned, the basis for the GitHub Copilot system? Yes, absolutely. We don't know exactly what it is in terms of details, but it's all these ideas. What if you have a situation where you have still a large amount of data for general data, and then you have also a large amount of data for your fine-tuning task? At what point is it better to train a new model for that fine-tuning versus get data from both? Yeah, the question is, what if you have a large amount of data for pretraining and a large amount of data for fine-tuning? When is it better to do a separate training on just the fine-tuning data? Almost never. If you have a bunch of data for the task that you care about, what's frequently done instead is three-part training where you pre-train on a very broad corpus. Then, you sort of continue to pre-train using something like language modeling on an unlabeled version of the labeled data that you have. You just strip the labels off and just treat it all as text, and do language modeling on that, adapt the parameters a little bit, and then do the final stage of fine-tuning with the labels that you want. And that works even better. There's an interesting paper called Don't Stop Pretraining. Nice. Final question. Just one question, sorry. Anyone new? Someone new with a question? Yes. Yeah, I was wondering, do you know if there's a lot of instances where a pretrained model can do some task that it has not seen before, even without fine-tuning? Yeah, so are there any instances of where a pretrained model can do a task that it hasn't seen before without fine-tuning? The question is, what does "it hasn't seen before" mean, right? These models, especially GPT-3 and similar very large models, during pretraining did it ever see something exactly like this sort of word problem arithmetic? Maybe. Maybe not, it's actually sort of unclear. It's clearly able to recombine sort of bits and pieces of tasks that it saw implicitly during pretraining. 
We saw the same thing with trivia, right? Language modeling looks a lot like trivia sometimes, where you just read the first paragraph of a Wikipedia page, and it's kind of answering a bunch of little trivia questions about where someone was born and when. But it's never seen something quite like this. And it's actually still kind of astounding how much it's able to do things that don't seem like they should have shown up all that directly in the pretraining data. Quantifying that extent is an open research problem. OK, that's it. Let's call it.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_2023_Lecture_15_Code_Generation.txt
So this is lecture 15, and today we'll be talking about code generation. So a little bit unusual since we'll be generating unnatural languages this time, but it will connect in a number of ways to natural language generation. So before we start, just a few announcements. The project milestone is due this Thursday. You are certainly all aware of that. And also, when doing the projects, it's always good to keep track of how much you're spending on Azure and AWS. And one thing to notice is that disk costs money. It doesn't cost that much compared to GPUs, but it still costs something. Be sure to not be spending all your money on disk. So tomorrow, John will be running a discussion on training large language models. It'll be really cool. So it'll be at 3:30 in the Skilling Auditorium. There's more details on that. And this Thursday, we have our first invited talk in our regular lecture time, and attendance is expected. So please everyone show up. It'll be really cool. All right. So let's get started. So when we're talking about a problem that, in the literature, is called program synthesis. And let's see what that means. So program synthesis is actually a pretty old challenge of artificial intelligence, and the goal is to create programs that can take some sort of specification and write a program that satisfies that specification. So it's a program that writes a program. So that's what a program synthesizer is, right? It's a program that takes your specification, and is able to generate some program. And then you can ask, "what kind of specification"? So one possible specification, for example, could be a logical formula. It could be like a mathematical formula that specifies what behavior we want from the program. It could be an equivalence program. So I could say, OK, here is a slow implementation of a sorting algorithm-- bubble sort for example-- and it runs in o of n squared, and I want to synthesize another program that's equivalent, so it generates all the same outputs given the same inputs, but it's maybe faster. So that could be a form of specification. I could give examples, right? I could say, OK, I want a program that if I give it this input, it should generate this output, if I give this string, it should give me back this string. Or, as more popular these days, we could also maybe in addition to or instead of these other kinds of specifications, also give a natural language description, right? I could just write, I want a program that performs a certain operation. So just to warm up, let's see how this synthesis from logical specifications could look like. So when would it make sense to use the program synthesizer at all? So it would only make sense to use a program to write a program for us if that's in some way easier than writing the program ourselves, right? It should be easier to specify what the program does compared to exactly how it should do that. And this is different than natural language generation in an important way in that we usually have ways to test our output automatically, right? So if I give the synthesizer, OK, I want a program that given these inputs, generates these outputs, and the synthesizer gives me back a program. I can go there and execute the program on the inputs that I gave and verify that it generates the correct outputs. And this is different than a natural language task. For example, if I ask it to summarize an article or a paragraph and it gives me back a response, and I can evaluate it in some ways. 
I can compare it to human reference summaries or I can use a language model to evaluate the output of another language model, but I can't execute the summary and verify that it's a good summary. So-- Yes. How can you make certain that the output is always correct, considering like-- I mean, without formal verification, how can you just make sure that the output program is correct, since you'll just be checking it on the test cases that the-- Yeah. That's a good question. So the question was, how can we make sure that the output is correct in general? Well, it depends on what specification we have, right? If the specification is input/output examples, all we can do is verify that it satisfies those examples. We'll talk about the problem with that in a little bit. Any other questions about this setup? I'll give you an example, so it will be very concrete starting now. OK. So let's see how this could work. Let's try to specify a program using this sort of logical specification. So our first attempt will be to specify how do I sort an array, right? I want a program that receives an array as input and returns a sorted array. So how would I write that mathematically? Our first attempt could be, well, let's say that this program takes an array A and outputs an array B. I can specify that I want the array B to be sorted. So, mathematically, I could write that as: for all of the indices i of the output, I want the element at that index to be at most the next element, right? So sorted in increasing order. So I can look at this statement and say, oh yes, if the output satisfies that, then it's a sorted array. Does this look good? Maybe, right? So I can give that specification to a synthesizer, and then it will go and search for programs that satisfy this, and then it returns this program, which is called sort, takes an array A, and returns the array 1, 2. So if you look at the mathematical formula, it'd say, well, for all of the indices of the output, that element is smaller than or equal to the next element. So it satisfies the specification that we gave, but of course not the program we wanted. OK. So maybe we missed something. We missed that the output not only should be sorted, but also should have the same elements as the input. So I can specify that as: I want the array B to have the same length as array A, and it has to be a permutation-- each element of the output has to be somewhere there in the input. And then, writing a little bit more formally in first-order logic, it would look like that. You don't have to try to parse it. And then, if I give that to the synthesizer, maybe it will go and search for some programs and return like QuickSort or some function that actually sorts the array. So note that the problem here is quite non-trivial because the formula, as ugly as it is, doesn't tell us how to sort the array. It just says that the array should be sorted in some way. So it's not just a syntactical translation between the formula that we gave and the programming language that we're targeting. But the thing that's obvious here is that these logical specifications are quite hard to read, they're quite hard to write, of course, and also to check, right? If I just gave you the formula that says an array's sorted, maybe at first it's not easy to see the corner case that just being sorted is not enough.
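One nice property of a specification like this is that, even though it never says how to sort, it is something we can run as a check on a candidate output. A minimal sketch of the two conditions just described, written as a Python checker rather than a first-order formula:

from collections import Counter

def satisfies_sort_spec(A, B):
    is_sorted = all(B[i] <= B[i + 1] for i in range(len(B) - 1))
    same_elements = Counter(A) == Counter(B)   # same length and a permutation of the input
    return is_sorted and same_elements

print(satisfies_sort_spec([3, 1, 2], [1, 2]))      # False: sorted, but elements were dropped
print(satisfies_sort_spec([3, 1, 2], [1, 2, 3]))   # True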
And I mean if I tell you that we are making a synthesizer that takes this formula and returns like a function that sorts an array, you could reasonably say that maybe it's just easier to write the function yourself, but it is quite a challenge to do so even then. Any questions about the setup here? OK. So maybe logical formulas are too much, right? We don't want to be specifying even simple programs like sorting with those ugly first order formulas. We could try something simpler. We could try examples. So input output examples is a very natural kind of specification. And in fact, when writing programs, software engineers usually already write tests, which are kind of like input output examples, right? Like if I call the function with this input, it should return this. I assert that it does that. So how could I specify sorting in that case? I could say, well, if I give the array 3, 2, 1, 0, it should return 0, 1, 2, 3. For 1, 4, 2, it should return 1, 2, 4. And for 9, it should return 9. Right? Any human looking at these inputs and outputs could reasonably guess, that oh, it's just sorting the input array, right? But as we just saw with the logical synthesizer, we could also get a program that looks like this. Well, if the array has exactly four elements returned, 0, 1, 2, 3, and if it has 3 returns this exact array and otherwise always return 9, right? It satisfies the input output examples, but somehow it's still not what we want. Of course this is a kind of an adversarial output, and synthesis by example was actually massively used in the last decade because of this feature in Excel called "FlashFill," which was released in 2013, and it was for a while one of the hottest things to have happened to Microsoft Excel. So FlashFill is this really cool feature where the goal is for Excel to guess what string transformation you're applying. So you can write-- for example, if you have a column that has people's first and last names, and you want to just get the first name, for example, of everyone. And you create a second column and you type, like in this example, Ned, then Excel, if you click on the FlashFill button, it will magically guess that what you're doing is you're splitting on the space and maybe taking the first of those strings and suggest you complete that as the second column. And it can actually do quite complex transformations and usually from one or two examples, and it's quite cool. But as is clear at this point, synthesis from examples has this inherent problem of ambiguity, right? For any set of examples-- input/output examples-- that I give, there will be usually an infinite number of programs that have exactly that behavior on those examples, right? But somehow that's very non-human because humans for some reason have a very specific preference over this infinite space of programs. Like if I look at this program that does this, even if I don't tell you what kind of program was I looking at in the first place, it's very obvious that this program probably not useful for anything. But it's obvious for you, not to a synthesizer, necessarily, that's trying to find a program. So, for example, what program I'm specifying here with these two examples? Jan transforms to January and Feb transforms to February. Any human guesses about what this should do? It takes a short name for a month and expands it to [INAUDIBLE]. Yeah. Exactly. 
It should obviously do that, but for a while-- I think maybe not-- I'm not sure if this fix was released or is going to be released, but for a while, this is what FlashFill would do. It would complete Feb with February, March with Maruary, April, Apruary, and so on, right? So it guessed from one example what you're doing is just concatenating "uary" on the string that you had. So clearly extrapolates from any other possible string that you might want. So how do we do we deal with this ambiguity? We'll talk a little bit about that. But just to summarize what we've seen so far, a synthesizer is this program that takes some form of specification of what a program should do, and then generates a program. And if we get this to work, this would actually have massive impact in a number of ways. It can lower the barrier to access programming to a lot of people that maybe don't want to spend four years taking CS classes. So for example, people can automate a lot of things just by using FlashFill in Excel things-- things that would take a lot more time. And even programmers ourselves can benefit from much higher productivity if we can program at higher level ways. So this is quite an interesting goal, but it, of course, has many challenges, right? It has this infinite space of programs. A lot of them are unreasonable in this human way. And here we're talking about-- at least for now, right-- searching in the space of programs in a very specific language where we can do search, but it's of course impractical to do search in any real-word language like Python, and we have this ambiguity problem, right? Like how do you capture human preferences? So we'll talk here about the connection between this problem of ambiguity in program synthesis and ambiguity in natural language, which is extremely common. So human languages are extremely ambiguous, and if you stop to look at it more closely, it's actually quite surprising that we manage to communicate so well and so easily even though if you look up almost any word in the dictionary, it will have a large number of meanings that it might have. Even sentences out of context can usually have multiple interpretations, but we somehow do just fine talking in English, in this very ambiguous medium. And in fact, ambiguity is not even a bug of human languages. It's a feature, and it's a feature for efficiency. So, actually, there's this paper here that's pretty cool that provides true arguments based on information theory that any communication channel where, basically, the meaning of words can be disambiguated in context, we'll make those words at some point collide to make them both short. So, for example, if I have "bear" the animal and "bear" the verb, they usually appear in very different contexts, right? So it would actually be very inefficient to create a word to separate those because, at some point, I would be adding both more and longer words in my vocabulary, right? So if they can be disambiguated to be optimal from a communication perspective, I'll actually get ambiguity at some point. And there's one very interesting challenge for computers to resolve this kind of ambiguity called the Winograd Schema Challenge. And if you read the examples, they're quite entertaining because you read them and it's very obvious what's going on, but it's also obvious what's the challenge. So here we have these two sentences. "The city councilmen refused the demonstrators a permit because they feared violence." 
And the obvious ambiguity here is that "they" could refer to the city councilmen or the demonstrators, right? But when you hear "they feared violence," what's the obvious candidate here for what "they" refers to? Yeah. Exactly. And when you say "they advocated violence," then you suddenly process the sentence in a different way. And syntactically, the sentences are exactly the same, but just because of your prior knowledge about how these actors behave in the world. You use that to disambiguate the your different meanings. Yeah. So this is very easy for us, handling this kind of ambiguity. How do we do it? It's an interesting question. How do humans do this? And the linguistic term for the kind of reasoning that we do in this setting is called "pragmatic reasoning." So in linguistics, we have this distinction between semantics and pragmatics of how do we attribute meaning to things. Like semantics talks about the intrinsic meaning of words in a certain sense. And pragmatics, how does that change in context. And to do this kind of resolution of ambiguity, we have to operate with some sort of assumption that helps us get off the ground. And one important assumption here is this assumption of cooperativity. So when we're talking to someone, we assume that they're trying to help us understand what they're saying. So they won't be adversarial as the program synthesizer was in those examples. And we can use that assumption to do reasoning context and perform pragmatic reasoning. So I'll show here one model of pragmatic reasoning called the "RSA" or "Rational Speech Acts," which is a Bayesian model of how this could work in simple scenarios. So here we assume that we have two people like a speaker and a listener. The speaker wants to refer to a certain object or person and is going to choose an utterance for that like a word or a sentence to refer to that object, right? And then, the listener on the other side is receiving this utterance and trying to infer, OK, what does this speaker mean? What are they referring to? What object or what person? So one really cool example here on the right is this where you have these two people. And then, the person on the right has, "my friend has glasses," and there are three people here. There is one person wearing no glasses and no hat, there's a person just wearing glasses, and a person wearing a glasses and a hat. When you hear that this person's saying, "my friend has glasses," well it's of course ambiguous in this case because there are two people wearing glasses, but does anyone have an intuition of who would you guess they're talking about? Yeah. Maybe just the middle one because that's the most distinguished or the only distinguishing factor that they have. Yeah. The middle one is the one you go to. [INAUDIBLE] Oh, yeah, because if you wanted to refer to the one on the right, you would have said, "my friend in the hat." Yeah. Exactly. So you just described RSA basically. So we do this kind of recursive reasoning apparently, right, where we think, OK, so if they wanted to refer to the person with a hat, they could have said hat and that would have not been ambiguous, but they did not say hat, which probably means something about what they intended to refer to. So RSA is a very simple Bayesian model of exactly this process. So just to work through an example, let's say that we have these three objects. A blue square, a circle, which is also blue, and then a green square, and we have these four utterances that we can use. A very small vocabulary. 
Blue, green, circle, and square. So in RSA we will bootstrap this process from a literal listener, which is a listener that can only understand literal meaning. So if you give this listener some utterance u, the literal listener, which we'll call L0, will put uniform probability on all the objects that satisfy u. So if you say blue it will put uniform over all the blue objects. If you say square, it'll put uniform probability over the squares. And that's the distribution of beliefs that the literal listener puts. So assuming that you're talking to that literal listener, now you can create a pragmatic speaker which will choose some utterance to refer to an object based on the probability that the literal listener will understand what they're saying. So, basically, for each of the words in our-- or utterances that I could say, maybe it could be extremely specific. Like I could write a text exactly describing that object, but that would be very costly, right? So I want to be concise, but at the same time I can't be too concise because otherwise I might not specify what I want to say. Like I will not be understood, right? So I can imagine this pragmatic speaker S1, which is trying to maximize this balance between the probability that the literal listener will guess the intended object minus some cost, which, in this case, could be uniform probability. And then, from that pragmatic listener, now I can create a pragmatic speaker that will choose an utterance based on the probability that the pragmatic speaker would have chosen that utterance to refer to that object. Sorry, the listener L1 will choose an object, will guess a belief over the object based on the probability that the speaker S1 would have chosen that utterance to refer to each of the objects. And here I could recourse, right? I could create a listener L2 which reasons about the speaker. Sorry, I could choose a speaker S2 which is talking with the listener L1 in their head and then a listener L2 and so on. But usually this listener-speaker pair, S1 and L1, is often enough to model human judgments in these settings. Does this make sense? How this recursive process is happening? OK. Yeah. So assuming these three objects and a speaker says blue, again following the same example, the glasses and hats, what would you guess or what's your first intuition about what object they would refer to? The square. Yeah. The square is typically what people do. So a literal listener would say, OK, it's completely ambiguous, right? Like 50% on the square and on the circle. But if you set up a human experiment where people are receiving these utterances and saying how much they believe each of the objects is the intended object, they will put around 40% probability on the circle and 60% on the square, which is very close to what RSA predicts. OK. So this gives a mechanism for resolving ambiguity in this listener-speaker setting. And one way to see program synthesis is as a setting where we are the speakers, we're talking to the synthesizer, and we are speaking, for example, input/output examples. And we want to refer to a certain program from a set of programs and we're speaking examples and the synthesizer is our listener. We're just trying to infer what program are we referring to. And the examples that we were seeing, the synthesizer was being extremely literal, right? 
So like, oh, if you say that given A, it should return B, it could be any of the programs that exist that return B, but now we have a process that can maybe refine this reasoning a little bit, right? We have RSA and we can almost directly apply it in the setting where we can build this meaning matrix where in one dimension we have all the programs. So let's assume for simplicity that we have a finite set of programs and also a finite set of examples that can be given to the synthesizer. So in that setting we can make this matrix where each entry corresponds to a program being ran on one example. And we have 1 if the program satisfies that example, like it returns 2 or an other example for example and 0 otherwise. So this matrix directly gives us a literal listener for this setting. If I give an example, a literal synthesizer could just look at this table and say, OK, these are all the programs that set aside those examples. Maybe I'll sample one of those at random. But I could use the RSA recursion to derive L1 and L2, and those would be like pragmatic synthesizers. And in a human experiment ran in this paper, which I won't get into a lot of detail in their setting, but they ran this experiment where people were trying to specify a program that draws a pattern on a grid. And the specification was through examples by basically saying, OK, like, the pattern contains the square or does not contain this square, and people had a much easier time communicating the pattern that they wanted with the pragmatic synthesizer, which is just a quite cool result, I think. Yeah. So, of course, the assumptions here are that the set of programs and of examples is finite, which is quite restrictive. It's not true of real programming languages, but it does present an interesting challenge right? Like can we extend this kind of approach to an infinite set of programs like real programming languages, and maybe also we want richer kinds of specifications. Instead of just saying, the behavior of the program specific examples, we could try to handle natural language. Any questions about this connection between-- yes. Have you ever considered in this whole program synthesis just generally how we would typically want like a simple-- with this sort of example, like how we had different edge cases on like if, if, if. Do we do we account for the fact that-- would we penalize a longer program or more complicated program when trying to consider something like that? Yeah. So the question was, in program synthesis do people use biases like, "find the shortest program" for example, or the simplest program that satisfies the specification. And the question is both yes and no. It's "yes" in the sense that most search based synthesizers will usually find very short programs, but not because people use that as a bias necessarily for disambiguating, but just because it's much easier to find threaded programs. So like if you're doing search on a space of programs, the chance that you find like a 100 line program that satisfies the specification is naturally much smaller than you finding a short one. Now, to be able to do search in the first time-- a lot of research in this area in the last decades has been exactly how to design specific languages and search spaces so that this can be done. Does that make sense? Any other questions? OK. 
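To make the recursion over the meaning matrix concrete, here is a minimal sketch of the literal and pragmatic listeners, following the RSA setup from earlier. The tiny matrix is made up for illustration, and the speaker here ignores the cost term.

import numpy as np

# meaning[i][j] = 1 if program i is consistent with example j (toy matrix).
meaning = np.array([[1, 1, 0],
                    [1, 0, 1],
                    [1, 1, 1]], dtype=float)

def normalize_rows(m):
    return m / m.sum(axis=1, keepdims=True)

# Literal listener: given an example (a column), uniform over consistent programs.
L0 = normalize_rows(meaning.T)          # shape: examples x programs
# Pragmatic speaker: picks the example most likely to make L0 guess their program.
S1 = normalize_rows(L0.T)               # shape: programs x examples
# Pragmatic listener: reasons about which program would have led S1 to give this example.
L1 = normalize_rows(S1.T)               # shape: examples x programs

print(L0[0])   # literal guess after seeing example 0: uniform over all consistent programs
print(L1[0])   # pragmatic guess: the always-consistent program gets downweighted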
So we've been talking a lot about language models in the class, and as you know, if I give a prefix of anything that can show up on the internet, the language model gives me a distribution over what can come next. So, for example, if I say, "Stanford University is located in the state of," the model, having been trained on Wikipedia and other sources, would put much higher probability on the sentence continuing with "California" rather than another US state. Language modeling is quite hard. It's like a really, really hard problem because, as John talked about in his lecture, a lot of things can be reduced to language modeling. So if I say, "Theodore Landon Streleski, a former grad student in mathematics at Stanford," for example, and I ask GPT-3 to give plausible completions, it gives "became known for his advocacy of the use of psychedelic drugs" or "a homeless advocate." And I mean this sounds plausible, maybe. The ground truth in this case from Wikipedia is he murdered his former advisor, which might be quite hard to predict, given this prefix. And it turns out that if I give GPT-3 a prefix such as "the following is a Python function that when given the list 1, 3, 2 returns 1, 2, 3," it will complete exactly with this program: "def sort_list(lst): lst.sort(); return lst," which, depending on what year you were born, is quite surprising. It's quite amazing that a model that can predict "California" from "Stanford University is located in" can, with the exact same mechanism, generate valid Python code. So this was a realization that people made very quickly after GPT-3 came out, right? Given simple Python docstrings, it was able to generate Python functions that implemented those docstrings, even without having been trained explicitly for that-- and code was not a large part of GPT-3's training set anyway. So the natural question was, how far can we push that capability? So code is massively available on the internet. GitHub has tens of millions of open source repositories. Actually over 120 million as of, I think, end of last year. So what happens if you just train a language model on a lot of code? So that was basically the idea behind OpenAI Codex, which is the name of the language model that backs GitHub Copilot. Just out of curiosity, how many of you have used Copilot? OK. Less than I would have thought. Maybe 30%? Yeah. So Copilot is basically autocomplete on steroids that runs this language model called "Codex" in the back end. And as with a lot of papers in this age we live in, the technical description of what was done was: we took the architecture of GPT-3, maybe changed the number of parameters, and trained on this data. Yes?
In the training data did we include natural language for if we have a function in Python-- just like natural language like describing the function does like in a comment or something? Yeah. So the question was, did they include natural language in the training data? And yes, in two forms. So code already has a lot of natural language like comments and strings, and this was all kept. None of it was stripped. So that's one form of natural language that Codex got, and the other one was just a subset of the training set of GPT-3. So it was not trained on 100% just code, it also had like-- So in the training data were there examples of a natural language description of a function and then the corresponding Python? So the question was, was there a description-- were there examples in the training data of a description and then the function? Yeah. Yes. So there are some examples of that form that naturally appear on GitHub. They're not a lot compared to all code that exists. We'll talk a little bit about-- [INAUDIBLE] on Stack Overflow-- [INTERPOSING VOICES] Yes. Yes. Yeah. So the web has a lot of that kind of thing in general, right? We'll be talking about one experiment that they did on fine tuning exactly that format, and has an impact because most code is not written like that on the internet. Although some fraction definitely is. The answer is-- [INAUDIBLE] Yes. So the version one of Codex was essentially the same architecture as GPT-3, which is a decoder only transformed model, but with 12 billion parameters and then trained on a training set that was constructed mostly from GitHub but also natural language sources. Yeah. So how do we evaluate a model, right? We train it and we can prompt it with a few examples and see that it does interesting things, but how do we get a better sense of its capability? So the authors in the paper in the Codex paper, they set up this challenge of given a Python docstring, just generate a function that implements that docstring where the doc string always had input/output examples in the form of assertions. So in this example here on the right, which is one from the paper, right? So the first one, the goal is to return a list with all the elements increased by one. So you would infer that the elements are numbers. And then, they give two examples, which are like pydoc tests. You can actually run these tests automatically. So if I call it with 1, 2, 3, it should return 2, 3, 4, and they give one more example. And besides those examples because-- as machine learning people, you should know if you just give all the examples that are evaluating on, your subject to the program just working on those examples but not on held out examples. So for each of these problems they of course also had held out inputs that the model was evaluated on. But since this model has seen a lot more code than any person has any chance of ever looking at in their lifespan, how do you even know that the problems that you're giving have not been seen before? So this becomes an increasingly difficult challenge with these large models. So they did a best attempt, which was to create a data set of their own. Since the goal here is not to train on that data set, you don't need that many examples as you would need to train a model from scratch. So they came up with these 164 problems of this form that they basically manually authored. So that's a way of saying that, OK, the model at least hasn't seen these problems in this exact form, right? And for each one, they had a set of hidden tests. 
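A minimal sketch of what this execution-based check looks like: take the sampled completion as source code, exec it, and run the hidden tests as assertions. The function and variable names here are illustrative, and the real evaluation also sandboxes and time-limits the code, which this sketch omits.

def passes_hidden_tests(candidate_source, test_source, entry_point):
    namespace = {}
    try:
        exec(candidate_source, namespace)             # define the generated function
        exec(test_source, namespace)                  # defines check(fn) with assert statements
        namespace["check"](namespace[entry_point])    # run the hidden assertions
        return True
    except Exception:
        return False                                  # wrong answer, crash, or syntax error

candidate = "def incr_list(lst):\n    return [x + 1 for x in lst]"
tests = "def check(fn):\n    assert fn([1, 2, 3]) == [2, 3, 4]\n    assert fn([]) == []"
print(passes_hidden_tests(candidate, tests, "incr_list"))   # True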
So here the evaluation will be whether the generated program runs correctly on all the tests-- the seen and unseen ones. And the main metric that we'll be looking at is what they call "pass@k," which is the probability that, out of k samples of programs that I take from the model, at least one of them passes all of the tests. And the main result here is that GPT-3, which is also a quite large model trained on a lot of code, relatively speaking, scores exactly 0 on this benchmark that they came up with. So it doesn't solve any of the problems, which is good. They're at least not trivial problems, right? And all of the Codex models have some non-trivial performance. So Codex alone, looking at pass@1, which is just sample one program from the model, does above 20%. And of course, we have to take all these numbers as relative. 20% in general doesn't mean much, but it solves some problems that GPT-3 alone doesn't solve, right? And they generated a set of problems with this exact format of Python docstring and then the function, to evaluate whether this format was kind of unusual for the model. So they kind of synthetically generated a training set to fine-tune on and called the resulting model Codex-S, and yes, Codex-S does a little bit better. So it seems like there's a little bit of benefit to designing training data exactly with this format. And besides just sampling one program and returning that as your answer, one thing that we'll see here is that it's usually worth it to sample a lot more programs and somehow choose which one is your best bet. One simple way to do that is just by ranking with the model's log probability of each sample. So this is the red line here, which improves on top of the others. And if you look at the examples that-- sorry. Can you speak to the purple line? Oh, yes. So the purple line is the Oracle reranking, which is basically like if I take all the programs that are generated and actually run them on the hidden tests and take the ones that pass the hidden tests, then-- so what the purple line is saying is that it's often the case that Codex generates some program that satisfies all the tests, but it might be hard to identify, without actually running the program, which one it is. Yeah. So if you look at the examples of samples from the model, it's quite non-trivial, right? So if I describe a function like def is_prime-- returns true if a number is prime-- which is, of course, a problem that the model has seen before in some form, it will fail a lot of times, but most of the time it will do something reasonable. So here you see that it's trying to test for divisors of the number. In this case, it's just missing the corner case, that's true, I think-- or, no, that one is returning as a prime number. It often returns the same program. So by resampling it, you don't have any guarantees. It was trained on GitHub, so it's also seen a lot of incomplete code. So it might say, TODO, pass. Do it later. But, yeah, sometimes it works. Sometimes it will do exactly the primality test with all the corner cases and all. And if you specify a more complicated function with maybe some more corner cases, it will, again-- in this case, it will not solve it completely with any of the samples, but a lot of the samples are surprisingly reasonable. It will often at least partially do what the specification is asking. Yes. So just to clarify. How difficult are those tasks? Is there like a score made by humans to specify whether some tasks are more difficult than others for humans? Yeah.
So the question was, how hard are the tasks in general? And these problems are not hard for human programmers in general. So they test, basically, basic capabilities of coding in Python. So this is maybe a problem of like medium difficulty in the training set in the data set, right? Like a function that like counts vowels, but has a special case for y. y Should only be a vowel if it's at the end, for example. So this is the general flavor of these problems in the Codex paper. We'll talk about different data sets later. That makes sense? Yeah. So the finding here-- Oh, yes. So it fails in a lot of cases but many times produces reasonable guesses of what the function should do. And one thing that they noticed, which was an important observation for many of the works that came after, is that there seems to be quite a large benefit in just sampling more programs and trying more. So the space of programs that the model can generate usually contains some correct programs. And when sampling more, there is a trade off between the sampling temperature and how likely it is that the program is correct. So if I sample with temperature 0, then I basically get deterministic behavior. I don't get any benefit from sampling. But if I sample with too high of a temperature, then I get more and more random outputs, right? But, of course, just sampling more programs is maybe fine for this kind of evaluation with a benchmark, but when interacting with a user, I of course don't want to give the user 100 options to choose from. Right? there is a high probability that one of these many programs satisfies what you want, but I don't know which one, it would not be very usable. So, of course, I could just sample a small number of programs, but knowing that it's usually the case that in a large number of samples, one of them will be correct. It, a lot of times, makes sense to sample a large number of programs and then try to rerank them in some way and then only show maybe my top guesses. So the Oracle here would be, I run all the programs in a test, but a lot of times I don't have that. If I'm in the middle of writing a function then I want some guess for how to write a certain line of the function, I might not have tests for that specific line, but I can, for example, use the model's own log probabilities to rank. And yeah, what they found was that basically taking the average token log probability among a number of slightly more fancy ways of trying to rank was the best and that they could get. And here we were trying to sample code given docstring, but one of the magics of language models is that I can just condition on anything to try to get anything. I'm not guaranteed to get good things, but I can always try. So what if we try to use a model to give me a docstring given the code? So basically describe what a function does. So that's a very natural inversion of the problem that we had before. And that kind of data is certainly way less frequent in the training set, although it certainly exists in some cases because naturally in Python, docstrings comes before the code, but this is also a very common thing with code data. I can usually manufacture synthetic data sets that change the structure in some ways. So I can basically write deterministic program that takes Python functions and inverts the code in the docstring and make a training set for this task. And in this case I lose the ability to automatically evaluate if a doc string actually describes the code that well. 
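As a small aside on the temperature trade-off mentioned above, here is a minimal sketch of temperature sampling over next-token logits: at temperature 0 decoding collapses to argmax, so resampling buys nothing, while very high temperatures flatten the distribution toward random outputs.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float, rng=np.random) -> int:
    """Sample a token id from next-token logits at the given temperature."""
    if temperature == 0.0:
        return int(np.argmax(logits))       # deterministic decoding
    scaled = logits / temperature           # T < 1 sharpens, T > 1 flattens
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))
```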
I get a problem with natural language generation where-- Lisa talked about-- evaluation is quite hard. In the Codex, paper they evaluated this by hand. So basically pass@k where pass is a human said that the docstring describes the function. And surprisingly, this task seems to be harder than generating code from docstrings itself. So even a fine tuned model like-- so here Codex-S is the Codex that we saw that was fine tuned solve the tasks and Codex-D was fine tuned on this data set of generating docstrings given code. And in this case, they didn't get any benefit from fine tuning or any improvement from the base model that they started with. So it seems like maybe describing code is not that easy compared to writing the code. So-- sorry. You have a question. I was wondering, how do we ensure that the programs that are generated compile? Do we take advantage of like parsing trees and stuff? Yeah. So the question is, how do we know that they compile? In this case, they just literally save the code and randomly the Python [INAUDIBLE].. So if it threw an exception, it failed basically. If it ran and produced the exact output, then it succeeded. I'm just curious. For the second task, to what degree do we just think of it as like a reading comprehension task because I couldn't actually think of a measurement there, but is there any similarity between that task and the way to evaluate that task and the specific task to describe it? Yeah. So so the question was, can we see it as like a reading comprehension task of sorts for code? And yes, basically it's a way to probe how well can the model understand quote unquote, "what the code does." That is one task that is like code understanding, so to speak. Another one is code execution. Like given this code and this input, what output does it produce. Which I'll talk a little bit about, but it's also quite a hard task for these models. So they're often able to produce code that works, but if you give it the code, then it's hard to predict what the code does from the model. That makes sense? So, just to clarify, it's more difficult than a normal reading comprehension task. [INAUDIBLE]? Just why is code specifically different from this than just a normal-- Yeah. So how is code a different corpus? Is it more difficult? Yeah. I think, more or less, difficult depends to whom, right? An average human certainly can't describe what a Python function does, but not necessarily because it's inherently more complex task. So I guess it depends on who you ask. Yeah. So in the examples that the model got wrong, is there a way to do an analysis of the source of the error like if there was an error with the algorithm versus just a syntax error? Yeah. So the question is, what kind of errors does the model make and can we evaluate it automatically? Yes. I didn't include this here, but one of the papers that I'll talk a little bit about did this analysis of what kind of error does the model make at different scales, and the result there was that as the models grow in number of parameters, they tend to make less syntactic errors and less compilation errors and have more semantic errors like the program still runs but fails on some tests. And at the smaller sizes, it's way more common to get like syntax errors like didn't close the parentheses or something like that. OK. So as you've noticed, the base technology here was still just transformers. 
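As an aside, the execution-based check described a moment ago (save the code, run it, treat any exception or failed assert as a failure) can be sketched like this; a real harness would sandbox the untrusted code and enforce timeouts.

```python
def passes_tests(candidate_src: str, test_src: str) -> bool:
    """Toy evaluation: run the candidate, then the hidden asserts; any exception is a failure."""
    env = {}
    try:
        exec(candidate_src, env)   # define the generated function
        exec(test_src, env)        # run hidden assert-based tests
        return True
    except Exception:
        return False

candidate = "def is_prime(n):\n    return n > 1 and all(n % d for d in range(2, n))"
tests = "assert is_prime(7)\nassert not is_prime(1)\nassert not is_prime(9)"
print(passes_tests(candidate, tests))  # True
```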
We're sampling from a transformer and running the code, and maybe sampling more and reranking using log probabilities, but nothing extremely specific to code besides the fact that we can execute the output. DeepMind came up with AlphaCode, which was very talked about-- I'm sure at least some of you have heard of it-- which was basically a system that expanded on these ideas of training language models to generate code. In this case, their target was to solve programming competition problems, which some of you might have heard about. These are competitions just like math competitions, but where the challenge is to come up with algorithms and then write code that solves a computational problem. And the foundation of AlphaCode was still sampling from Transformers. A lot of their technical design choices were basically targeted at allowing faster sampling. So they came up with a cheaper version of attention where you share the key-value heads but have multiple query heads, because that was an engineering bottleneck in their sampling. And they used an encoder-decoder Transformer because it was faster to just encode the problem once. But aside from that, very similar ideas. So they pre-trained their transformer on, in this case, mostly code. I think from their description it was basically just a data set composed of GitHub code, where the encoder was additionally trained with a masked language modeling loss. They then fine-tuned the model on a much smaller data set of human solutions to programming competition problems, which are much sparser than arbitrary GitHub code. They used one variant of reinforcement learning fine-tuning called "GOLD"-- not RLHF, but a similar idea in spirit: you don't want to penalize the model for not being able to produce all valid solutions, you just want it to be able to output some solution. So if sampling from the model is giving you some solution, then it should be getting the reward. And one interesting trick that they did was value conditioning. Basically, since we don't have that many submissions to these competitive programming problems, it's a little bit wasteful to simply discard all of the wrong solutions-- we have a lot more wrong solutions than correct solutions. So we want to train on them somehow, but we don't want to make the model generate wrong solutions, and there are still some interesting statistics to be learned there. So to train on those solutions, they basically designed their training set so that the code starts with a comment that says whether it's correct or incorrect. I can make training examples where the correct solutions start with "this is a correct solution," and the incorrect ones say "this is an incorrect solution." And then at test time, of course, when generating a program that I want to be correct, I'll start with the comment "this is a correct solution." That lets the model in some way benefit from seeing the incorrect solutions as well (there's a small sketch of this below). And the thing that they really pushed in this paper was sampling. In the Codex paper we were talking about up to 100 samples per problem, which is already a lot-- it's something that, just using the Codex API, you would have a lot of trouble doing. In AlphaCode, they massively parallelized this and did 100,000 samples per problem. And as we were discussing, if you are to participate in a programming competition-- and they actually did run AlphaCode on a real one-- you can't afford at all to submit 100,000 attempts at solving a problem.
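Here is the small sketch of the value-conditioning trick promised above; the exact tag wording is made up for illustration and is not AlphaCode's actual format.

```python
def make_training_example(problem: str, solution: str, is_correct: bool) -> str:
    """Prefix each training solution with a comment stating whether it was accepted."""
    tag = "# CORRECT SOLUTION" if is_correct else "# INCORRECT SOLUTION"
    return f"{problem}\n{tag}\n{solution}"

# At test time we always condition on the correct tag and let the model write the code:
test_prompt = "<problem statement>\n# CORRECT SOLUTION\n"
```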
So in some way, you have to narrow that down to a very small set. And in this case, they set the limit of making up to 10 submissions, which is the range of what a human participant would do. So how do we do that? Well, the first obvious step is filtering. So each of these problems comes with some examples of inputs and outputs. So I can immediately discard all the programs that don't satisfy even those example inputs. That's already removed like 90% of these 100K samples. Then we still have a quite significant number of programs that work at least on the basic tests. So what do we do? So what they did was they trained a separate model that generates inputs for a program. And for these generated inputs, we don't really what's the expected output unless we are really good at interpreting the problem statement. But even without knowing what's the expected output, I can use those generated inputs to basically group the programs that I have by behavior, right? So if I generate a string and I run all the programs on that input string, some of them produce this result and some of that produce that result. Then I can infer that maybe these programs are semantically the same, right? So if I had two submissions to make, maybe I would do one of each instead of two in the same cluster. So this is basically what they did. They generated a lot of inputs, clustered the programs based on their behavior on those inputs, and then picked one submission from each of the largest clusters, All right. What is the point of using incorrect submissions to augment training? How does that help the model to do better? Yeah. So the question is, how do the wrong solutions help the model in any way? So they didn't really do an ablation of not training the model on the incorrect solutions to measure the benefit of that specifically, but the intuition is that even the incorrect solutions have some interesting information for you to learn from, right? So you might learn that they are incorrect, for example. You might learn bug patterns. So you might learn that if somewhere in the code I forget to close the parentheses, for example, it is probably incorrect. And since in this case, we don't really have that much training data, anyway that you can get to use the training data that you have probably helps. That makes sense? Yeah, but that's a good question. It's not exactly clear what the model learns from the wrong solution. In the competitive programming context, I was exposed to, you do usually get a grade at least. You don't see the specific test, but get a grade for your submissions before the time is up. Was this the best use of that information by not looking that at all, since you submit [INAUDIBLE] 10 you get like clustering instead of trying to incorporate the feedback you get from the greater? Yeah. So in the competitions that they tried which was basically Codeforces, you only get like a binary response. Was it accepted or not. Yes, it's harsher than IOI, for example. Yeah so the result in offline experiments of basically solving this problems from a benchmark that they collected was basically that if your sample more, you solve more problems. So they get this log linear scaling with how many programs they sample at all of the model scales that they tried. Which essentially means that if you sample 10 times more programs, your solve rate increases in this linear rate of 6%, approximately. And also with compute. 
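Before moving on, here is a minimal sketch of that filter-then-cluster selection step, assuming each candidate is a Python callable `solve(input_str) -> output_str`; real AlphaCode candidates are full programs run on stdin/stdout, so this is simplified.

```python
from collections import defaultdict

def pick_submissions(candidates, example_tests, generated_inputs, budget=10):
    """Filter on the statement's example tests, cluster by behavior on generated
    inputs, and return one representative from each of the largest clusters."""
    def run(f, x):
        try:
            return f(x)
        except Exception:
            return "<error>"

    # 1. Keep only candidates that pass the examples given in the problem statement.
    survivors = [f for f in candidates
                 if all(run(f, x) == y for x, y in example_tests)]

    # 2. Group survivors by their behavior on model-generated inputs.
    clusters = defaultdict(list)
    for f in survivors:
        behavior = tuple(run(f, x) for x in generated_inputs)  # semantic fingerprint
        clusters[behavior].append(f)

    # 3. Submit one program from each of the largest clusters.
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [cluster[0] for cluster in ranked[:budget]]
```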
So with how many TPU days they took to train the model, it has also a roughly log linear scale, and also TPU seconds spent sampling for each problem. And so that was an offline evaluation on a set of problems that they collected, but they also tried this model live on competition on this website called Codeforces. And their model did get non-trivial performance in a bunch of contests. So they actually ran this in past contexts. They didn't run it live, but they tried to simulate as much as possible the setting where you would be in the competition, And yeah. In some of the contests, they would place in the top 30%, top 50%. Like a median coder in Division 2, which is important to notice. So as they describe, in the paper this is approximately a few months to a year of training programming, which is not to say that they're like winning these competitions anytime soon, but at the same time not trivial. And the main component of getting the performance that they did was sampling. Sampling more programs. So they did all this engineering to make sure that they could sample 100k programs per problem, and they had like an accumulation of techniques like the MLM pre-training on the encoder, like sampling with random problem tags, and the GOLD fine tuning and all that, and none of them would have helped if at test time they were just doing 1,000 samples. So the effects of basically all of those techniques only showed up when they scaled it up to 100K to a million samples. So on one hand, this shows the potential of very simple set of techniques that you've seen in this class of just sampling things from Transformers but taken at this extreme scale, but, of course, this also shows a limit, right? So at this rate of having to take 10x more samples to get 6% more problem solved, this won't get Division I any time soon. So we have to do something different if that's the goal. And one kind of problem that seems to be inherent to these models, if all you're doing is just like sampling complete programs, is this challenge that humans don't really have with compositionality. So this is an actual result that was presented in the Codex paper. And if you ask a person that knows basic Python programming how to solve problem x and they say it's trivial like reverse a string, for example, and if you separately ask them how do you compute the length of a string, and they also think that's trivial. If you give them the problem, can you reverse a string and then take the length, they'll say, OK, of course. That's a very simple composition of two things that are trivial, but that does not seem to be the case with these language models. So the Codex authors did the experiment where they manufactured tasks by basically chaining these very simple tasks, and the result was that as the number of these components grow, the probability that the samples from the model solves the composite problem decays kind of exponentially even if the model knows how to do each of the components individually. So this is something which is a challenge to these models and not to people. Yeah. So just some quick takeaways. It seems like Transformers just trained at scale on code, have non-trivial performance in this task. And these results, maybe for people that download Copilot and just test it and it sort of works, don't seem that surprising, but for the program synthesis field that had been for decades working on these very specific, very constrained domain specific languages, these results were just unimaginable a few years ago. 
And it seems like sampling and testing and filtering can get quite far, but it also gets expensive quite fast. So the AlphaCode, for example. Just training and evaluating their largest model used the equivalent energy of like 16 American households a year, for example. We can't also have everyone using these models at this scale all the time. And the other caveat here, of course, is that this setting where you get the extremely well specified problem which has tests and you can run the program and determine exactly when it passes all the tests, it's very different from real world programming where most of the time is spent understanding what's the problem, deciding what to do, revising the tests. A lot of time is spent editing code, and so on. So there's a lot of progress being made, but this of course still has a lot to go. Yes. One question is-- it's similar to a question that was asked earlier-- if we can do error analysis. Is it possible because one thing when we're doing this type of code generation is if we just assume it to be right, it's a lot harder for us to debug our code because we didn't write it ourselves? Are there any kind of like ideas or things people are considering in the field for how to go about debugging code that was written by an AI? Are there like AI debuggers as well? Yeah. So the question was about debugging. I think there are two things. So one of them is-- Yeah, I had a lot more things that I didn't get to, but one of them was this notion of automation bias which people have, which is we have a general tendency to believe things that are automated, and this is quite a problem. For example, there was this study run here at Stanford, even, where-- Codex introduces security bugs at a non-trivial rate, for example. Yeah, it's still hard to use these models without understanding what they're doing. And the problem of doing this process more interactively of writing the program and then looking at what it does and then maybe revising the code is still much harder than just trying to write the program from scratch exactly because-- well, one of the reasons is certain that we don't have that much data on that process happening with people. We see GitHub, which is kind of the published version of the code, but all the processes to get from an empty file to that is still not recorded. But, yeah, that's a very active area of research as well. Like models to revise and edit and debug. Yes, so-- are we at time? You have up to 5:50 actually. 5:50? OK, so we do have time to talk about something. OK. Awesome. Yes. So I'll try to go over a little bit. So one fun thing connecting to what we talked about back is that Codex can do some simple pragmatic reasoning. So for example, if I give you these inputs, list 1, 3, 2 returns 1, 2, 3, you'll would probably say sorting. It sorts the list. But what about 1, 2, 3 returns 1, 2, 3? Probably just identity, but it could also be sorting, right? It's consistent with sorting as well. But you would reason that if I wanted to specify the sorting function, I would probably give a different input. And if I give these inputs to Codex, it does predict that the first one is sorting but the second one is identity. Just an interesting thing to come out of just regular language modeling training. There are also experiments with using these models in a dialogue style, which is a little bit more about-- like the question asked. So I can ask it to write a function, and then model comes up with some implementation, but maybe it has an error. 
They can sometimes describe the change like, oh, but can you do the sort in reverse or only return the top four results, and it can often revise what it did, which is quite interesting. Yes. So last topic here is using programs not as the output that you want from the model directly, but rather as a representation for other things. So one general thing about humans is that our efficacy in a lot of tasks depends on using external tools, right? So if I ask you to multiply these two numbers, 1, 2, 3, 4, 5, 6, you can do it, but you probably won't just do it in your head, right? You use a calculator. Or if I ask you what time is it. Well, you, don't keep track of time that precisely, right? So you use a clock. Or what are the five largest airports in the world? You'll do some Google search. You figure it out, but you won't just take it out of your head. And when we are training a language model to just give us answers condition on the question or maybe on some context, we're basically asking it to come up with the answer all by itself. And a lot of these problems aren't reasonably solved in that manner. The problem with just telling what time is it, for example, is one that you fundamentally can't get out of the model that was trained and frozen and has to produce an output now. And for example, there was this language model that came out last year called Minerva which was trained on mathematical problems and solutions. And it a lot of times got the strategy right in solving these problems, but still makes a lot of arithmetic errors. So it says, OK, the solution will be this number plus this number equals something wrong, for example. So it seems limiting that we're asking the model to do all these things by itself. So this OpenAI paper from 2021 had this very simple idea of solving math word problems using language models but providing them with a calculator. And the way to let the model use a calculator is basically to assign a a special token in the input such that when the model generates that token, your decoder, instead of keeping conditioning on the model's probabilities, will then deterministically do something with the input like a calculator and paste the output in and the model's output sequence. So they generated this training set kind of semi-automatically where solutions to math word problems would have these annotations in angle brackets. And by seeing those annotations at in the training set-- and for training you don't really need to do anything special-- at test time you can give the model a calculator by basically watching until the moment where it outputs an equal sign. And then, once it does, instead of generating from the model, you can take the numbers that come before "call calculator," and then just paste the exact output after, right? And this, as you can imagine, gives a quite significant boost in solving these problems because you kind of isolate one kind of error. The model won't make arithmetic errors anymore. This same idea but taken a little bit farther was used to solve word problems but instead of the model outputting the solution in natural language, it kind of interspersed natural language with Python code, and the final answer was not given by the model, but by running the Python code that it provided. So here's an example. You can look at it in more detail later. And this also gives a big benefit over just having the model try to figure out what's the answer on its own. 
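A minimal sketch of that calculator-in-the-loop decoding, assuming a hypothetical `generate_token(text)` interface and the angle-bracket annotation style described above; both are simplified stand-ins for the paper's actual setup.

```python
def decode_with_calculator(generate_token, prompt: str, max_tokens: int = 200) -> str:
    """Decode token by token; when the text ends with '<<expr=', evaluate expr
    ourselves and paste the exact result instead of letting the model guess it."""
    text = prompt
    for _ in range(max_tokens):
        text += generate_token(text)            # hypothetical next-token sampler
        if text.endswith("="):
            start = text.rfind("<<")
            if start != -1:
                expr = text[start + 2:-1]        # e.g. "12*7"
                text += str(eval(expr)) + ">>"   # toy calculator; use a safe parser in practice
    return text
```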
And more generally, there was this paper that came out on arXiv called "Toolformer," where they basically extended this idea a little bit further with a self-supervised approach: let's say you come up with a list of tools, and you want to teach your model how to use those tools. In this case they tried quite a few tools. One of them was a calculator. Another was a machine translation system: when decoding from the model, if it outputs an MT token and a string, you go and call another neural network, which is a translation model, to do the translation and paste that back in. Another one was doing search on Wikipedia, for example, or calling a question answering system. And with the right set of techniques to teach the model how to output these sequences, you can get very interesting behavior, with the model deciding on the fly which tool to use. And yeah, so the program here is not the final result that you want; it's rather just a way to represent this usage of external tools. Yes. So we talked a little bit about this before. I guess one natural question for people graduating in computer science is, will I have a job after I graduate, or will Codex replace me? And as it turns out, in real-world software engineering a lot of time is not spent writing code. There's one study-- but there are a lot more-- showing that when you track developers' time, they spend a lot of time just reading code, a lot of time outside of the IDE (this is just IDE time), and a lot of time navigating. Only about 5% is actually editing code. And even when editing code, a lot of the time is not writing new code, but rather fixing bugs and maintaining. So there's quite a lot of time that's not spent writing code, even for people who are paid to write code. And there's this whole process of deciding what to build, which is usually more important than just building the thing right, and this is still quite far from what Codex can do. And yeah, there's this notion we talked a little bit about, that debugging is very interactive. We run, go back, revise, and this process is mostly lost by just sampling more from the model and trying again basically from scratch. There's active research, even here at Stanford, on using models to fix bugs automatically-- when you write a program that has some syntax error, how to go back and maybe change it-- but it's still very different from the more open-ended kind of debugging that people can do. Of course, all the code on GitHub is still not all the code that you can imagine, right? There are new libraries all the time, and there are internal libraries for companies that will just not be on GitHub at any point, so there are challenges in teaching models to use those as well. And as we mentioned, even if models can generate code, they still fail a lot of code understanding challenges, like just executing code-- asking what this code outputs-- and even fine-tuning doesn't seem to solve that problem. And the other thing is that public code also has a lot of bugs, as you can imagine, and they're being fixed all the time, so training on buggy code will also mean that sometimes you generate buggy code. So you still have to understand what the model outputs. And there are security bugs that can be introduced by language models as well. And yes, so just to conclude-- a little bit past time-- a lot of these capabilities were completely out of reach even a few years ago.
So this is a really exciting time to be watching this happen. And I think there's a fascinating intersection here between natural language, which is extremely ambiguous and flexible and contextual, and which we handle so easily, and programming languages, which are these extremely rigid languages: you forget a parenthesis and the compiler has no idea what you're trying to do anymore. We bridge between these two worlds very easily, and now language models are also starting to. And besides models that write programs, programs are also just a generally interesting representation for reasoning: you can represent mathematics, legal contracts, and this notion of calling and combining different tools. Yeah, so all of these are very active topics of research. I hope you guys enjoyed it. Yes.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2021_Lecture_18_Future_of_NLP_Deep_Learning.txt
Good afternoon, folks. Welcome to lecture 18. Today we'll be talking about some of the latest and greatest developments in neural NLP: where we've come from and where we're headed. Chris, just to be sure-- is what's visible from this part fine? You're visible. Okay, but none of my presenter notes, right? Correct. Okay, great, thank you. So just as a reminder, note that your guest lecture reactions are due tomorrow at 11:59 PM. Great job with the project milestone reports; you should have received feedback by now, and if not, contact the course staff-- I think we had some last-minute issues-- but if that's not resolved, please contact us. Finally, the project reports are due very soon, on March 16th, which is next week. There was one question on Ed about the leaderboard, and the last day to submit on the leaderboard is March 19th. Okay, so for today: we'll start by talking about extremely large language models and GPT-3, which have recently gained a lot of popularity. We'll then take a closer look at compositionality and generalization of these neural models-- while transformer models like BERT and GPT have really high performance on all benchmarks, they still fail in really surprising ways when deployed, so how can we strengthen our understanding of evaluating these models so they more closely reflect task performance in the real world? Then we end by talking about how we can move beyond this really limited paradigm of teaching models language only through text, and look at language grounding. Finally, I'll give some practical tips on how to move forward in your neural NLP research, and this will include some practical tips for the final project as well. Okay, so this figure really captures what's been going on in the field: our ability to harness unlabeled data has vastly increased over the last few years, and this has been made possible due to advances in not just hardware but also systems and our understanding of self-supervised training, so we can use lots and lots of unlabeled data. Based on this, here's a general representation-learning recipe that just works for basically most modalities. The recipe is as follows (a minimal sketch of it appears below). In step one, convert your data into a sequence of integers-- it's really modality agnostic, so this works whether you have images, text, or videos. In step two, define a loss function to maximize data likelihood, or create a denoising autoencoder loss. Finally, in step three, train on lots and lots of data. Certain properties emerge only when we scale up model size, and this is really the surprising fact about scale. To give some examples of this recipe in action: here's GPT-3, which can learn to do a really non-trivial classification problem with just two demonstrations, and we'll talk more about this soon. Another example, as we saw in lecture 14, is T5, which does really effective closed-book QA by storing knowledge in its parameters. Finally, just so I cover another modality, here's a recent text-to-image generation model with really impressive zero-shot generalization.
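A minimal sketch of this three-step recipe for text, using a toy character-level tokenizer and a trivial stand-in model; real systems use subword tokenizers and large Transformers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Step 1: convert the data into a sequence of integers (toy character-level tokenizer).
text = "language models are trained to predict the next token"
vocab = sorted(set(text))
ids = torch.tensor([vocab.index(ch) for ch in text])

# Step 2: a loss that maximizes data likelihood, here next-token prediction.
# Embedding + linear is a trivial stand-in for a large Transformer.
emb, head = nn.Embedding(len(vocab), 32), nn.Linear(32, len(vocab))
logits = head(emb(ids[:-1]))               # predict token t+1 from token t
loss = F.cross_entropy(logits, ids[1:])    # negative log-likelihood of the data

# Step 3: train on lots and lots of data (repeat loss.backward(), optimizer.step()).
loss.backward()
```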
Okay, so now let's talk about GPT-3. How big really are these models? This table presents some numbers to put things in perspective. We have a collection of models, starting with medium-sized LSTMs, which were sort of a staple of pre-2016 NLP, all the way up to humans, who have 100 trillion synapses; somewhere in the middle, we have GPT-2 with over a billion parameters, and GPT-3 with over 150 billion parameters, which exceeds the number of synaptic connections in a honeybee brain. Obviously, anyone with a little knowledge of neuroscience knows that this is an apples-to-oranges comparison, but the point here is that the scale of these models is really starting to reach astronomical numbers. So here are some facts about GPT-3. For one, it's a large transformer with 96 layers. It has more or less the same architecture as GPT-2, with the exception that, to scale up the attention computation, it uses locally banded sparse attention patterns-- I really encourage you to look at the paper to understand the details. The reason we mention this here is because it highlights that scaling up is not simply changing hyperparameters, as many might believe; it involves really non-trivial engineering and algorithms to make the computations efficient. Finally, all of this is trained on 500 billion tokens taken from Common Crawl, the Toronto Books Corpus, and Wikipedia. So what's new about GPT-3? Let's look at some of the results in the paper first. Obviously it does better on language modeling and text completion problems: as you can see from this table, it does better than GPT-2 at language modeling on the Penn Treebank, as well as on story completion on a dataset called LAMBADA. To give a flavor of what's to come, let's take a closer look at this LAMBADA story completion dataset. The task is that we're given a short story and we're supposed to fill in the last word. Satisfying the constraints of the problem can be hard for a language model, which could generate a multi-word completion. With GPT-3, the really new thing is that we can just give a few examples as prompts and communicate a task specification to the model, and now GPT-3 knows that the completion must be a single word. This is a very, very powerful paradigm, and we'll give some more examples of this in-context learning in a couple more slides. Apart from language modeling, it's really good at knowledge-intensive tasks like closed-book QA as well as reading comprehension, and here we observe that scaling up parameters results in a massive improvement in performance. So now let's talk about in-context learning. GPT-3 demonstrates some level of fast adaptation to completely new tasks. This happens via what's called in-context learning. As shown in the figure, the model training can be characterized as having an outer loop that learns a set of parameters that makes the learning in the inner loop as efficient as possible, and with this framework in mind, we can really see how a good language model can also serve as a good few-shot learner. So in this segment we'll have some fun with GPT-3 and look at some demonstrations of this in-context learning (a concrete sketch of what such a prompt looks like follows below). To start off, here's an example where someone is trying to create an application that converts a natural language description into bash one-liners. The first three examples are prompts, followed by generated completions from GPT-3. "Get a list of running processes"-- this one's easy; it probably just involves looking at a hash table. Some of the more challenging ones involve copying over spans from the text: the scp example is kind of interesting, as is the harder-to-parse grep one. The scp example comes up a lot during office hours.
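Here is the promised sketch of what an in-context learning prompt actually looks like, using a toy arithmetic task; `lm_complete` is a hypothetical stand-in for sampling a completion from GPT-3.

```python
train_examples = [("5 + 8", "13"), ("7 + 2", "9")]
query = "1 + 0"

# In-context learning: the "training set" is just concatenated into the context.
prompt = "".join(f"{x} = {y}\n" for x, y in train_examples) + f"{query} ="
print(prompt)
# 5 + 8 = 13
# 7 + 2 = 9
# 1 + 0 =

# No parameters are updated anywhere; we simply ask the frozen model to continue the text.
# answer = lm_complete(prompt, max_tokens=1)   # hypothetical API call
```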
so gpd3 knows how to do that here's a somewhat more challenging one where the model is given a description of a database in natural language and it starts to emulate that behavior so the text in bold is sort of the prompt given to the model the prompt includes somewhat of a functional function specification of what a database is so it says that the database begins knowing nothing the database knows everything that's added to it the database does not know anything else and when you ask a question to the database if the answer is there in the database the database must return the answer otherwise it should say it does not know the answer so this is very new and very powerful um and you know the prompt also includes some example usages so when you ask two plus two the database does not know you ask the capital of france the database does not know and then you add in a fact that tom is 20 years old to the database and now you can start asking it questions like where does tom live and as expected it says that the database does not know but now if you ask it what's tom's age uh the database says that tom is 20 years old and if you ask what's my age the database says basically that it does not know because that's not been added so this is really powerful um here's another one uh now uh in this example the model is asked to blend concepts together and so there's a definition of what does it mean to blend concepts so if you take airplane and car you can blend that to your flying car that's essentially you know there's a wikipedia definition of what concept blending it concept blending is along with some examples and now let's look at uh you know some some some problems followed by what gp3 answers so the first one is straightforward two-dimensional space uh blended with 3d space gives 2.5 dimensional space the one that is somewhat interesting is old and new gives recycled um then triangular square gives trapezoid that's also interesting the one that's like really non-trivial is a geology plus neurology used to sediment neurology and i had no idea what this was it's apparently correct um so clearly it it's able to do these very flexible things just from a just from prompt so here's another you know class of examples that gbt3 uh you know gets somewhat right and these are uh these copycat analogy problems which have been really well studied in cognitive science science and the way it works is that i'm going to give you some examples and then ask you to uh you know introduce a function from these examples and apply it to you apply to like new queries so if abc changes to abt what does pqr change to well pqr must change to pqs because the function we've learned is that the last letter must be incremented by one and and this function uh humans can now apply to examples of like you know varying types so like uh p repeated twice q repeated twice r repeated twice much change to be repeated twice q repeated twice and s repeated twice um and it seems like gpd3 is able to get them right uh more or less but uh the problem is that if you if you ask it to generalize to uh you know examples that have increasing number of repetitions then were seen in the prompt it's not able to do that so in this situation uh you ask it to you know make an analogy where um the the the letters are repeated four times and it's never seen that before it doesn't know what to do and so it gets all of these wrong so you know there's a point to be made here about uh just like maybe these prompts are not enough to convey uh you know the 
function the model should be learning and maybe even more examples that you can learn but the point is that it probably doesn't um it probably it probably does not have the same kind of generalization that humans have and that brings us to sort of the limitations of these modules and some some open questions so just looking at the paper and uh you know passing through the results it seems like the model is bad at logical logical and mathematical reasoning anything that involves doing multiple steps uh of reasoning and that explains why it's bad at arithmetic why it's bad at work problems why it's not great at analogy making and even like traditional textual entailment data sets that seem to require logical reasoning like rte so second most subtle point is that it's unclear how we can uh make permanent updates to the model like maybe if i want to teach a model a new concept that's possible to do it while i'm interacting with the system but once the interaction is over it kind of restarts and does not have a notion of knowledge and it's not that this is something that the model cannot do in principle but just something that's not really been explored [Music] um it doesn't seem to exhibit human-like generalization which is often called systematicity and i'll talk a lot more about that and finally language is situated and gpt3 is just learning from text and there's no exposure to other modalities there's no interaction so maybe the aspects of meaning that it requires are like somewhat limited and maybe we should explore how we can bring in other modalities so we'll talk a lot more about uh these last uh a lot last few limitations the rest of the lecture but maybe i can possibly some questions now if there are [Music] any i don't think there's a big outstanding question but i mean i think some people aren't really clear on you know few shot setting and prompting versus learning and i think it might actually be good to explain that a bit more okay yeah so um so maybe let's let me pick a simple example um let me pick this example here so uh prompting just means that so gpd3 like if you go back to first principles right gbt3 is basically just a language model and what that means is uh given a context it'll tell you what's the probability of of the next word right so if i give it a context uh w1 through wk uh gpd3 will tell me what's the probability of w uh k plus one for you opens the vocabulary so that's that's what a language model is uh a prompt is essentially a context that gets pre-bended before gt3 can start uh generating and what's happening with in context learning is that the uh the context that you append uh that that you that you pre-pen to gp3 are basically xy examples um so that's that's the prompt and the reason why it's also uh it's equivalent to few short learning is because you pre-bend a small number of xy examples so in this case if i just prepend this uh this one example that's highlighted in purple then that's essentially one shot learning because i just give it a single example as context and now like given uh you know given this query which is also appended due to the model it has to make a prediction so um so the input output format is the same as how a few shot learner uh would receive but since it's a language model the training data set is essentially presented as a context so someone is still asking can you be more specific about the in-context learning setups what is the task right so um so let's see maybe i can go to um yeah so maybe i can go to this slide so the task 
is just that i'm it's a language model so it gets a context which is just a sequence of tokens and the task is just to you know uh uh so you have a sequence of tokens and then the model has to generate given a sequence of tokens and the way you can convert that into an actual machine learning classification problem is that uh so for this example maybe you give it 5 plus 8 equals 13 7 plus 2 equals 9 and then one plus zero equals and now gpd3 can fill in uh you know a number there so that's how you convert it into a classification problem the context here would be these two examples of uh of arithmetic like five plus eight equals thirteen and seven plus two equals nine and then the query is one plus zero equals and then the model since it's just a language model has to fill in one plus zero equals question mark so it fills in something that doesn't have to fill in numbers it could fill in anything and but if it fills in a one uh you know it does the right job so that's how you can take like a language model and do few shot learning with it i'll keep on these questions how is in context learning different from transfer learning so i i guess the like in in context learning i mean you can think of in context learning as being a kind of transfer learning but like transfer learning does not specify the mechanism through which the transfer is going to happen within context learning the mechanism is that the training examples are sort of appended to the model which is a language model just uh you know in order so let's say you have x y x one y one x two y two and these are just appended directly to the model and now it makes prediction on you know some query uh some some queries that are drawn from this data set so yes it is uh it is a sub-category of transfer learning but transfer learning does not specify um exactly how this transfer learning is achieved but in context learning is very specific and says that for language models you can essentially concatenate the training data set and then present that to the language model people still aren't sufficiently clear on what is or isn't happening with learning and prompting so you know another question is so in context learning still needs fine tuning question mark we need to train gpt 3 to do in context learning question mark right so um so there are two parts to this question right so uh so the answer is yes and no so of course the the model is a language model so it needs to be trained so you start with some random parameters and you need to train them but the model is trained as a language model right and once the model is trained you can now use it uh to do transfer learning and the model parameters in in context learning are fixed you do not update the model parameters all you do is that you give it these uh you know small training set to the model which is just appended to the model as context and now the model can start generating from that point on so in this example if 5 minus 8 equals 13 and 7 plus 2 equals 9 are two xy examples in in vanilla transfer learning what you would do is that you would take some great in steps update your model parameters and then make a prediction on one plus zero equals what right but within context learning all you're doing is you just concatenate uh 5 plus 8 equals 13 and 7 plus 2 equals 9 to the model's context window and then make it uh predict what one plus 0 should be equal to maybe we should end for now with one other bigger picture question which is do you know of any research combining these models 
with reinforcement learning for the more complicated reasoning tasks? So that is an excellent question. There is some recent work on trying to align language models with human preferences, where, yes, there is some amount of fine-tuning with reinforcement learning based on these preferences from humans. So maybe you want to do a summarization problem with GPT-3, and the model produces multiple summaries, and for each summary you have a reward that is essentially a human preference-- maybe I want it to include some facts and not include other, non-important facts-- so I can construct a reward out of that, and I can fine-tune the parameters of my language model using reinforcement learning based on this reward, which is essentially human preferences. So there's some very recent work that tries to do this, but I'm not aware of any work that tries to use reinforcement learning to teach reasoning to these models. I think that's an interesting future direction to explore. Maybe you should go on at this point. Okay. Okay, so we'll talk a bit more about these last two points: systematicity and language grounding. Just to start off, how do you define systematicity? The definition is that there is a definite and predictable pattern among the sentences that native speakers of a language understand-- so there's a systematic pattern among the sentences that we understand. What that means is, let's say there's a sentence like "John loves Mary." If a native speaker understands that sentence, then they should also be able to understand the sentence "Mary loves John." Closely related to this idea of systematicity is the principle of compositionality. For now, I'm going to ignore the definition by Montague and just look at the rough definition, and then we can come back to the more concrete definition. The rough definition is essentially that the meaning of an expression is a function of the meaning of its parts. That brings us to the question: are human languages really compositional? Here are some examples that make us think maybe yes. If you look at the meaning of the noun phrase "brown cow," it is composed of the meaning of the adjective "brown" and the noun "cow": all things that are brown and all things that are cows-- take the intersection and you get brown cows. Similarly for "red rabbits": all things that are red, all things that are rabbits, combine them and you get red rabbits. And "kick the ball": this verb phrase can be understood as some agent performing a kicking operation on the ball. But it is not always the case that you can get the meaning of the whole by combining the meanings of the parts. Here are some counterexamples that people often use: a "red herring" does not mean all things that are red and all things that are herrings, and "kick the bucket" definitely does not mean that there's an agent kicking a bucket. While these examples are meant to be provocative, we think that language is mostly compositional-- there are lots of exceptions, but for the vast majority of sentences that we've never heard before, we're able to understand what they mean by piecing together the words that the sentence is composed of. And what that means is that maybe compositionality of representations is a helpful prior that could lead to systematicity in
behavior. And that brings us to the questions that we ask in this segment: are neural representations compositional, and if so, do they generalize systematically? So how do you even measure whether the representations that a neural network learns exhibit compositionality? Let's go back to the definition from Montague, which says that compositionality is about the existence of a homomorphism from syntax to meaning. To look at that, we have this example, "Lisa does not skateboard," and we have a syntax tree corresponding to it, and the meaning of the sentence can be composed according to the structure decided by the syntax: the meaning of "Lisa does not skateboard" is a function of the meanings of "Lisa" and "does not skateboard"; the meaning of "does not skateboard" is a function of "does" and "not skateboard"; the meaning of "not skateboard" is a function of "not" and "skateboard." So this gives us one way of formalizing how we can measure compositionality in neural representations: the compositionality of representations can be thought of as how well the representation approximates an explicitly homomorphic function learned in a large representation space. What we are going to do is essentially measure, if we were to construct a neural network whose computations follow exactly these parse trees, how far the representations of our learned model are from this explicitly compositional representation, and that gives us some understanding of how compositional the neural network's representations really are. To unpack that a little bit: instead of having denotations, we have representations at the nodes. To be more concrete, we first start by choosing a distance function that tells us how far away two representations are, and we also need a way to compose together two constituents to give us the meaning of the whole. Once we have that, we can create an explicitly compositional function: we have representations at the leaves that are initialized randomly, a composition function that's also initialized randomly, and a forward pass according to the syntax is used to compute the representation of "Lisa does not skateboard." Once you have this representation, you can create a loss function, and this loss measures how far the representations of my neural network are from this second, proxy neural network that I've created. Then I can optimize both the composition function and the embeddings of the leaves, and once the optimization is finished, I can measure how far the representation of my neural net was from this explicitly compositional network on a held-out set, and that tells me whether the representations my neural net learned were actually compositional or not (a minimal sketch of this procedure is below). So to see how well this works, let's look at a plot. It's relatively complex, but to unpack it a little bit, it plots the mutual information between the input that the neural network receives and the representation, against this tree reconstruction error that we were talking about.
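Here is the minimal sketch of that procedure promised above, with a learned leaf embedding table and a simple linear composition function standing in for whatever the original work actually uses.

```python
import torch
import torch.nn as nn

# Parse tree for "lisa does not skateboard" as nested pairs of leaves.
tree = ("lisa", (("does", "not"), "skateboard"))
words = ["lisa", "does", "not", "skateboard"]

d = 64
leaf = nn.Embedding(len(words), d)        # randomly initialized leaf representations
compose = nn.Linear(2 * d, d)             # randomly initialized composition function

def embed(node):
    """Compose a representation bottom-up, following the syntax tree."""
    if isinstance(node, str):
        return leaf(torch.tensor(words.index(node)))
    left, right = node
    return compose(torch.cat([embed(left), embed(right)]))

target = torch.randn(d)                   # placeholder for the trained model's sentence encoding
opt = torch.optim.Adam(list(leaf.parameters()) + list(compose.parameters()), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = torch.dist(embed(tree), target)   # distance between the two representations
    loss.backward()
    opt.step()
# The remaining distance, measured on held-out sentences, is the (tree reconstruction) error.
```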
To give some more background about what's to come, there is a theory called the information bottleneck theory, which says that as a neural network trains, it first tries to maximize the mutual information between the representation and the input, in an attempt to memorize the entire data set-- that is the memorization phase-- and then, once memorization is done, there is a learning or compression phase where this mutual information starts to decrease and the model is essentially trying to compress the data, or consolidate the knowledge in the data into its parameters. And what we are seeing here is that as the model learns, which is characterized by decreasing mutual information, the representations themselves are becoming more and more compositional. Overall, we observe that learning is correlated with increased compositionality as measured by this tree reconstruction error, so that's really encouraging. So now that we have a method of measuring the compositionality of representations in these neural nets, how do we start to create benchmarks that test whether they are generalizing systematically or not? To do that, here's a method for taking any data set and splitting it into a train-test split that explicitly tests for this kind of generalization, using a principle called maximizing the compound divergence. To illustrate how this principle works, we look at a toy example. In this toy example, we have a training data set that consists of just two examples and a test data set of just two examples. The atoms are defined as the primitive elements-- entity words, predicates, question types-- so in this toy example, Goldfinger and Christopher Nolan are among the primitive elements. The compounds are compositions of these primitive elements, so "who directed [entity]" would be the composition of a question type with the predicate "directed." So here's the basic machinery for producing compositionally challenging splits. Let's start by introducing two distributions. The first is the normalized frequency distribution of the atoms: given any data set, if we know what the atoms are, we can compute the frequency of all of the atoms and normalize by the total count, and that gives us one distribution. We can repeat the same thing for the compounds, and that gives us a second frequency distribution. Note that these are just two probability distributions, and once we have them, we can define the atom and compound divergences simply as this quantity here, which is based on the Chernoff coefficient between two categorical distributions. The Chernoff coefficient basically measures how close two categorical distributions are. Just to get a bit more intuition: if we set p equal to q, then the Chernoff coefficient is one, which means the distributions are maximally similar; and if p is non-zero only where q is zero-- that is, the supports are disjoint-- then the Chernoff coefficient is exactly zero, which means the two distributions are maximally far apart (a small sketch of this computation follows below). The overall objective is then just that we are going to maximize the compound divergence and minimize the atom divergence. So what is the intuition behind doing such a thing? What we want is to ensure that the unigram distribution is, in some sense, constant between the train and test split, so that the model does not encounter any new words.
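And here is the small sketch of that computation mentioned above; the Chernoff weighting exponent alpha is left as a free parameter here (the paper picks specific values for atoms and compounds, which are not reproduced exactly).

```python
from collections import Counter

def normalized_freq(items) -> dict:
    counts = Counter(items)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def chernoff(p: dict, q: dict, alpha: float) -> float:
    """Chernoff coefficient sum_k p_k^alpha * q_k^(1-alpha): 1 when p == q,
    0 when the supports are disjoint."""
    return sum(p.get(k, 0.0) ** alpha * q.get(k, 0.0) ** (1 - alpha) for k in set(p) | set(q))

def divergence(train_items, test_items, alpha=0.5) -> float:
    # Divergence is one minus the Chernoff coefficient of the two frequency distributions.
    return 1.0 - chernoff(normalized_freq(train_items), normalized_freq(test_items), alpha)

# Toy usage: identical atom frequencies give atom divergence ~0; a compositional split
# keeps this near 0 while making the divergence over compounds as large as possible.
train_atoms = ["who", "directed", "Goldfinger", "did", "Nolan", "produce", "Inception"]
test_atoms  = ["who", "directed", "Inception", "did", "Nolan", "produce", "Goldfinger"]
print(divergence(train_atoms, test_atoms))   # ~0.0
```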
If you follow this procedure for, say, a semantic parsing dataset, what you see is that as you increase the scale, the model does better and better at compositional generalization. But, pulling out a quote from that paper: pre-training helps for compositional generalization but doesn't fully solve it. What that means is that maybe as you keep scaling up these models you'll see better and better performance, or maybe it starts to saturate at some point; in any case, we should probably be thinking more about this problem instead of just trying to brute-force it.

This segment tells us that, depending on how we split a dataset, we can measure different behaviors of the model, and that suggests we should be thinking more critically about how we're evaluating models in NLP in general. There has been a revolution over the last few years in which we're seeing all of these large transformer models beat all of our benchmarks; at the same time, there's still not complete confidence that once we deploy these systems in the real world they're going to maintain their performance. So it's unclear whether these gains are coming from spurious correlations or from some real task understanding, and the question is how we design benchmarks that accurately tell us how well a model is going to do in the real world.

I'll give one example of work that tries to do this, and that's the idea of dynamic benchmarks. The idea is basically that instead of testing our models on static test sets, we should be evaluating them on an ever-changing, dynamic benchmark. There are many recent examples of this, and the idea dates back to a 2017 workshop at EMNLP. The overall schematic looks something like this: we start with a training set and a test set, the static part, and train a model on that. Once the model is trained, we deploy it and have humans create new examples that the model fails to classify; crucially, we're looking for examples the model does not get right but that humans have no issue answering. By playing this game of whack-a-mole, where humans figure out the holes in the model's understanding, we add those examples back into the training data, retrain the model, deploy it again, and have humans create new examples, we can construct a never-ending test set, which can hopefully be a better proxy for estimating real-world performance.

This is really cutting-edge research, and one of the main challenges for this class of work is that it's unclear how much it can scale, because after multiple iterations of this whack-a-mole, humans may just be fundamentally limited by creativity. Figuring out how to deal with that is an open problem; current approaches use examples from other datasets to prompt humans to think more creatively, but maybe we can come up with better, more automated methods of doing this.
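Here is a schematic of that whack-a-mole loop as a small Python sketch; the two function arguments are placeholders for your own training code and for the human-in-the-loop annotation step, not any particular platform's API:

```python
def dynamic_benchmark_loop(train_data, rounds, train_model, collect_adversarial_examples):
    """Sketch of the dynamic-benchmarking loop described above."""
    model = train_model(train_data)
    dynamic_test_rounds = []
    for _ in range(rounds):
        # Humans write examples the current model gets wrong but people answer easily.
        new_examples = collect_adversarial_examples(model)
        dynamic_test_rounds.append(new_examples)   # the ever-growing dynamic test set
        train_data = train_data + new_examples      # fold them back into training
        model = train_model(train_data)             # retrain and redeploy
    return model, dynamic_test_rounds
```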
So this brings us to the final segment. Actually, let me stop for questions at this point and see if people have any.

Here's a question: with dynamic benchmarks, doesn't this mean the model creator will also need to continually retrain and evaluate the models on the new benchmark data? Yes, with dynamic benchmarks it's absolutely true that you will have to keep training your model, and that's just to ensure that the reason your model isn't doing well on the test set isn't some domain mismatch. What we're really trying to do is come up with a better estimate of the model's performance on the overall task by getting more and more data. So yes, you need to keep training the model again and again, but this can be automated.

Okay, so I'll move on to language grounding. In this final segment I'll talk about how we can move beyond training models on text alone. Many have articulated the need to use modalities other than text if we someday want to get at real language understanding, and ever since we've had these big language models there has been a rekindling of this debate; recently there were multiple papers on it. At ACL last year there was a paper that argues, through multiple thought experiments, that it's actually impossible to acquire meaning from form alone, where meaning refers to the communicative intent of a speaker and form refers to text or speech signals. A more moderate version of this was put forward by a second paper, which says that training only on web-scale text limits the "world scope" of models, and so limits the aspects of meaning a model can actually acquire. Here's a diagram I've borrowed from that paper. In the era when we trained models on supervised datasets, models were limited to World Scope 1. Now that we've moved on to exploiting unlabeled data, we're in World Scope 2, where models simply have strictly more signal from which to acquire more aspects of meaning. If you mix additional modalities into this, maybe some videos and some images, that expands the world scope of the model further, and now maybe it can acquire more aspects of meaning, such that it knows the lexical item "red" refers to red images, say. If you go beyond that, you can have a model that is embodied: it's actually living in an environment where it can interact with its data and conduct interventions and experiments. And if you go even beyond that, you can have models that live in a social world, where they can interact with other models, because after all the purpose of language is to communicate, and that expands the accessible aspects of meaning again. GPT-3, for reference, is in World Scope 2.

So there are a lot of open questions in this space. Given that there are all of these good arguments for moving beyond text, what is the best way to do it at scale? We know that babies cannot learn language from watching TV alone, for example, so there have to be interventions and interactions.
But at the same time, the question is how far models can go just by training on static data, as long as we have additional modalities, especially when we combine this with scale. And if interactions with the environment really are necessary, how do we collect data and design systems that interact minimally, or in a cost-effective way? Finally, will pre-training on text still be useful if any of these other research directions become more sample-efficient? If you're interested in learning more about this topic, I highly encourage you to take CS224U, which is offered in spring; they have multiple lectures on just language grounding.

Okay, so in this final segment I'll talk a little more about how you can get involved with NLP and deep learning research and how you can make progress. Here are some general principles. I think the most important thing is to read broadly, which means not just reading the latest and greatest papers on arXiv, but also reading pre-2010 statistical NLP. Learn about the mathematical foundations of machine learning to understand how generalization works, so take CS229M. Learn more about language, which means taking classes in the linguistics department; in particular I would recommend Ling 138, and also take CS224U. And finally, if you want to take inspiration from how babies learn, definitely read the child language acquisition literature; it's fascinating. Also, learn your software tools: scripting tools, version control, data wrangling, and how to visualize quickly with Jupyter notebooks. Deep learning often involves running multiple experiments with different hyperparameters and different ideas all in parallel, and sometimes it can get really hard to keep track of everything, so learn how to use experiment management tools like Weights & Biases.

Finally, some quick final project tips. Firstly, if your approach doesn't seem to be working, please do not panic. Put assert statements everywhere and check that the computations you're doing are correct. Use breakpoints extensively; I'll talk a bit more about this. Check that the loss function you've implemented is correct; one way of debugging that is to check the initial values. If you're doing a k-way classification problem, the initial loss should be the natural log of k. Always, always start by creating a small training set with five to ten examples and see if your model can completely overfit it; if not, there's a problem with your training loop. Check for saturating activations and dead units; often this can be fixed by fixing the initialization, since that's frequently where the gradient problems come from. Check your gradient values: if they're too small, maybe you should be using residual connections or LSTMs; if they're too large, you should use gradient clipping. In fact, always use gradient clipping. Overall, be methodical: if your approach doesn't work, come up with hypotheses for why that might be the case, design oracle experiments to debug it, look at your data and at the errors the model is making, and just try to be systematic about everything.
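Here is a minimal PyTorch sketch of two of those sanity checks, the "initial loss should be ln(k)" check and the "overfit a tiny dataset" check, plus gradient clipping; the model and the random data are toy placeholders standing in for your real setup:

```python
import math
import torch
import torch.nn as nn

k = 5
model = nn.Linear(100, k)                       # stand-in for your real model
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(8, 100), torch.randint(0, k, (8,))   # tiny toy dataset

# Sanity check 1: before any training, the loss should be close to ln(k).
with torch.no_grad():
    init_loss = loss_fn(model(x), y).item()
print(init_loss, math.log(k))                   # these should be roughly equal

# Sanity check 2: the model should be able to drive the loss to ~0 on a tiny dataset.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Gradient clipping guards against exploding gradients.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
print(loss.item())                              # near zero if the training loop is correct
```

If either check fails, the bug is almost certainly in your loss or training loop rather than in the model architecture, which is exactly why these checks are worth running first.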
Let me say a little more about breakpoints. There's this great library called pdb; it's like gdb, but for Python, hence pdb. To create a breakpoint, just add the line "import pdb; pdb.set_trace()" before the line you want to inspect. Earlier today I was trying to play around with the transformers library and do question answering. I have a really small training corpus: the context is "One morning I shot an elephant in my pajamas. How he got into my pajamas, I don't know", and the question is "What did I shoot?". To solve this, I imported a tokenizer and a BERT model, initialized the tokenizer, initialized the model, tokenized my input, set the model into eval mode, and tried to look at the output, but I get this error, and I'm very sad. It's not clear what's causing the error, and the best way to find out is to put a breakpoint. Right after model.eval() I put a breakpoint, because I know that's where the problem is; the problem is at line 21, so I put a breakpoint at line 21. Once the breakpoint is in, I just run my script again, and it stops before executing line 21, and at this point I can examine all of my variables. I can look at the tokenized input, because maybe that's where the problem is, and lo and behold I see that it's actually a dictionary of lists, whereas the model typically expects a torch tensor. Now I know what the problem is, which means I can quickly go ahead and fix it, and everything just works. This just shows that you should use breakpoints everywhere if your code isn't working; they can help you debug really quickly.
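Here's a compact reconstruction of that debugging session as a sketch; the specific checkpoint name is an assumption (any question-answering checkpoint would do), and the point is just the pdb workflow and the one-line fix:

```python
import pdb
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "distilbert-base-uncased-distilled-squad"   # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

context = "One morning I shot an elephant in my pajamas. How he got into my pajamas, I don't know."
question = "What did I shoot?"

inputs = tokenizer(question, context)   # subtle bug: this returns a dict of Python lists
model.eval()

# import pdb; pdb.set_trace()   # pausing here lets you inspect `inputs`...
# outputs = model(**inputs)     # ...and see why this call fails: lists, not tensors

# The fix: ask the tokenizer for PyTorch tensors directly.
inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)                # now the forward pass works
```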
Okay, finally I'll say that if you want to get involved with NLP and deep learning research, and you really liked the final project, we have the CLIPS program at Stanford. This is a way for undergrads, master's students, and PhD students who are interested in doing NLP research to get involved with the NLP group, so we highly encourage you to apply to CLIPS.

I'll conclude today's class by saying that we've made a lot of progress in the last decade, mostly due to a clever understanding of neural networks, data, and hardware, all combined with scale. We have some really amazing technologies that can do really exciting things, and we saw some examples of that today. In the short term, I expect we'll see more scaling, because it just seems to help, so perhaps even larger models. But this is not trivial: I said it before and I'll say it again, scaling requires really non-trivial engineering effort and sometimes even clever algorithms, so there's a lot of interesting systems work to be done here. In the long term, we really need to be thinking more about these bigger problems: systematic generalization; how we can make our models learn a new concept really quickly, which is fast adaptation; and creating benchmarks that we can actually trust, so that if my model gets some score on a sentiment analysis dataset and is then deployed in the real world, that real-world behavior is reflected in the number I get from the benchmark. We need to make progress in the way we evaluate models, and figuring out a way to move beyond text in a more tractable way is also really essential.

So that's it; good luck with your final projects. I can take more questions at this point.

I answered a question earlier that I think you could also speak to: whether a large model that's pre-trained on language will actually help you in other domains, for example when you apply it to vision. Yeah, the answer is actually yes. There was a paper that came out really recently, just two days ago, that takes one large transformer model pre-trained on text (I think it was GPT-2, I'm not sure) and applies it to images, and I think to math problems and some more modalities, and shows that it's actually really effective at transfer: if you pre-train on text and then move to a different modality, that helps. I think part of the reason is just that, across modalities, there is a lot of autoregressive structure that is shared. Another reason might be that language really refers to the world around it, so you might expect there to be some correspondence beyond the autoregressive structure. There are also works showing that if you have text-only representations and image-only representations, you can learn a simple linear classifier that aligns the two, and all of these works suggest there's a lot more in common between modalities than we thought at the beginning. So yes, it's possible to pre-train on text and then fine-tune on your modality of interest, and it should probably be effective, depending of course on the modality, but for images and videos it's certainly effective. More questions?

Well, a couple of questions have turned up. One is: what's the difference between CS224U and this class in terms of the topics covered and focus? Do you want to answer that one, Shikhar, or should I have a go at it? Maybe you should answer this one. Okay. So next quarter, CS224U, Natural Language Understanding, is co-taught by Chris Potts and Bill MacCartney. In essence, it's meant to be different in that natural language understanding focuses on what its name says: how to build computer systems that understand the sentences of natural language. Now, in truth, the boundary is kind of complex, because we do some natural language understanding in this class as well; certainly for the people doing the default final project, question answering, that's absolutely a natural language understanding task. But the distinction is meant to be that a lot of what we do in this class, things like the assignment 3 dependency parser or building the machine translation system in assignment 4, are in some sense natural language processing tasks, where "processing" can mean anything but commonly means doing useful, intelligent stuff with human language input without necessarily deeply understanding it. So there is some overlap between the classes; if you do CS224U you'll certainly see word vectors and transformers again, but the emphasis is on doing a lot more with natural language understanding tasks.
That includes things like building semantic parsers, the kind of devices that respond to questions and commands as an Alexa or Google Assistant does; building relation extraction systems, which pull particular facts out of a piece of text, such as that this person took on this position at this company; looking at grounded language learning and grounded language understanding, where you're using not only the language but the world context to get information; and other tasks of that sort. You can look at the website for more details. Relevant to this class, a lot of people also find CS224U an opportunity to go further with a project in natural language processing: by the nature of the structure of the class, which assumes people already know how to build deep learning natural language systems at the start, there isn't a large percentage of the quarter devoted to assignments (although there are small assignments early on), so there's more time to work on a project for the quarter.

Okay, here's one more question that maybe Shikhar could take: do you know of attempts to crowd-source dynamic benchmarks, e.g., users uploading adversarial examples for evaluation, or online learning? Yes, the main idea there is exactly to use crowdsourcing. In fact, there is a platform created by FAIR called Dynabench, and the objective is that, to construct this dynamically evolving benchmark, we offload it to the users of the platform. It essentially gives you utilities for deploying your model and then having humans try to fool it. That's basically how dynamic benchmark collection works: you deploy a model on some platform and then you get humans to fool the system.

There's a question: can you address the problem of NLP models not being able to remember really long contexts, and techniques for inference over really large input lengths? There have been a few works recently that try to scale up transformers to really long context lengths; one of them is the Reformer, and there's also Transformer-XL, which I think was the first to try to do that. What's unclear is whether you can combine that with the scale of these GPT-like models, and whether you see qualitatively different things once you do; part of it is just that all of this is so recent. The open question is whether you can take these long-context transformers, combine them with the scale of GPT-3, and get models that can actually reason over really large contexts. The hypothesis of scale is that once you train language models at scale, these abilities start to appear, so to get that for long contexts we actually need long-context transformers trained at scale, and I don't think people have done that yet.

I'm seeing this other question about language acquisition. Chris, do you have some thoughts on this, or should I take it? The question is: what do you think we can learn from baby language acquisition? Can we build a language model in a more interactive way, for example with reinforcement learning, and do you know of any such attempts? Oh, that's a big, huge question.
I think the short, non-helpful answer is that there are kind of no answers at the moment. People have certainly tried to do things at various scales, but we just have no technology that is the least bit convincing for replicating the language-learning ability of a human child. But after that prologue, what I can say is that there are definitely ideas to have in your head. There are some fairly clear results: little kids don't learn by watching videos, so it seems like interaction is completely key. Little kids also don't learn from language alone; they're in a very rich environment where they're learning from the environment in general, and in particular they learn a lot from what language acquisition researchers refer to as joint attention, which is different from what we mean by attention: the caregiver will be looking at the object that's the focus of interest, and commonly doing other things as well, like picking it up and bringing it near the kid. And babies and young kids get to experiment a lot: whether it's learning what happens when you stack up blocks and play with them or learning language, you experiment by trying things and seeing what kind of response you get, which again builds on the interactivity of it; you're getting some kind of response to any utterance you make. This has been hotly debated in the language acquisition literature. The traditional Chomskyan position is that human beings don't get effective feedback, meaning supervised labels, when they talk, and in some very narrow sense that's true: it's just not the case that after a baby tries to say something, it gets feedback of the form "syntax error in English on word four", or gets handed the semantic form that was taken away from its utterance. But in a more indirect way, kids clearly get enormous feedback: they can see what kind of response they get from their caregiver at every turn. So when the question suggests we should be making use of reinforcement learning, because we have something like a reward signal there, in a big-picture way I'd say yes, I agree. In terms of a much more specific answer about how we could possibly get that to work for something with the richness of human language, I think we don't have much idea. But there has started to be some work: people have been building virtual environments in which you have an avatar that can manipulate the environment, there's linguistic input, and it can get rewards for carrying out a command, where the command can be something like "pick up the orange block". To a small extent, people have been able to build things that work. As you might be picking up, though, so far I've been kind of underwhelmed, because the complexity of what's been achieved is so primitive compared to the full complexity of language: the kinds of languages people have gotten systems to learn are ones that can do pick-up commands and distinguish "blue cube" from "orange sphere", and that's about as far as people have gotten. It's such a teeny, small corner of what's involved in learning a human language.

One thing I'll add is that there are some principles of how kids learn that people have tried to apply to deep learning, and one example that comes to mind is curriculum learning. There's a lot of literature showing that babies tend to pay attention to things that are just slightly challenging for them: they don't pay attention to things that are extremely challenging, and they don't pay attention to things they already know how to solve. Many researchers have really tried to get curriculum learning to work, and the verdict is that it seems to kind of work in reinforcement learning settings, but it's unclear whether it works in supervised learning settings. I still think it's underexplored, and there should probably be more attempts to see whether adding curriculum learning improves anything. Yes, I agree; curriculum learning is an important idea which we haven't really talked about, and it seems essential to human learning. There have been some minor successes with it in the machine learning world, but it seems like an idea you should be able to do a lot more with in the future, as you move from models that do one narrow task toward a more general language acquisition process.

Should I attempt the next question as well? Okay. The next question is: is the reason humans learn languages better just that we are pre-trained over millions of years of physics simulation, and maybe we should pre-train a model the same way? I presume that by "millions of years of physics simulation" you're evoking evolution. This is a controversial, much-debated big question. If I invoke Chomsky again: Noam Chomsky is the most famous linguist in the world, and essentially his career, starting in the 1950s, is built around the idea that little children get such dubious linguistic input (they hear a random bunch of stuff, they don't get much feedback on what they say, and so on) that language could not be learned empirically just from the data observed, and the only possible assumption to work from is that significant parts of human language are innate, in the human genome: babies are born with it, and that explains the miracle by which very little humans learn amazingly fast how human languages work. Now, to speak in credit of that idea, for those of you who have not been around little children: human language acquisition by little kids really does seem miraculous. You go through a slow phase for a couple of years where the kid goos and gahs some syllables, then a fairly long period where they've picked up a few words and can say "juice, juice" when they want to drink some juice and nothing else, and then there just seems to be this phase change where the kid suddenly realizes, wait, this is a productive, generative sentence system, I can say whole sentences. And in an incredibly short period they seem to transition from one- and two-word utterances to suddenly saying things like "daddy come home in garage" or "putting bike in garage", and you go, wow, how did they suddenly discover language? So it is kind of amazing. But personally, at least, I've never believed the strong versions of the hypothesis that human beings have much in the way of language-specific knowledge or structure in their brains that comes from genetic inheritance. Clearly humans have these very clever brains, and if we're talking about being able to think or to interpret the visual world, those are things that developed over tens of millions of years, and evolution can be a large part of the explanation; humans are clearly born with lots of vision-specific hardware in their brains, as are a lot of other creatures. But when it comes to language, no one knows when language in something like its modern form first became available, because there aren't any fossils of people saying the word "spear" or anything like that. To the extent that there are estimates, based on what you can see of the spread of proto-humans and their apparent social structures from what you can find in fossils, most people guess that language is at most a million years old, and that's just too short a time for evolution to build any significant structure inside human brains that's specific to language. So I think the working assumption has to be that there's just about nothing specific to language in human brains, and the most plausible hypothesis (not that I know very much about neuroscience, when it comes down to it) is that humans were able to repurpose hardware that was originally built for other purposes, like visual scene interpretation and memory, and that gave a basis of clever hardware that could then be used for language. It's kind of like how GPUs were invented for playing computer games and we were able to repurpose that hardware to do deep learning.

We've got a lot of questions that have come out at the end. Okay, this one can be answered live. If you could name, I guess this is for either of you, one main bottleneck: assuming we could provide feedback efficiently to our systems the way babies are given feedback, what's the bottleneck that remains in trying to have more human-like language acquisition? I can opine on this again, or would you like to start with something, Shikhar? I was just going to say that I think it's a bit of everything. In terms of models, one thing I'll say is that we know there are more feedback connections than feedforward connections in the brain, and we haven't really figured out a way of using that knowledge. Of course we had RNNs, which in some sense implement a feedback loop as you step through a sequence, but we still haven't really figured out how to take the fact that the brain has a lot of feedback connections and apply it to practical systems, so maybe that's one problem on the modeling side. Curriculum learning is maybe another. But I think the one that's probably going to have the most bang for the buck is really figuring out how we can move beyond text. There's just so much more information available that we're not using, so I think that's where most of the progress might come from: figuring out the most practical way of going beyond text. That's what I think.

Okay, let's see: what are some important NLP topics that we have not covered in this class? I'll do that one. Well, one answer is a lot of the topics that are covered in CS224U, because we do make a bit of an effort to keep them disjoint. There are lots of topics in language understanding that we haven't covered: if you want to make a voice assistant like Alexa, Siri, or the Google Assistant, you need to be able to interface with systems and APIs that can do things like delete your mail or buy you concert tickets, and so you need to be able to convert from language into an explicit semantic form that can interact with the systems of the world. We haven't talked about that at all. There's also a lot in language generation. Effectively, for language generation all we have done is neural language models: they are great, run them and they will generate language. In one sense that's true, and it's just awesome the kind of generation you can do with things like GPT-2 or GPT-3, but what's missing is that this really only gives you the ability to produce fluent text that rambles on. If you actually wanted to have a good natural language generation system, you'd also have to have higher-level planning of what you're going to talk about and how you're going to express it. In most situations of natural language use, you think: okay, I want to explain to people why it's important to take math classes in college; let me think how to organize this; maybe I should talk about some of the different applications where math turns up and how it's a really good grounding; you plan out how to present your ideas. That kind of natural language generation we haven't done any of. So that's more understanding and more generation, which is most of NLP, you could say; and then obviously there are particular tasks we either have or haven't explicitly addressed.

Okay: has there been any work on putting language models into an environment in which they can communicate to achieve a task, and do you think this would help with unsupervised learning? There's been a lot of work on emergent communication and also self-play, where you have these different agents, initialized as language models, that attempt to communicate with each other to solve some task, and there's a reward at the end depending on whether they were able to finish the task or not; based on that reward, you attempt to learn a communication strategy. This started out as emergent communication and self-play, and then there was recent work, I think at ICLR last year or the year before, showing that if you initialize these models with language model pre-training, you basically prevent the problem of language drift, where the communication protocol your models end up learning has nothing to do with actual language. So there has been some work, but it's quite limited; there are some groups that study this, but not much beyond that.

Okay, the last question is about the importance of social cues, as opposed to a pure reward-based system; I don't know if either of you has opinions about this. I don't have anything very deep to say about it. In some sense, a social cue you could also regard as a reward: people like to have other people put a smile on their face when they say something. But more generally, when people ask what we haven't covered, another thing we've barely covered is the social side of language. A huge, interesting thing about language is that it has this very big dynamic range. On the one hand, you can talk about very precise things in language, like math formulas and steps in a proof, so there's a lot of precision in language. On the other hand, you can just emphatically mumble whatever words at all, and you're not really communicating anything in the way of propositional content; what you're really trying to communicate is "I'm thinking about you right now", "I'm concerned with how you're feeling", or whatever it is in the circumstances. So a huge part of language use is a form of social communication between human beings, and that's another big part of actually building successful natural language systems. If you think negatively about something like the virtual assistants I've been falling back on a lot: they have virtually no ability as social language users. We're now training a generation of little kids that what you should do is bark out commands as if you were serving in the German army in World War II, and there's none of the social part of how to use language to communicate satisfactorily with human beings and to maintain a social system. That's a huge part of human language use that kids have to learn and learn to use successfully. A lot of being successful in the world is knowing that when you want someone to do something for you, there are good ways to ask them for it: some of it is the choice of how to present the arguments, but some of it is building social rapport, asking nicely and reasonably, and making it seem like you're a sweet person that other people should do something for. Human beings are very good at that, and being good at that is a really important skill for being able to navigate the world well.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2021_Lecture_1_Intro_Word_Vectors.txt
Hi everybody. Welcome to Stanford CS224N, also known as Ling 284: Natural Language Processing with Deep Learning. I'm Christopher Manning and I'm the main instructor for this class. What we hope to do today is dive right in, so I'm going to spend about ten minutes talking about the course and then we're going to get straight into content, for reasons I'll explain in a minute. We'll talk about human language and word meaning; I'll then introduce the ideas of the word2vec algorithm for learning word meaning; from there we'll work concretely through how you can work out objective function gradients for the word2vec algorithm, and say a tiny bit about how optimization works; and then right at the end of the class I want to spend a little time giving you a sense of how these word vectors work and what you can do with them. Really, the key learning for today is that I want to give you a sense of how amazing deep learning word vectors are. We have this really surprising result that word meaning can be represented, not perfectly but really rather well, by a large vector of real numbers. That's in a way a commonplace of the last decade of deep learning, but it flies in the face of thousands of years of tradition, and it's really rather an unexpected result to start focusing on.

Okay, so quickly: what do we hope to teach in this course? We have three primary goals. The first is to teach you the foundations: a good, deep understanding of the effective, modern methods for deep learning applied to NLP. So we're going to start with and go through the basics, and then go on to key methods that are used in NLP: recurrent networks, attention, transformers, and things like that. We want to do something more than just that: we'd also like to give you some sense of a big-picture understanding of human languages and of why they're actually quite difficult to understand and produce, even though humans seem to do it easily. Obviously, if you really want to learn a lot about this topic you should enroll in and start doing some classes in the linguistics department, but nevertheless, for a lot of you this is the only human language content you'll see during your master's degree or whatever, and so we do hope to spend a bit of time on that, starting today. And finally, we want to give you an understanding of, and an ability to build, systems in PyTorch for some of the major problems in NLP: we'll look at learning word meanings, dependency parsing, machine translation, and question answering.

Let's dive in to human language. Once upon a time I had a much longer introduction that gave lots of examples of how human languages can be misunderstood and complex. I'll show a few of those examples in later lectures, but since today we're focused on word meaning, I thought I'd just give one example, which comes from a very nice xkcd cartoon. It isn't about the syntactic ambiguities of sentences; instead, it's really emphasizing the important point that language is a social system, constructed and interpreted by people, and it changes as people decide to adapt its construction. That's part of the reason why human languages are great as an adaptive system for human beings, but difficult as a system for our computers to understand to this day. So in this conversation between the two women, one says, "Anyway, I could care less," and the other says,
"I think you mean you couldn't care less; saying you could care less implies you care at least some amount." And the first one replies: "I don't know. We're these unbelievably complicated brains drifting through a void, trying in vain to connect with one another by blindly flinging words out into the darkness. Every choice of phrasing and spelling and tone and timing carries countless signals and contexts and subtexts and more, and every listener interprets those signals in their own way. Language isn't a formal system; language is glorious chaos. You can never know for sure what any words will mean to anyone. All you can do is try to get better at guessing how your words affect people, so you can have a chance of finding the ones that will make them feel something like what you want them to feel. Everything else is pointless. I assume you're giving me tips on how you interpret words because you want me to feel less alone. If so, then thank you: that means a lot. But if you're just running my sentences past some mental checklist so you can show off how well you know it, then I could care less."

So that's ultimately what our goal is: to do a better job of building computational systems that get better at guessing how their words will affect other people, and at understanding what other people mean by the words they choose to say.

An interesting thing about human language is that it is a system that was constructed by human beings, and constructed relatively recently, in some sense. In discussions of artificial intelligence, a lot of the time people focus on human brains and the neurons buzzing away, and this intelligence that's meant to be inside people's heads. But I just want to focus for a moment on the role of language. It's actually somewhat controversial, but it's not necessarily the case that humans are much more intelligent than some of the higher apes like chimpanzees or bonobos. Chimpanzees and bonobos have been shown to be able to use tools and make plans, and in fact chimps have much better short-term memory than human beings do. Relative to that, if you look through the history of life on earth, human beings developed language really recently. How recently, we actually don't know, because there are no fossils that say "here's a language speaker", but most people estimate that language arose for human beings somewhere in the range of a hundred thousand to a million years ago. That's a while ago, but compared to the process of the evolution of life on earth, it's the blink of an eye. Yet that power of communication between human beings quickly set off our ascendancy over other creatures. It's kind of interesting that the ultimate power turned out not to be poisonous fangs or being super fast or super big, but having the ability to communicate with other members of your tribe. It was much more recently again that humans developed writing, which allowed knowledge to be communicated across distances of time and space, and that's only about five thousand years old. In just a few thousand years, the power of writing, the ability to preserve and share knowledge, took us from the Bronze Age to the smartphones and tablets of today. So a key question for artificial intelligence and human-computer interaction is how to get computers to be able to understand the information conveyed in human languages. Simultaneously, artificial intelligence requires computers that have the knowledge of people.
Fortunately, our AI systems might now be able to benefit from a virtuous cycle: we need knowledge to understand language and people well, but it's also the case that a lot of that knowledge is contained in language, spread out across the books and web pages of the world. One of the things we're going to look at in this course is how we can build on that virtuous cycle.

A lot of progress has already been made, and I just want to very quickly give a sense of that. In the last decade or so, and especially in the last few years with newer methods of machine translation, we're now in a space where machine translation really works moderately well. Against the history of the world, this is just amazing: for thousands of years, learning other people's languages was a human task which required a lot of effort and concentration. But now you can just hop on your web browser and think, "I wonder what the news is in Kenya today," head off to a Kenyan website, see a story, and ask Google to translate it for you from Swahili. The translation isn't quite perfect, but it's reasonably good: "The newspaper Tuko has been informed that the local government minister and his transport counterpart died within two separate hours." "Within two separate hours" is kind of awkward, but essentially we're doing pretty well at getting the information out of this page, and that's quite amazing.

The single biggest development in NLP over the last year, certainly in the popular media, was GPT-3, which was a huge new model released by OpenAI. What GPT-3 is about and why it's great is actually a bit subtle, so I can't really go through all the details here, but it's exciting because it seems like a first step on the path to what we might call universal models, where you can train up one extremely large model on something like that library picture I showed before, and it just has knowledge of the world, knowledge of human languages, knowledge of how to do tasks, and then you can apply it to do all sorts of things. No longer are we building a model to detect spam, and then a model to detect pornography, and then a model to detect foreign-language content, building all these separate supervised classifiers for every different task; we've now built up one model that understands a lot. Exactly what it does is just predict following words. On the left, it's being told to write about Elon Musk in the style of Dr. Seuss: it started off with some text, and then it generates more text, and the way it generates more text is literally by predicting, one word at a time, the words that come next to complete its text. But this gives a very powerful facility, because what you can do with GPT-3 is give it a couple of examples of what you'd like it to do. So I can give it some text: "I broke the window. Change it into a question: What did I break?"; "I gracefully saved the day. Change it into a question: What did I gracefully save?" This prompt tells GPT-3 what I'm wanting it to do. Then, if I give it another statement like "I gave John flowers," I can ask GPT-3 to predict what words come next, and it will follow my prompt and produce "Who did I give flowers to?" Or I can say "I gave her a rose and a guitar," and it will follow the idea of the pattern and produce "Who did I give a rose and a guitar to?"
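To make the prompting idea concrete, here is a minimal sketch of what such a few-shot prompt looks like as plain text; the model's continuation shown in the comment is the kind of output described above, and the call to an actual text-completion API is deliberately left out, since the exact interface isn't part of the lecture:

```python
# A few-shot prompt: two demonstrations of the task, then a new statement.
# The model only ever predicts following words; the pattern in the prompt
# is what tells it to turn statements into questions.
prompt = """\
I broke the window.
Question: What did I break?

I gracefully saved the day.
Question: What did I gracefully save?

I gave John flowers.
Question:"""

# A model like GPT-3 would be expected to continue with something like:
#   " Who did I give flowers to?"
print(prompt)
```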
Actually, this one model can then do an amazing range of things, including many that are quite surprising for it to be able to do at all. To give just one example, another thing you can do is get it to translate human language sentences into SQL; this could make it much easier to do CS145. Having given it a couple of examples of SQL translations of human language text, which this time I'm not showing because they won't fit on my slide, I can then give it a sentence like "How many users have signed up since the start of 2020?" and it turns it into SQL; or I can give it another query, "What is the average number of influencers each user is subscribed to?", and again it converts that into SQL. So GPT-3 knows a lot about the meaning of language, and about the meaning of other things like SQL, and it can fluently manipulate them.

Okay, so that leads us straight into this topic of meaning, and how we represent the meaning of a word. What is meaning? We could look at something like the Webster dictionary and say: the idea that is represented by a word; the idea that a person wants to express by using words, signs, etc. Those Webster's dictionary definitions really focus on the word "idea" somehow, but this is pretty close to the commonest way that linguists think about meaning: they think of word meaning as a pairing between a word, which is a signifier or symbol, and the thing that it signifies, the signified thing, which is an idea or thing. So the meaning of the word "chair" is the set of things that are chairs, and that's referred to as denotational semantics, a term that's also used and similarly applied for the semantics of programming languages. But this model isn't very deeply implementable: how do I go from the idea that "chair" means the set of chairs in the world to something I can use to manipulate meaning inside my computer? Traditionally, the way that meaning has normally been handled in natural language processing systems is to make use of resources like dictionaries and thesauri. In particular, a popular one is WordNet, which organizes words and terms into both synonym sets, words that can mean the same thing, and hypernyms, which correspond to "is a" relationships. For the "is a" relationships, we can look at the hypernyms of panda: a panda is a kind of procyonid, whatever those are (I guess that's the group with red pandas), which is a kind of carnivore, which is a kind of placental, which is a kind of mammal, and you head up this hypernym hierarchy.
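If you want to poke at WordNet yourself, NLTK exposes it; here is a small sketch (the synset identifiers follow NLTK's naming convention, and you need to have downloaded the WordNet data first):

```python
# pip install nltk; then once: python -c "import nltk; nltk.download('wordnet')"
from nltk.corpus import wordnet as wn

# Synonym sets ("synsets") for "good", grouped by part of speech and sense.
for synset in wn.synsets("good")[:5]:
    print(synset.name(), [lemma.name() for lemma in synset.lemmas()])

# Hypernym ("is a") chain for "panda": procyonid -> carnivore -> placental -> mammal -> ...
panda = wn.synset("panda.n.01")
hyper = lambda s: s.hypernyms()
print(list(panda.closure(hyper)))
```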
WordNet has been a great resource for NLP, but it's also been highly deficient. It lacks a lot of nuance: for example, in WordNet "proficient" is listed as a synonym for "good", and maybe that's sometimes true, but in a lot of contexts it's not, and you mean something rather different when you say "proficient" versus "good". It's also limited as a human-constructed thesaurus, so in particular there are lots of words and lots of uses of words that just aren't there, including anything that's more current terminology: "wicked" is there for the wicked witch but not for more modern colloquial uses, "ninja" certainly isn't there for the kind of description some people make of programmers, and it's kind of impossible to keep up to date, so it requires a lot of human labor. Even when you have all that, it has sets of synonyms but doesn't really have a good sense of which words mean something similar: "fantastic" and "great" mean something similar without really being synonyms. This idea of meaning similarity is something it would be really useful to make progress on, and it's where deep learning models excel.

So what's the problem with a lot of traditional NLP? The problem is that words are regarded as discrete symbols. We have symbols like "hotel", "conference", "motel", which in deep learning speak we refer to as a localist representation. That's because if you want to represent these symbols in statistical or machine learning systems, each of them is a separate thing, and the standard way of representing them (this is what you do in something like a logistic regression model with words as features) is as one-hot vectors: you have a dimension for each different word. So maybe here are my representations as vectors for "motel" and "hotel". That means we have to have huge vectors corresponding to the number of words in our vocabulary: a high school English dictionary probably has about 250,000 words in it, but there are many, many more words in the language really, so maybe we'd want at least a 500,000-dimensional vector to be able to cope with that. But the even bigger problem with discrete symbols is that we don't have a notion of word relationships and similarity. For example, in web search, if a user searches for "Seattle motel", we'd also like to match documents containing "Seattle hotel". Our problem is that we've got these one-hot vectors for the different words, and in a formal mathematical sense these two vectors are orthogonal: there's no natural notion of similarity between them whatsoever. There are some things we could try to do about that, and people did do about that before 2010: we could use WordNet synonyms and count things listed as synonyms as similar anyway, or we could somehow build up representations of words that have meaning overlap. People did all of those things, but they tended to fail badly from incompleteness. So instead, what I want to introduce today is the modern deep learning method, where we encode similarity in the real-valued vectors themselves.
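To see that orthogonality problem concretely, here is a tiny NumPy illustration with a made-up six-word vocabulary (a real vocabulary would be hundreds of thousands of words):

```python
import numpy as np

vocab = ["motel", "hotel", "seattle", "the", "a", "banking"]   # toy vocabulary
index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    v = np.zeros(len(vocab))
    v[index[word]] = 1.0
    return v

motel, hotel = one_hot("motel"), one_hot("hotel")
print(motel, hotel)
print(motel @ hotel)   # 0.0: one-hot vectors for different words are always orthogonal,
                       # so this representation encodes no notion of similarity at all
```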
So how do we go about doing that? The way we do it is by exploiting an idea called distributional semantics. The idea of distributional semantics, again, is something that when you first see it maybe feels a little bit crazy, because rather than something like denotational semantics, what we're now going to say is that a word's meaning is going to be given by the words that frequently appear close to it. J.R. Firth was a British linguist from the middle of the last century, and one of his pithy slogans that everyone quotes at this point is: "You shall know a word by the company it keeps." This idea, that you can represent a sense of a word's meaning through the contexts it appears in, has been a very successful idea, one of the most successful ideas used throughout statistical and deep learning NLP. It's actually an interesting idea more philosophically: there are interesting connections, for example, in Wittgenstein's later writings, where he became enamored of a use theory of meaning, and this is, in some sense, a use theory of meaning. Whether it's the ultimate theory of semantics is actually still pretty controversial, but it proves to be an extremely computational sense of semantics, which has led to it being used everywhere, very successfully, in deep learning systems.

So when a word appears in a text, it has a context, which is the set of words that appear nearby. For a particular word, my example here is "banking", we'll find a bunch of places where "banking" occurs in texts, and we'll collect the nearby words as context words, and we'll say that those context words appearing in that muddy brown color around "banking" will in some sense represent the meaning of the word "banking". While I'm here, let me mention one distinction that will come up regularly when we're talking about a word. In our natural language processing class we have two senses of "word", which are referred to as types and tokens. There's a particular instance of a word: in the first example, "government debt problems turning into banking crises", there's "banking" there, and that's a token of the word "banking". But then I've collected a bunch of instances of, quote unquote, "the word banking", and when I say "the word banking" with a bunch of examples of it, I'm treating "banking" as a type, which refers to the uses and meaning the word "banking" has across instances.

Okay, so what are we going to do with these distributional models of language? Based on looking at the words that occur in context, what we want to do is build up a dense, real-valued vector for each word that in some sense represents the meaning of that word, and the way it will represent the meaning of that word is that this vector will be useful for predicting other words that occur in the context of this one. In this example, to keep it manageable on the slide, the vectors are only eight-dimensional, but in reality we use considerably bigger vectors: a very common size is 300 dimensions. So for each word, that is, each word type, we're going to have a word vector. These are also used under other names: they're referred to as neural word representations, or, for a reason that will become clearer on the next slide, as word embeddings. These are a distributed representation, not a localist representation, because the meaning of the word "banking" is spread over all 300 dimensions of the vector. They're called word embeddings because, effectively, when we have a whole bunch of words, these representations place them all in a high-dimensional vector space, and so they're embedded into that space. Now, unfortunately, human beings are very bad at looking at 300-dimensional vector spaces, or even eight-dimensional vector spaces, so the only thing I can really display to you here is a two-dimensional projection of that space. Even that's useful, but it's also important to realize that when you're making a two-dimensional projection of a 300-dimensional space, you're losing almost all the information in that space, and a lot of things will be crushed together that don't actually deserve to be. Still, here are my word embeddings; of course you can't see any of them at this scale, but if I zoom in, and then zoom in further, what you'll see is that the representations we've learned distributionally do a good job of grouping together similar words.
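To give a feel for what "similar words group together" means operationally, here is a sketch of nearest-neighbor lookup by cosine similarity over a made-up table of dense vectors; with real 300-dimensional embeddings loaded from a trained model, the neighbors of a word come out looking like the clusters described next:

```python
import numpy as np

# Toy stand-in for a real embedding table: {word: dense vector}.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=8) for w in ["france", "germany", "japan", "come", "go", "banking"]}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def nearest(word, k=3):
    sims = {w: cosine(embeddings[word], v) for w, v in embeddings.items() if w != word}
    return sorted(sims.items(), key=lambda kv: -kv[1])[:k]

print(nearest("france"))   # with trained vectors, other countries would top this list
```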
Zooming into one part of the space -- the part that's up here in this view -- we find words for countries, and not only are countries generally grouped together, even the particular subgroupings of countries make a certain amount of sense; down here we then have nationality words. If we go to another part of the space, we see different kinds of words. Here are verbs: ones like "come" and "go" are very similar; saying and thinking words -- say, think, expect -- are similar to each other; and nearby, over in the bottom right, we have verbal auxiliaries and copulas, so "have," "had," "has," and forms of the verb "to be." Certain contentful verbs that describe states -- "he remained angry," "he became angry" -- are similar to copula verbs, so they're grouped close to the verb "to be." So there's a lot of interesting structure in this space, and that structure represents the meaning of words.

The algorithm I'm going to introduce now is called word2vec, which was introduced by Tomas Mikolov and colleagues in 2013 as a framework for learning word vectors. It's a simple and easy-to-understand place to start. The idea is: we have a lot of text from somewhere, which we commonly refer to as a corpus of text -- "corpus" is just the Latin word for body, so it's a body of text. We choose a fixed vocabulary, which will typically be large but nevertheless truncated, so we get rid of some of the really rare words -- we might have a vocabulary size of 400,000 -- and we create for ourselves a vector for each word. Then what we want to do is work out a good vector for each word, and the really interesting thing is that we can learn these word vectors from just a big pile of text, by doing this distributional similarity task of predicting which words occur in the context of other words. In particular, we're going to iterate through the words in the text, so at any moment we have a center word c and context words outside of it, which we'll call o. Based on the current word vectors, we calculate the probability of a context word occurring given the center word, according to our current model. But we know which words actually did occur in the context of that center word, so what we want to do is keep adjusting the word vectors to maximize the probability assigned to the words that actually occur in the context of the center word, as we proceed through these texts.

To make that a bit more concrete: we have a piece of text, and we choose our center word, which here is "into." We have a model for predicting the probability of context words given the center word -- we'll come to that model in a minute, but it's defined in terms of our word vectors -- so let's see what probability it gives to the words that actually occurred in the context of this word. It gives them some probability, but it'd be nice if those probabilities were higher. So how can we change our word vectors to raise those probabilities? We'll do some calculations with "into" as the center word, and then we just go on to the next word, do the same kind of calculations, and keep on chugging.

The big question, then, is how we work out the probability of a word occurring in the context of the center word, and that's the central part of the word2vec objective. Here's the overall model we want to use. For each position in our corpus -- our body of text -- we want to predict the context words within a window of fixed size m, given the center word at that position, and we want to become good at doing that: we want to give high probability to words that actually occur in the context. So we work out what's formally the data likelihood -- how good a job we do at predicting words in the context of other words. That likelihood is defined in terms of our word vectors, which are the parameters of our model, and it's calculated by taking a product over using each word as the center word, and then a product over each word in a window around it, of the probability of predicting that context word given the center word.

To learn this model, we have an objective function -- sometimes also called a cost or a loss -- that we want to optimize. Essentially, we want to maximize the likelihood of the context words we see around center words. Following standard practice, we fiddle that slightly: rather than dealing with products, it's easier to deal with sums, so we work with the log likelihood, and once we take logs all of our products turn into sums. We also work with the average log likelihood, so there's a 1/T term for the number of words in the corpus. And finally, by convention we like to minimize our objective function rather than maximize it, so we stick a minus sign in there. Minimizing this objective function J(theta) then corresponds to maximizing our predictive accuracy.

OK, so that's the setup, but we still haven't made any progress on how to calculate the probability of a word occurring in the context of the center word. The way we're actually going to do that is that we have vector representations for each word, and we work out the probability purely in terms of those word vectors. At this point there's a little technical detail: we're actually going to give each word two word vectors -- one for when it's used as the center word and a different one for when it's used as a context word. This is done because it simplifies the math and the optimization; it seems a little ugly, but it actually makes building word vectors a lot easier, and we can come back and discuss it later. Once we have these word vectors, the equation we're going to use for the probability of a context word appearing given the center word is the expression in the middle bottom of my slide. Let's pull it apart a little. For a particular center word c and a particular context word o, we look up the vector representation of each word -- they're u_o and v_c -- and then we simply take the dot product of those two vectors. The dot product is a natural measure of similarity between words: in any particular dimension, if both components are positive, it adds to the dot product; if both are negative, it also adds a lot to it; if one is positive and one is negative, it subtracts from the similarity measure; and if either is zero, it doesn't change it. So it seems like a plausible idea to take the dot product and say that if two words have a larger dot product, they're more similar. After that, we're really doing nothing more than using dot products to represent word similarity and then turning them into a probability distribution.
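Written out in standard notation, the objective and the probability model being described here look like the following, using T for the length of the corpus, m for the window size, V for the vocabulary, and theta for the collection of all word vectors; the second formula is the slide expression that the next passage unpacks:

$$
J(\theta) \;=\; -\frac{1}{T}\sum_{t=1}^{T} \;\sum_{\substack{-m \le j \le m \\ j \neq 0}} \log P\!\left(w_{t+j} \mid w_t ; \theta\right),
\qquad
P(o \mid c) \;=\; \frac{\exp\!\left(u_o^{\top} v_c\right)}{\sum_{w \in V} \exp\!\left(u_w^{\top} v_c\right)}.
$$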
Let's do the simplest thing we know to turn these dot products into a probability distribution. What do we do? Well, first, a dot product of two vectors might come out positive or negative, but we can't have negative probabilities. A simple way to avoid negative probabilities is to exponentiate, because then everything is positive, so we always get a positive number in the numerator. But for probabilities we also want the numbers to add up to one, so we normalize in the obvious way: we divide through by the sum of that numerator quantity over every word in the vocabulary. That necessarily gives us a probability distribution.

What I've just described is the softmax function. The softmax takes any vector in R^n and turns its components into numbers between zero and one that sum to one, so we can take any numbers, put them through the softmax, and get a probability distribution out. The name comes from the fact that it's sort of like a max: because we exponentiate, the biggest values really dominate when we calculate the similarities, so most of the probability goes to the most similar things. It's called "soft" because it doesn't do that absolutely -- it still gives some probability to everything that's even slightly similar. It's a slightly weird name, though, because max normally takes a set of things and returns just one of them, the biggest, whereas the softmax takes a set of numbers, rescales them, and returns a whole probability distribution.

OK, so now we have all the pieces of our model. How do we make our word vectors? The idea is that we want to fiddle our word vectors in such a way that we minimize our loss -- that is, we maximize the probability of the words that we actually saw in the context of the center word. Theta represents all of our model parameters in one very long vector. For our model here, the only parameters are the word vectors: each word has two vectors, its context vector and its center vector, each of them d-dimensional, where d might be 300, and we have V words, so we end up with a huge parameter vector of length 2dV. If you have a 500,000-word vocabulary and 300-dimensional vectors, that's 2 x 500,000 x 300 = 300 million numbers -- millions and millions of parameters that we somehow want to adjust to maximize the prediction of context words.

The way we're going to do that is with calculus. With this objective function, we can work out derivatives, so we can work out which direction is downhill, and we can progressively walk downhill to minimize the loss and improve our model. So our job is going to be to compute all of those vector gradients.
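As a quick aside, here is a minimal numpy sketch of that softmax, applied to the dot products between one center vector and every context vector. The random vectors and the max-subtraction trick for numerical stability are illustrative additions, not something from the lecture's slides.

```python
import numpy as np

def softmax(scores):
    """Turn a vector of real-valued scores into a probability distribution."""
    scores = scores - scores.max()   # for numerical stability; doesn't change the result
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

# U: context ("outside") vectors, one row per vocabulary word, shape (V, d)
# v_c: the center word's vector, shape (d,)
V, d = 10, 8
rng = np.random.default_rng(0)
U = rng.normal(size=(V, d))
v_c = rng.normal(size=d)

probs = softmax(U @ v_c)   # P(o | c) for every candidate context word o
print(probs.sum())         # 1.0
```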
At this point I want to show a bit more of how we actually do that, so let me switch to my interactive whiteboard. What we wanted to do: we had our overall objective J(theta) that we want to minimize, the average negative log likelihood,

$$
J(\theta) \;=\; -\frac{1}{T}\sum_{t=1}^{T} \;\sum_{\substack{-m \le j \le m \\ j \neq 0}} \log P\!\left(w_{t+j} \mid w_t\right),
$$

so we go through the T words of the text, and at each position t we look at the words j positions away, for j between -m and m, excluding the center word itself, and work out the log probability of the context word at that position given the word in the center position. And we converted that probability into our word vectors by saying

$$
P(o \mid c) \;=\; \frac{\exp\!\left(u_o^{\top} v_c\right)}{\sum_{w=1}^{V} \exp\!\left(u_w^{\top} v_c\right)}.
$$

Now what we want to do is work out the gradient -- the direction of downhill -- for this loss. The way we do that is by working out the partial derivative of this expression with respect to every parameter in the model, and all the parameters in the model are the components, the dimensions, of the word vectors of every word: the center word vectors and the outside word vectors. Here I'm just going to do the center word vectors; on a future homework, Assignment 2, the outside word vectors will show up, and they're quite similar. So we're working out the partial derivative, with respect to our center word vector v_c (which might be a 300-dimensional vector), of this log probability:

$$
\frac{\partial}{\partial v_c} \log \frac{\exp\!\left(u_o^{\top} v_c\right)}{\sum_{w=1}^{V} \exp\!\left(u_w^{\top} v_c\right)}.
$$

Things start off pretty easy: this is the log of a quotient, so we can turn it into the log of the numerator minus the log of the denominator.

Before I go further, a comment. At this point my audience divides. For some of you -- maybe a lot of you -- this is really elementary math you've seen a million times before, and I'm not even explaining it very well. If you're in that group, feel free to look at your email or the newspaper or whatever suits you best. But there are also people in the class who last saw calculus in high school, and so I want to spend a few minutes going through this concretely, to try to get across that even though most of deep learning, and even word vector learning, seems like magic, it's not really magic -- it's really just doing math, and one of the things we hope is that you actually understand the math that's being done.

So, splitting the log of the quotient, the expression above equals

$$
\frac{\partial}{\partial v_c} \log \exp\!\left(u_o^{\top} v_c\right) \;-\; \frac{\partial}{\partial v_c} \log \sum_{w=1}^{V} \exp\!\left(u_w^{\top} v_c\right).
$$

The numerator part is really easy: log and exp are inverses of each other, so they cancel, and we're left with the derivative with respect to v_c of just u_o^T v_c. One thing to be aware of is that we're doing multivariate calculus here -- calculus with respect to a vector, like you hopefully saw in Math 51 or some other place, not high school single-variable calculus. But to the extent that you half-remember this stuff, most of the time you can do perfectly well by thinking about one dimension at a time, and it generalizes to the multivariable case. If about all you remember of calculus is that d/dx of ax equals a, that's really the same thing we're using here. The dot product u_o^T v_c expands into u_o's component 1 times v_c's component 1, plus u_o's component 2 times v_c's component 2, and so on; when you take the derivative with respect to v_c's component 1, all that's left is u_o's component 1, and when you take the derivative with respect to component 2, all that's left is u_o's component 2. So the end result of taking the vector derivative of u_o^T v_c with respect to v_c is simply u_o. Great, that's progress.

Then we say: oh dear, we still have the denominator. That's slightly more complex, but not so bad. We want the partial derivative with respect to v_c of the log of the denominator, and the one tool we need to remember is the chain rule, which is what you use for derivatives of compositions of functions, f(g(v_c)). Here the outer function f is the log, and the inner function -- call its value z -- is the big sum. To apply the chain rule, we take the derivative of f at the point z, and here we have to remember that the derivative of log is the one-over-x function. So we get

$$
\frac{1}{\sum_{w=1}^{V} \exp\!\left(u_w^{\top} v_c\right)} \;\cdot\; \frac{\partial}{\partial v_c} \sum_{x=1}^{V} \exp\!\left(u_x^{\top} v_c\right).
$$

There's one trick here: we change the index of the inner summation to x, because we can get into trouble if we keep reusing the same variable. Now we still need the derivative of that sum, so we apply the chain rule once more: the derivative moves inside the sum, the derivative of exp is exp itself, and we multiply by the derivative of the inner dot product, which we've already worked out is u_x. So the derivative of the sum is

$$
\sum_{x=1}^{V} \exp\!\left(u_x^{\top} v_c\right) u_x.
$$

Now we're making progress. Putting it all together, the partial derivative with respect to v_c of the log probability is the numerator term, which was just u_o, minus the denominator term:

$$
\frac{\partial}{\partial v_c} \log P(o \mid c) \;=\; u_o \;-\; \sum_{x=1}^{V} \frac{\exp\!\left(u_x^{\top} v_c\right)}{\sum_{w=1}^{V} \exp\!\left(u_w^{\top} v_c\right)}\, u_x,
$$

and this is where changing the summation variable became important. The interesting thing that has happened is that we've ended up with exactly the softmax probability we started with, so we can rewrite this more conveniently as

$$
\frac{\partial}{\partial v_c} \log P(o \mid c) \;=\; u_o \;-\; \sum_{x=1}^{V} P(x \mid c)\, u_x.
$$

That second term is an expectation: it's an average over all the context vectors, weighted by their probability according to the model. And it's always the case with these softmax-style models that what you get out for the derivative is the observed minus the expected: our model is good if, on average, it predicts exactly the context word vector that we actually see, and we're going to adjust the parameters of our model so that it does that as much as possible. Of course, as you'll find, you can never get close. If I say the word is "croissant" and ask which words are going to occur in its context, you can't answer that -- there are all sorts of sentences you could say that involve the word croissant. So our particular probability estimates are going to be pretty small, but nevertheless we want to fiddle our word vectors to make those estimates as high as we possibly can.
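To connect that result back to code, here is a small numpy sketch of the observed-minus-expected gradient for a single (center word, outside word) pair, with an optional finite-difference check; the variable names are illustrative, not from the course code.

```python
import numpy as np

def softmax(scores):
    scores = scores - scores.max()
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

def grad_center_vector(U, v_c, o):
    """Gradient of -log P(o | c) with respect to the center vector v_c.

    U   : (V, d) matrix of outside/context vectors
    v_c : (d,) center word vector
    o   : index of the context word that actually occurred
    """
    probs = softmax(U @ v_c)    # P(x | c) for every word x in the vocabulary
    expected = probs @ U        # sum_x P(x | c) * u_x  -- the "expected" context vector
    observed = U[o]             # u_o                   -- the "observed" context vector
    return expected - observed  # gradient of the *negative* log likelihood

# Tiny numerical check against a finite difference (optional but reassuring).
rng = np.random.default_rng(0)
U, v_c, o = rng.normal(size=(10, 8)), rng.normal(size=8), 3
loss = lambda v: -np.log(softmax(U @ v)[o])
eps, i = 1e-5, 0
numeric = (loss(v_c + eps * np.eye(8)[i]) - loss(v_c - eps * np.eye(8)[i])) / (2 * eps)
print(np.isclose(grad_center_vector(U, v_c, o)[i], numeric))   # True
```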
I've gone on about this math for a while without actually showing you what happens, so let me quickly show you a bit of what happens with word vectors. Here's a simple little IPython notebook, like the one you'll be using for Assignment 1. In the first cell I import a bunch of stuff: numpy for vectors, matplotlib for plotting, scikit-learn, which is your machine learning Swiss army knife, and Gensim, a package you may well not have seen before. Gensim is often used for word vectors; it's not really used for deep learning, so this is the only time you'll see it in this class, but if you just want a good package for working with word vectors in some other application, it's a good one to know about. In the second cell I load a particular set of word vectors: these are our GloVe word vectors that we made at Stanford in 2014, and I'm loading 100-dimensional vectors so that things run a little quicker while I'm up here. (Maybe I should have loaded those in advance... OK, I'm in business.)

So here are my word vectors for "bread" and "croissant," and you can already see that these two words look a bit similar: both are negative in the first dimension, positive in the second, negative in the third, positive in the fourth, negative in the fifth. So it looks like they might have a fair bit of dot product, which is what we want, because bread and croissant are kind of similar. But we can actually ask the model -- these are Gensim functions now -- what the most similar words are. I can ask for the words most similar to "croissant," and it tells me things like brioche, baguette, focaccia, which is pretty good; "pudding" is perhaps a little more questionable. Most similar to "usa": canada, america, u.s.a. with periods, united states -- pretty good. Most similar to "banana": coconuts, mangoes, bananas -- fairly tropical. Great.

Before finishing, though, I want to show you something more than just similarity, which is one of the amazing things people observed with these word vectors: you can actually do arithmetic in this vector space that makes sense. In particular, people suggested the analogy task. The idea is that you should be able to start with a word like "king," subtract out a male component, add back in a woman component, and then ask what word is over at that point in the space -- and what you'd like is for that word to be "queen." We do this with the same most_similar function, which, as well as taking positive words, can take negative words. You might wonder what's most negatively similar to "banana"; by itself that isn't very useful, because when you ask for the most negatively similar things, you tend to get crazy strings from the data set that don't mean much of anything. But if we put the two together, we can use most_similar with positives and negatives to do analogies: we ask for positive "king," negative "man," positive "woman," and find what's most similar to that point in the space. My analogy function does precisely that, calling most_similar with the two positive words and the one negative word. So we can try it out: man is to king as woman is to -- sorry, I haven't run my cells -- queen. That's great, and it works the other way around too: king is to man as queen is to woman. If this only worked for that one freakish example, you maybe wouldn't be very impressed, but it turns out that, while it's not perfect, you can do all sorts of fun analogies with this, and they actually work. Here's a good one: Australia is to beer as France is to what? You can think about what you'd expect the answer to be, and it comes out as champagne, which is pretty good. Or: pencil is to sketching as camera is to what? And it says photographing.
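For reference, the Gensim calls being demonstrated look roughly like this. This sketch pulls a public GloVe model through Gensim's downloader rather than the exact notebook file used in class, so the neighbors you get may differ a little.

```python
import gensim.downloader as api

# Load 100-dimensional GloVe vectors (a public model; the class notebook
# loads its own local copy, so results may differ slightly).
model = api.load("glove-wiki-gigaword-100")

print(model["croissant"][:5])            # first few dimensions of one word vector
print(model.most_similar("croissant"))   # nearest neighbours by cosine similarity

def analogy(a, b, c):
    """a is to b as c is to ?  e.g. analogy('man', 'king', 'woman') -> 'queen'."""
    return model.most_similar(positive=[b, c], negative=[a])[0][0]

print(analogy("man", "king", "woman"))         # hopefully 'queen'
print(analogy("australia", "beer", "france"))  # hopefully 'champagne'
```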
You can also do analogies with people. At this point I have to point out that this data and the model were built in 2014, so you can't ask anything about Donald Trump as president -- well, Trump is in there, but not as president. But I could ask something like: Obama is to Clinton as Reagan is to what? You can think about what you'd consider the right analogy there; the analogy it returns is Nixon. I guess whether you think that's a good analogy depends on what you think of Bill Clinton. You can also do linguistic analogies with it: something like tall is to tallest as long is to what, and it gives longest. So it really knows a lot about the meaning and behavior of words, and when these methods were first developed -- and hopefully still for you -- people were just gobsmacked by how well this actually worked at capturing the meaning of words. These word vectors then went everywhere, as a new representation that was so powerful for working out word meaning. So that's our starting point for this class; we'll say a bit more about them next time, and they're also the basis of what you're looking at for the first assignment.

Can I ask a quick question about the distinction between the two vectors per word? My understanding is that there can be several context words per word in the vocabulary, and I thought the distinction between the two vectors is that one is for the actual word and one is for the context word -- but if there are multiple context words, how do you pick just two?

Well, we're doing every one of them. Maybe I won't turn the screen share back on, but in the objective function there was a sum: you've got this big corpus of text, and you take a sum over every word appearing as the center word, and inside that there's a second sum over each word in the context. So you do count each word as a context word; for one particular term of that objective function you've got one particular context word and one particular center word, but you then sum over the different context words for each center word, and then over all the different choices of center word.

And to say just a sentence more about having two vectors: in some sense it's an ugly detail, but it was done to make things simple and fast. If you look at the math carefully and treat the two vectors as the same -- so you use the same vector for center and context and then work out the derivatives -- things get uglier. The reason is that when I'm iterating over all the choices of context word, sometimes the context word is going to be the same as the center word, and that messes up working out the derivatives, whereas with separate vectors that never happens, so it's easy. The kind of interesting thing is that having these two different representations ends up doing essentially no harm. My wave-my-hands argument for that is: since we move through each position of the corpus one by one, a word that is the center word at one moment is going to be a context word at the next moment, and the word that was a context word will have become the center word.
So you're doing the computation both ways in each case, and you should be able to convince yourself that the two representations for a word end up being very similar -- not identical, for technical reasons involving the ends of documents and things like that, but very, very similar. So effectively you tend to get two very similar representations for each word, and we just average them and call that the word vector, so when we use word vectors we just have one vector for each word. -- That makes sense, thank you.

I have a question, purely out of curiosity. When we projected the word vectors onto the 2D surface, we saw little clusters of words that are similar to each other, and then later, with the analogies, we saw that there are these directional vectors that sort of indicate "the ruler of" or "the CEO of." So I'm wondering: are there relationships between those relational vectors themselves? Is the "ruler of" vector similar to the "CEO of" vector, and very different from, say, the "makes a good sandwich with" vector? Is there any research on that?

That's a good question -- wow, you've stumped me already in the first lecture. I can't actually think of a piece of research on that, so I'm not sure I have a confident answer. It seems like a really easy thing to check once you have one of these sets of word vectors: for any relationship that's represented well enough, you should be able to see whether the difference vectors come out similar. We can look and see. -- That's totally OK, just curious.

Sorry, I missed the last bit of your answer to the first question: when you want to collapse the two vectors for the same word, did you say you usually take the average?

Different people have done different things, but the most common practice is -- and there's still a bit more I have to cover about running word2vec that we didn't get through today, so I've got more to do on Thursday -- once you've run your word2vec algorithm, your output is two vectors for each word, one for when it's the center word and one for when it's the context word, and typically people just average those two vectors and say, OK, that's the representation of the word croissant. That's what appears in the word vectors file, like the one I loaded. -- That makes sense, thank you.

My question is: if a word has two different meanings, or multiple different meanings, can we still represent it as the same single vector?

Yes, that's a very good question, and there is actually some content on that in Thursday's lecture, so I can say more about it then. The first reaction is that you kind of should be scared, because something I've said nothing about at all is that most words -- especially short, common words -- have lots of meanings. If you have a word like "star," it can be an astronomical object, or a film star, a Hollywood star, or something like the gold stars you got in elementary school, and we're taking all those uses of the word star and collapsing them together into one word vector. You might think that's really crazy and bad, but it actually turns out to work rather well. Maybe I won't go through all of that right now, because there's material on it in Thursday's lecture. -- I see; I guess I can look ahead at the slides for next time.

In this course, do we look at how to implement something like the full stack of a system like Alexa -- going from speech to actions -- or is it primarily text understanding?

This is an unusual quarter, but for this quarter there's a very clear answer, which is that there's also a speech class being taught, CS224S, by Andrew Maas. That class has been offered more irregularly -- sometimes only every third year -- but it's being offered right now, so if what you want is to learn about speech recognition and about methods for building dialogue systems, you should do CS224S. For this class, the vast bulk of it is working with text and doing various kinds of text analysis and understanding: we do tasks like some of the ones I mentioned, we do machine translation, we do question answering, and we look at how to parse the structure of sentences, things like that. In other years I sometimes say a little bit about speech, but since this quarter there's a whole different class focused on speech, that seemed a bit silly.

And what does the speech class itself cover? -- I'm now getting a bad echo, and I'm not sure if that's my fault or yours, but anyway, to answer: the speech class does a mix of things. The classic pure speech problems are speech recognition, going from a speech signal to text, and text-to-speech, going from text to a speech signal. Both of those are problems that are now normally done -- including by the cell phone that sits in your pocket -- using neural networks, and it covers both of them. But beyond that, the class covers quite a bit more, and in particular it starts off by looking at building dialogue systems -- something like Alexa, Google Assistant, or Siri -- asking: assuming you have a speech recognition and a text-to-speech system, so you have text in and text out, what are the ways people go about building dialogue systems like the ones I just mentioned?

I actually had a question: some people in the chat noticed that opposites end up really near each other, which seemed kind of odd, and I was also wondering about positive and negative valence, or affect. Is that captured well in this type of model?

Good question, good observation -- and the short answer, for both of those, is no: both of those are captured really, really badly. When I say really badly, what I mean is that if that's what you want to focus on, you've got problems; it's not that the algorithm doesn't work. Precisely what you find is that antonyms generally occur in very similar contexts: whether you're saying "John is really tall" or "John is really short," "that movie was fantastic" or "that movie was terrible," you get antonyms occurring in the same contexts, and because of that their vectors are very similar. And similarly for affect and sentiment words -- "great" and "terrible," for example -- their contexts are similar. So if you're just learning these predict-words-in-context models, no, that's not captured. That's not the end of the story: people absolutely wanted to use neural networks for sentiment and other kinds of connotation and affect, and there are very good ways of doing that, but somehow you have to do something more than simply predicting words in context, because that by itself isn't sufficient to capture that dimension. More on that later.
What about really basic little words too, like "so" and "not"? Those would appear in very similar contexts, right? -- Sorry, what was your first example, before "not"? -- "So," as in "this is so cool."

That's actually a good question as well. These very common words are what linguists refer to as function words, and that includes ones like "so" and "not," but also "and," and prepositions like "to" and "on." You might suspect that the word vectors for those don't work out very well, because they occur in all kinds of different contexts and, in many cases, aren't very distinct from each other. To a first approximation I think that's true, and it's part of why I didn't use those as examples in my slides. But at the end of the day we do build up vector representations of those words too, and you'll see in a few lectures' time, when we start building what we call language models, that they actually do a great job on those words as well. To explain what I mean there: another feature of the word2vec model is that it ignores the position of words. It predicts every word around the center word, but it predicts them all in the same way -- it isn't predicting the word before me differently from the word after me, or the word two away in either direction; they're all predicted by that one probability function. If that's all you've got, that destroys your ability to do a good job at capturing these common, more grammatical words like "so," "not," and "and." But we'll build slightly different models that are more sensitive to the structure of sentences, and then we start doing a good job on those too. -- OK, thank you.

I had a question about the characterization of word2vec, because it was slightly different from how I'd seen it presented elsewhere -- are these two complementary versions?

So, I've still got more to say, so stay tuned Thursday for more on word vectors. Word2vec is a framework for building word vectors, and there are several variant precise algorithms within the framework. One choice is whether you're predicting the context words or predicting the center word: the model I showed was predicting the context words, so it was the skip-gram model. Then there's a detail of how, in particular, you do the optimization, and what I presented was the easiest way to do it, which is naive optimization with the softmax equation for word vectors. It turns out that that naive optimization is needlessly expensive, and people have come up with faster ways of doing it; in particular, the commonest thing you see is what's called skip-gram with negative sampling, where negative sampling is a much more efficient way to estimate things. I'll mention that on Thursday. -- Right, OK, thank you.

Someone is asking for more information about how word vectors are constructed, beyond the summary of random initialization and then gradient-based iterative optimization.

I'll do a bit more connecting of this together in the Thursday lecture -- there's only so much one can fit in the first class -- but the picture is essentially the one I showed the pieces of. To learn word vectors, you start off by having a vector for each word type, both for the center role and the outside role, and you initialize those vectors randomly: you just put small, randomly generated numbers in each vector component, and that's your starting point. From there you run an iterative algorithm in which you progressively update those word vectors so they do a better job of predicting which words appear in the context of other words. The way we do that is with the gradients I was starting to show how to calculate: once you have a gradient, you can walk in the opposite direction of the gradient, and you're then walking downhill -- that is, minimizing your loss -- and we do lots of that until our word vectors get as good as possible. It's really all math, but in some sense word vector learning is miraculous, since you literally start off with completely random word vectors and run this algorithm of predicting words for a long time, and out of nothing emerge these word vectors that represent meaning well.
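To make that answer concrete, here is a deliberately tiny sketch of the whole loop -- random initialization followed by repeated gradient steps on the naive softmax objective from this lecture -- run on a toy corpus. It is written for readability rather than efficiency; real implementations use tricks like the negative sampling mentioned above, and the corpus, sizes, and learning rate here are placeholders.

```python
import numpy as np

corpus = "the cat sat on the mat and the dog sat on the rug".split()
vocab = sorted(set(corpus))
word2id = {w: i for i, w in enumerate(vocab)}
V, d, m, lr = len(vocab), 16, 2, 0.05            # vocab size, dimension, window, learning rate

rng = np.random.default_rng(0)
center_vecs = rng.normal(scale=0.1, size=(V, d))   # v vectors (random initialization)
context_vecs = rng.normal(scale=0.1, size=(V, d))  # u vectors (random initialization)

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

for epoch in range(200):
    for t, word in enumerate(corpus):
        c = word2id[word]
        window = [j for j in range(max(0, t - m), min(len(corpus), t + m + 1)) if j != t]
        for j in window:
            o = word2id[corpus[j]]
            probs = softmax(context_vecs @ center_vecs[c])    # P(x | c) for every word x
            # Gradients of -log P(o | c): "expected minus observed"
            grad_center = probs @ context_vecs - context_vecs[o]
            grad_context = np.outer(probs, center_vecs[c])
            grad_context[o] -= center_vecs[c]
            center_vecs[c] -= lr * grad_center                # walk downhill
            context_vecs -= lr * grad_context

word_vecs = (center_vecs + context_vecs) / 2   # average the two vectors, as discussed above
```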
Stanford CS224N: NLP with Deep Learning (2023) -- Lecture 14: Insights between NLP and Linguistics
Cool. Hi, everyone Hi, I'm Isabel. I'm a PhD student in the NLP group. It's about connecting insights between NLP and linguistics. Yeah. So hopefully, we're going to learn some linguistics and think about some cool things about language. Some logistics-- we're in the project part of the class, which is cool. We're so excited to see everything you guys do. You should have a mentor/grader assigned through your project proposal, the person whoever graded your project proposal, especially if you're on a custom project, we recommend that you go to your graders' office hours. They'll know the most and be most into your project. And project milestones are due next Thursday. So that's in one week from now. So hopefully, you guys are all getting warmed up doing some things for the project, and we'd love to hear where you are next week. Cool, so the main thing that I'm going to talk about today is that there's been kind of a paradigm shift for the role of linguistics in NLP due to large language models, right? So it used to be that there was just human language we created all the time. We're literally constantly creating it. And then we would like analyze it in all these ways. Maybe we want to make trees out of it. Maybe we want to make different types of trees out of it. And then, all that would kind of go into making some kind of computer system that can use language, right? And now we've cut out this middle part, right? So we have human language, and we can just immediately train a system that's very competent in human language. And so now we have all this analysis stuff from before, and we're still producing more and more of it, right? There's still all this structure, all this knowledge that we know about language. And the question is, is this relevant at all to NLP? And I'm going to show how it's useful for looking at these models, understanding these models, understanding how things work, what we can expect, what we can't expect from large language models. So, in this lecture, we'll learn some linguistics, hopefully. Language is an amazing thing. It's so fun to think about language, and hopefully we can instill some of that in you. Maybe you'll go take like Ling 1 or something after this. And we'll discuss some questions about NLP and linguistics, right? Where does linguistics fit in for today's NLP? And what does NLP have to gain from knowing and analyzing human language? What does a 224N student have to gain from knowing all this stuff about human language? So for the lecture today, we're going to start off talking about structure in human language, thinking about the linguistics of syntax and how structure works in language. We're going to then move on to looking at linguistic structure in NLP, in language models, the kind of analysis that people have done for understanding structure in NLP. And then we're going to think of going beyond pure structure, so beyond thinking about syntax, thinking about how meaning and how meaning and discourse and all of that play into making language language and how we can think of this both from a linguistics side and from a deep learning side. And then, lastly, we're going to look at multilinguality and language diversity in NLP. Cool. So starting off with structure in human language, just like a small primer in language in general, right, it's a kind of-- if you've taken any intro to linguistics class, you know all of this, but I think it's fun to get kind of situated in the amazingness of this stuff, right? 
So all humans have language, and no other animal communication is similar. This thing which is incredibly easy for any baby to pick up in any situation, and it's just this remarkably complex system. Very famously, linguists like to talk about the case of Nicaraguan Sign Language because it kind of emerged while people were watching in a great way. So after the Sandinista revolution, there's large public education in Nicaragua, and they made a school for deaf children. And there was no central Nicaraguan Sign Language. People had isolated language, and then you see this full language emerge in this school very autonomously, very naturally. I hope this is common knowledge. Maybe it's not. Signed languages are full languages, with morphology and things like pronouns and tenses and all the things. It's not like how I would talk to you across the room. Yeah. And what's cool about language is that it can be manipulated to say infinite things, right? And the brain is finite. So it seems like we have some kind of set of rules that we tend to be able to pick up from hearing them as a baby and then be able to say infinite things. And we can manipulate these rules to really say anything. We can talk about things that don't exist, things that can't exist. This is very different from the kind of animal communication we see, like a squirrel alarm call or something. It's like, watch out, there's a cat. Things are totally abstract, that have no grounding in anything-- we can express subtle differences between similar things. When I'm thinking about this point and this feature of language, I think of the Stack Exchange world building thing. I don't know if you've ever looked at the sidebar where there's thing where science fiction authors kind of pitch their ideas for their science fiction world. And it's the wackiest-- you can really create any world with English with the language that we're given. It's amazing. And so there's structure underlying language. I said "recap" here because we've done the dependency parsing lectures. We thought about this, right? But if we have some sentence, like "Isabelle broke the window." "The window was broken by Isabelle." Right? We have these two sentences-- there's some kind of relation between them. And then we have another two sentences that have a similar relation between them, right, this kind of passive alternation-- it's kind of something which exists for both of these sentences. And then we can even use made-up words, and you can still see that it's a passive alternation. And so it seems like we have some knowledge of structure that's separate from the words we use and the things we say, that's kind of above it. And then what's interesting about structure is that it dictates how we can use language. So if I have a sentence like "the cat sat on the mat," and then someone tells you, well, if you make a tree for it, it's going to look like this according to my type of tree theory. You would say, well, why should I care about that? And the reason that this stuff is relevant is because it kind of influences what you could do, right? So any subtree or-- in this specific case, any subtree. In other cases, many subtrees-- it can kind of be replaced with one item. So it's like "he sat on the mat" or "he sat on it" or "he sat there" or "he did so." "Did so" is two words, but there's a lot of ink spilled over do in English, especially in early linguistics teaching. So we're not going to spill any. It's kind of like one word. 
But then when something is not a subtree, you can't really replace it with one thing, right? So you can't express "the cat sat" and then have "the mat" as a different thing, right? In one, you could say "he did so on the mat." You'd have to do two things, and one way you could think about this is that, well, it's not a subtree, right? It's kind of like you kind of have to go up a level to do this. And so you can't really separate "the cat" from "on the mat" in this way. And we implicitly know so many complex rules about structure. We're processing these streams of sound or streams of letters all the time. And yet we have these-- the ways that we use them show that we have all these complex ideas, like the tree I just showed. Or for example, these are like-- I'm just going to give you some examples for a taste of the kinds of things people are thinking about now. But there's so many. So what can we pull out to make a question? So if we form a question, we form it by-- we're kind of referring to some part of-- there might be another sentence which is the statement version. And we've kind of pulled out some part to make the question. They're not necessarily fully related, but you know. So if "Leon is a doctor," we can kind of pull that out to make a question, like "what is Leon?" And if we have "my cat likes tuna," we can pull that out, "what does my cat like?" Again, "do"-- ignore the "do." If we have some "Leon is a doctor and an activist," we actually can't pull out this last thing, right? So if something's conjoined with an "and," it can't be taken out of that "and." So you could only say, "What is Leon?" You could be like, "oh, a doctor and an activist." But you can't really say, "What is the Leon a doctor and?" This is not how question formation works. And this is something that we all know, I think something without any of us having been taught. Even people who've been taught English as a second language, I don't think this is something which you're ever really taught explicitly. But most of us probably know this very well. Another such rule, right, is when can we kind of shuffle things around, right? So if we have something like "I dictated the letter to my secretary," right, we can make like a longer version of that. "I dictated the letter that I had been procrastinating writing for weeks and weeks to my secretary." This character is both a grad student and a high-ranking executive. And then we can move that long thing to the end. So it's like, "I dictated to my secretary of the letter that I'd been procrastinating writing for weeks and weeks," and that's fine. Maybe it's slightly awkwardly phrased. But it's not like-- I think this-- for me, at least, everyone varies, right, could appear in natural productive speech. But then something like this is much worse, right? So somehow, the fact that it becomes weighty is good, and we can move it to the end. But when it doesn't become weighty, we can't, right? And this sounds kind of more like Yoda-y than real language. And we have this rule. This one's not that easy to explain, actually. People have tried many ways to make sense of this in linguistics. But it's a thing we all know, right? And so when I say rules of grammar, these are not the kind of rules that we're usually taught as rules of grammar. So a community of speakers-- for example, standard American English speakers, they share this rough consensus of the implicit rules they all have. These are not the same. People have gradations and disagree on things. 
And then kind of like, a grammar is an attempt to describe all these rules. And you can kind of-- linguists might write out a big thing called the grammar of the English language, where they're trying to describe all of them. It's really not going to be large enough ever. This is a really hefty book. And it's still not describing all of them. Language is so complex. So what we are told is rules of grammar, these kind of prescriptive rules, where they tell us what we can and can't do, they often have other purposes than describing the English language, right? So for example, when they've told us things like, oh, you should never start a sentence with "and," that's not true. We start sentences with "and" all the time in English and it's fine. What they probably mean-- there's some probably reason that they're saying this, especially if you're trying to teach a high schooler to write. When you want them to focus their thoughts, you probably don't want them to be like, oh and this, oh and this. Again, you want them to-- until you tell them, oh, a rule of writing is you can never start a sentence with "and." And when they say something like, oh, it's incorrect to say I don't want nothing, this is bad grammar, this is-- in standard American English, you probably wouldn't have nothing there because you would have anything, but in many dialects of English, in many languages across the world, when you have a negation like the "not" and "don't" then everything it scopes over also has to be negated, also has to agree. And many dialects of English are like this. And so what they're really telling you is the dialect with the most power in the United States doesn't do negation this way, and so you shouldn't either in school. And so the way that we can maybe define grammaticality, rather than what they tell us is wrong or right, is that if we choose a community of speakers to look into, they share this rough consensus of their implicit rules. And so the utterances that we can generate from these rules are grammatical. Roughly, everyone has these gradations of what they can accept. And if we can't produce an utterance using these rules, it's ungrammatical. And that's where this is the descriptive way of thinking about grammar, where we're thinking about what people actually say and what people actually like and don't like. And so, for an example, in English, largely, we have a pretty strict rule that the subject, the verb, and the object appear in this SVO order. There's exceptions to this. There's exceptions to everything, especially things like "says I" in some dialects. But it is largely, if something is before the verb, it's a subject. If something is after the verb, it's an object. And you can't move that around too much. And we also have these subject pronouns, like I, she, he, they that have to be the subject and these object pronouns-- me, her, him, them-- that have to be the object. And so if we follow these rules, we get a sentence that we think is good, like "I love her." And if we don't, then we get a sentence that we think is ungrammatical, something like "Me love she." We don't know who is who-- who is doing the loving and who is being loved in this one, right? And it doesn't exactly parse. And this is also true-- even when there's no ambiguity, this continues to be true, right? So for a sentence like "me a cupcake ate," which is like-- the meaning is perfectly clear, our rules of grammaticality don't seem to cut us much slack, right? We're like, oh, this is wrong. 
I understand what you mean, but in my head, I know it's not correct, even not by the prescriptive notion of what I think is correct, by the descriptive notion. I just don't like it, right? And you can also-- sentences can be grammatical without any meaning. So you can have meaning without grammaticality, right, like "me a cupcake ate," and you can also a have classic example from Chomsky in 1957-- I introduced it earlier-- but yeah, classically from 1957, "Colorless green ideas sleep furiously," right, which this has no meaning because you can't really make any sense out of the sentence as a whole. But you know it's grammatical, and you know it's grammatical right, because you can make an ungrammatical version of it, like "colorless green ideas sleeps furious," right, which does make sense because there's no agreement, even though you don't have any meaning for any of this. And then, lastly, people don't fully agree. Everyone has their own idiolect, right? People usually speak more than one dialect, and they kind of move between them, and they have a mixture. And they also have their own way of thinking of things. They also have different opinions at the margins. People like some things more. Others don't. So an example of this is not everyone is as strict for some wh- constraints. So if you're trying to pull out something like "I saw who Emma doubted reports that would capture in the nationwide FBI manhunt" is from a paper by Hoffmeister and Ivan Sag from Stanford. This is like, some people like it. Some people don't. Some people can clearly see, oh, it's the who that we had captured, and Emma doubted the reports that we had captured. And some people are like, this is as bad as "what is Leon a doctor and--" I don't like it, right? Yeah, so that's grammaticality. And the question is why do we even need this? We accept these useless utterances, and we block out these perfectly communicative utterances, right? And I started off saying that this is a fundamental facet of human intelligence. It seems a strange thing to have. And so I think one thing I keep returning on when I think about linguistics is that a basic fact about languages is that we can say anything, right? Every language can express anything, and if there's no word for something, people will develop it if they want to talk about it. And so if we ignore the rules because we know what it's probably intended, then we would be limiting possibilities. So in my kitchen horror novel, where the ingredients become sentient, I want to say "the onion chopped the chef." And if people just assumed I meant the chef chopped the onion because SVO order doesn't really matter, then I can't say that. So then to conclude-- a fact about language that's very cool is that it's compositional. We have this set of rules that defines grammaticality, and then this lexicon, this dictionary of words that relate to the world we want to talk to, and we kind of combine them in these limitless ways to say anything we want to say. Cool, any questions about all this? I've tried to bring a lot of linguistic fun facts top of mind for this lecture. So I'll hopefully have answers for things if you want to ask. Cool, cool, yeah. So now that was a nice foray into a lot of '60s linguistics. How does that relate to us today in NLP? And so we said that, in humans, we can think about languages as there's a system for producing language that can be described by these discrete rules. So it's not like it's smaller than all the things that we can say. 
There's just kind of rules that we can put together to say things. And so do NLP systems work like that? And one answer is, well, they definitely used to, right, because, as we said in the beginning, before self-supervised learning, the way to approach doing NLP was through understanding the human language system, right, and then trying to imitate it-- if you think really hard about how humans do something, then you code up a computer to do it, right? And so one example is that parsing used to be super important in NLP, right? And this is because, as an example, if I want my sentiment analysis system to classify a movie review correctly, something like "my uncultured roommate hated this movie, but I absolutely loved it," how would we do this before we had ChatGPT? We might have some semantic representation of words like hate and uncultured. It's not looking good for the movie. But how does everything relate? Well, we might ask how a human would structure this sentence-- and there are many theories of how syntax might work-- but linguists would tell you something like this. So, OK, I know I'm interested in the "I," right, because that's probably what refers to the reviewer. There's worrying stuff about "uncultured" and "hated," but it seems like those are related syntactically together, right? It's like the "roommate hated"-- and that can't really connect to the "I," so the "I" can't really be related to the "hated" because they're separated-- they're separate subtrees separated by this conjunction, by this "but" relation. And so it seems that "I" goes with "love," which is looking good for the movie. So we have "loved it." And so then we have to move beyond the rules of syntax to the rules of discourse-- what could "it" mean? And there's a bunch of rules of discourse. If you say "it," you're probably referring to the most recent salient thing that matches, and it is probably nonsentient, right? And so in this case, it would be "movie." So then linguistic theory-- it helped NLP reverse-engineer language. So you had something like input, get syntax from it, you get semantics from the syntax. So you would take the tree, and then from the tree, you can build up these little functions of how things relate to each other. And then you'd go to discourse, right? So what refers to what, what nouns are being talked about, what things are being talked about. And then whatever else was interesting for your specific use case. Now we don't need all that, right? Language models just seem to catch on to a lot of these things. So this whole thing that I did with the tree-- ChatGPT knows this. And it can do much harder things than this. This isn't even slightly prompt engineered. I just woke up one morning, like, got to do the rest of the lecture, going to put that into ChatGPT, and this is exactly what I got-- well, I guess I got a bit of moralizing, but it immediately just told me who likes it, who doesn't like it, and why I'm doing something slightly wrong, which is how it ends everything, right? And so NLP systems definitely used to-- this is where we were-- work in this kind of structured discrete way. But now NLP works better than it ever has before, and we're not constraining our systems to know any syntax, right? So what about structure in modern language models?
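Before getting to that, just to make the old pipeline step concrete: here is a minimal sketch of pulling a dependency parse out of a sentence in code. It assumes spaCy and its small English model, which is one convenient choice of parser, not necessarily what was on the slide.

    import spacy

    # Assumes: pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("My uncultured roommate hated this movie, but I absolutely loved it.")

    # Each token points to its syntactic head with a dependency label --
    # roughly the tree structure described above.
    for token in doc:
        print(f"{token.text:12s} --{token.dep_:8s}--> {token.head.text}")

    # The pipeline-era system would then read facts off the tree,
    # for example which pronoun is the subject of which verb.
    for token in doc:
        if token.dep_ == "nsubj":
            print(token.text, "is the subject of", token.head.text)

Something like this is what "get the syntax, then work out who relates to what" looked like in practice, before end-to-end models absorbed most of it.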
And so the question that a lot of analysis work has been focused on-- and I think we'll have more analysis lectures later also, so this is going to be looked at in more detail-- is how could you get from training data, which is just kind of a loose set of things that have appeared on the internet-- or sometimes not on the internet, rarely-- to rules about language, right, to the idea that there's this structure underlying language that we all seem to know, even though we do just talk in streams of things that, then, sometimes appear on the internet. And one way to think about this is testing how novel words in old structures work, right? So humans can easily integrate new words into our old syntactic structures. I had lived in Greece for a few years for middle school, not speaking English too much, and I came back for high school, and this is in Berkeley in the East Bay, and there were literally 10 new vocabulary words I had never heard of before, and they all had a very similar role to "dank" or "sick," but they were the ones that were being tested out and did not pass. And within one day, I immediately knew how to use all of them. It was not a hard thing for me. I didn't have to get a bunch of training data about how to use all these words. And so this is one way of arguing for the thing I was arguing for the whole first part of the lecture, that syntactic structures exist independently of the words that they have appeared with, right? A famous example of this is Lewis Carroll's poem "Jabberwocky"-- I was going to quote from it, but I can't actually see it there-- right, where he just made up a bunch of new words, and he just made this poem, which is all new, open-class words-- an open-class word is what we call kind of like nouns, verbs, adjectives, adverbs, classes of words that we add new things to all the time, while things like conjunctions, like "and" or "but," are closed-class. There's been a new conjunction added recently-- I just remembered after I said that. Does anyone know of a conjunction that's been added in the past 30 years or something, maybe 40? Spoken "slash." Now we say "slash," and it kind of has a meaning that's not "and" or "but" or "or," but it's just a new one. But it's closed class, generally. This happens rarely. Anyway, and so you have "'Twas brillig and the slithy toves did gyre and gimble in the wabe." Toves is a noun. We all know that. We've never heard it before. And in fact, one word from Jabberwocky, "chortle," actually entered the English vocabulary. It kind of means a little chuckle that's maybe slightly suppressed or something, right? So it shows that there was literally one example of this word, and then people picked it up and started using it as if it was a real word, right? And so one way of asking do language models have structure is, do they have this ability? And I was thinking it would be cool to go over a benchmark about this, the kind of thing people make so you can test your language model to see if it does this. Are there any questions before I go into this new benchmark? The COGS benchmark is the Compositional Generalization based on Semantic interpretation benchmark, or something like that. It kind of checks if language models can do new word-structure combinations, right? So the task at hand is semantic interpretation.
I kind of glossed over it before, but if you have a sentence, like "the girl saw the hedgehog," you have this idea that "saw" is a function that takes in two arguments, and it outputs that the first one saw the second one. This is one way of thinking about semantics. There's many more, as we'll see, but this is one. And so you can make a kind of lambda expression about what the sentence means. And to get that, you have to use the tree to get it correct. But anyway, the specific mechanism is not very important, but it's like the semantic interpretation, where you take "the girl saw the hedgehog," and you output this function where "see" takes two arguments. First is the girl. Second is the hedgehog. And then the training and the test set-- they have words and structures appearing in different roles, right? So, for example, you have things like: the hedgehog is always an object in the training data, when you're fine-tuning to do this task, but then in the test data, it's a subject. So can you use this word that you've seen in a new place? Because in English, anything that can be an object can be a subject. There's some subtlety around how some things are more likely to be subjects. And then, similarly, you have something like "the cat on the mat"-- so this idea that a noun can go with a prepositional phrase. But that always, always appears in object position: Emma saw the cat on the mat. And then can you do something like "the cat on the mat saw Mary"? So it's like, move that kind of structure to subject position, which is something that, in English, we can do, right? Any type of noun phrase that can be in object position can be in subject position. And so that's the COGS benchmark. Large language models haven't aced this yet. I wrote this, and I was looking over this slide, and I was like, well, I haven't checked the largest ones. People never do check the largest ones because it's really hard to do this kind of more analysis work. And things move so fast. The really large ones-- but you know, T5, 3 billion-- 3 billion is a large number. It's maybe not a large language model anymore. But they don't ace this. They're getting like 80%. When they don't have to do the structural generalization, when they can just do a test set in which things appear in the same role as they did in the training set, they get 100% easily. It's not a very hard task. And so this is still pretty good, and it's probably like-- if a human had never ever seen something in subject position, I'm not sure that it would be like 100% as easy as if they had. We don't want to fully idealize how things work in humans, right? So similarly, you can take literal Jabberwocky sentences-- this builds on some work that John did, which I think is going to come up later, though maybe I'm wrong on that assumption, right? We can kind of test the model's embedding space. So if we go high up in the layers and test the embedding space, we can test it to see if it encodes structural information. And so we can test, OK, is there a rough representation of syntactic tree relations in this latent space? And then a recent paper asked, does this work when we introduce new words? So if we take Jabberwocky-style sentences and then ask, can the model find the trees in these in its latent space, does it encode them? And the answer is it's kind of worse. In this graph, the hatched bars, the ones on the right, are the Jabberwocky sentences.
And the clear ones, or the not hatched ones, I guess, are the normal sentences. And we see performance is worse. So this is unlabeled attachment score on the y-axis. It performs probably worse than humans. It's easier to read a normal poem than to read Jabberwocky. So the extent to which this is damning or something is, I think, very, very small. I think the paper-- I have linked it there-- is maybe a bit more sure about this being a big deal than it is. But yeah, it does show that this kind of process isn't trivial. Yeah? What types of words apply for Jabberwocky substitutions? Oh, so this is something called phonotactics. This is probably around what you're asking: you want a word which sounds like it could be in English, like "povicated," right? It sounds like it could be English. A classic example is a word like that, which looks like it could be an English word. "Bnick" can't be. We can't start a word with B-N. And that's not an impossibility of the mouth, right? It's similar for things like pterodactyl, pneumonia. These come from Greek words like [GREEK]. So I can say them. I'm a Greek native speaker-- P-N and P-T, I can put them at the beginning of a syllable. But in English, they don't go. And so if you follow these rules, and kind of also add the correct suffixes and stuff-- so, like, "povicated" we know is past tense and stuff-- then you can make words that don't exist but could exist. And so they don't throw people off. And this is important for the tokenizers, right? You don't want to do something totally wacky to test the models. So when you generate this test set, like with these substitutions, are these words generated by a computer, or is there a human coming up with words that sound like English but aren't? There are some databases of these that people have come up with. And I think they get theirs from some existing list, because if you have 200, that's enough to run this test. I mean, I think that the phonotactic rules of English can actually be laid out kind of simply. P-T-- you can't really have two stops together-- like puh, tuh, they're both stops. You can't really put them together. So you can probably make like a short program, or a long-ish program, but not a very super complex one, to make good Jabberwocky words in English. Yeah? So I'm wondering how the model would tokenize these Jabberwocky sentences. Would it not just map all these words, like povicated, to the unknown token? So these are largely models that have wordpiece tokenizers, right? So if they don't know a word, they're like, OK, what's the largest bit of it that I know? And then that's a subtoken. And this is how most models work now. Back in the day-- and this is back in the day meaning until maybe six or seven years ago-- it was very normal to have unk tokens, unknown tokens. But now, generally, there is no such thing as an unknown. At a bare minimum, you have the alphabet in your vocabulary. So at a bare minimum, you're splitting everything up into letter-by-letter tokens, character-by-character tokens. But if you're not, then yeah, it should find something-- and this is why the phonotactic stuff is important here, so that it tokenizes, hopefully, into slightly bigger chunks that have some meaning. And because of how attention works and how contextualization works, even if you have a little bit of a word, you can give the correct kind of attention to it once it figures out what's going on a few layers in, for a real unknown word.
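A quick way to see what a subword tokenizer actually does with made-up words is a check like the following. This assumes the Hugging Face transformers library and the bert-base-uncased vocabulary, which are just convenient stand-ins for the models being discussed.

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")

    # A made-up but phonotactically plausible word is split into known
    # subword pieces instead of becoming an unknown token.
    print(tok.tokenize("povicated"))
    print(tok.tokenize("the slithy toves did gyre and gimble in the wabe"))
    # The exact splits depend on the vocabulary, but nothing maps to [UNK].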
And for a fake unknown word like these, the same thing happens, as long as it follows the phonotactics. Cool. I went back, but I want to go forward. Cool. Any more questions about anything? Yeah? A few slides back, there were the 80% scores where you were saying this isn't a solved problem yet. I'm just trying to get a sense of what 80% means in that context. Is it 80% exact match? Yeah, it was exact match. I think the relevant comparison is that when you didn't have this kind of structural difference, where something which was never an object was then an object, the accuracy on that test set is 100%, easily. And so there was no good graph which showed these next to each other. They kind of mentioned it. And so I think that's the relevant piece of information, that somehow this swapping around of roles kind of slightly trips it up. That being said, you're right. Exact match of semantic parses is kind of a hard metric. And none of this stuff-- and I think this is important-- none of this stuff is damning. None of this stuff says they do not have the kind of rules humans have. This was like, well, there's a bit of confusion. There's a bit of confusion in humans too. It actually gets quite subtle with humans. And I'm going to go into that in the next section too. Overall, I think the results are surprisingly not damning, I would say. There's clearly maybe not the fully programmed, discrete kind of rules, but yeah. I'd say cool. Another thing we could do is test how syntactic structure maps onto meaning and role. And so as we said before, in English, the syntax of word order gives us the who-did-what-to-whom meaning. And so if we have, for any combination of A, verb, and B, something like "A verb B," A is the doer and B is the patient. And so we ask, is this kind of relationship strictly represented in English language models as it is in the English language? And so what we could do is take a bunch of things which appear in subject position, a bunch of things which appear in object position, take their latent space representations, and kind of learn a little classifier. This should be a pretty clear distinction in latent space. In any good model-- and these models are good-- there should be a pretty clear distinction, so we use just a linear classifier to separate them. And the more you are on the one side, the more subject you are. And the more you are on the other side, the more object you are, right? And so then, we can test: does the model know the difference between when something is a subject and when something is an object? Does it know that you're going to go on opposite sides of this dividing line, even if everything else stays the same and all the clues point to something else? So does syntax map onto role in this way? You might think, well, the model could just check if the word is second or fifth. We did try to control for position stuff in various ways, and so, hopefully, we claim we're showing the syntax-to-role mapping. And what we see is that it does. So if we kind of graph the distance from that dividing line on the y-axis, we see the original subjects, when we swap them and put them in object position, do diverge as we go up layers in that dimension. And we tried this-- again, all these analysis experiments have been on kind of small models, with some BERT, with some GPT-2, and with a bigger version of GPT-2. And it worked out. But none of this is the big, big stuff.
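Here is a minimal sketch of that kind of probe, just to make the recipe concrete. It assumes the Hugging Face transformers library, bert-base-uncased, and scikit-learn; the sentences, the layer choice, and the helper function are made up for illustration, and a real experiment would control for position and use far more data.

    import torch
    from transformers import AutoTokenizer, AutoModel
    from sklearn.linear_model import LogisticRegression

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def word_vector(sentence, word, layer=8):
        # Hidden state of the first subword piece of `word` at a chosen layer.
        enc = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc, output_hidden_states=True).hidden_states[layer][0]
        first_piece = tok.convert_tokens_to_ids(tok.tokenize(word)[0])
        idx = enc.input_ids[0].tolist().index(first_piece)
        return hidden[idx].numpy()

    # Toy data: the same nouns appearing as subjects and as objects.
    subjects = [("The dog chased the ball.", "dog"), ("The girl saw the hedgehog.", "girl")]
    objects = [("The ball hit the dog.", "dog"), ("The hedgehog saw the girl.", "girl")]

    X = [word_vector(s, w) for s, w in subjects + objects]
    y = [1] * len(subjects) + [0] * len(objects)

    probe = LogisticRegression(max_iter=1000).fit(X, y)   # the linear "dividing line"
    print(probe.decision_function(X))   # distance from the line ~ how subject-like a word looks

The distance from that decision boundary is the quantity being graphed on the y-axis in the slides, layer by layer.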
And I think now we're starting to see more analysis on the big, big stuff. And I think it's really cool. So then where are we with structure and language models? We know that language models are not engineered around discrete linguistic rules, but the pretraining process isn't just a bunch of surface-level memorization. We have seen this. There is some discrete rule-based system coming out of this. Maybe it's not the perfect kind of thing you would write down in a syntax class, but there is some syntactic knowledge. And it's complicated in various ways. And humans are also complicated, and that's what we're going to get to next. There's no ground truth for how language works yet. If we knew how to fully describe English with a bunch of good, discrete rules, we would just make an old pipeline system, and it would be amazing. If we could take the Cambridge Grammar of the English Language, but it was truly complete-- if we just knew how English worked-- we would do that. And so we're working in this case where there's really no ground truth. Any questions about this before I move beyond syntactic structure? So moving beyond this very structure-based idea of language-- I think it's very cool to learn about structure in this way, and at least how I was taught linguistics, a lot of it, the first many semesters, was this kind of stuff. But I think there's so much more. And very importantly, I think that meaning plays a role in linguistic structure. There's a lot of rich information in words that affects the final way that the syntax works and, of course, what you end up meaning and what the words influence each other to mean. And so the semantics of words, the meaning, is always playing a role in forming and applying the rules of language, right? And so, for example, a classic example is verbs. They have kind of selectional restrictions. "Ate" can take kind of any food, and it can also take nothing. "I ate" means that I've eaten. "Devoured"-- the word "devoured" actually can't be used intransitively, right? It sounds weird. You need to devour something. There's verbs like "elapsed" that only take a very certain type of noun. Elapsed only takes nouns that refer to time. So maybe "harvest" can refer to time, "moon" can refer to time-- but it cannot take a noun like "trees." There's even verbs that only ever take one specific noun as their argument, right? This classic example-- my advisor Dan Jurafsky, he told me this one to put it in. And what's cool is that that's how we train models these days. If you see this diagram I screenshotted from John's transformers lecture, we start with a rich semantic input. We start with these embeddings that are on the order of 1,000 dimensions, depending on the model-- and think of how much information you can express on a plane, in two dimensions; the kind of richness that you can fit into 1,000 dimensions is huge. And we start with these word embeddings and then move on to the attention block and everything. And so I'm just going to go through some examples of the ways that meaning plays a role in forming syntax. Hopefully, it's fun, a tour through the cool things that happen in language. So as we said, anything can be an object. Anything can be a subject. We want to be able to say anything. Language can express anything. This is a basic part of language. But many languages have a special syntactic way of dealing with this, right?
So they want to tell you if there's an object that you wouldn't expect, like in this case, I want to tell you, "hey, watch out, be careful, we're dealing with a weird object here." So this is in the syntax of languages-- if you're a native speaker or you've learned Spanish, you know this "a" constraint. So if something is an object, but it's inanimate, you don't need the "a" because you're like, yeah, I found a problem. But then if you're putting something animate in the object position, you need to mark it. You need be like, hey, watch out, there's an object here. And this is a rule of the grammar. If you don't do this, it's wrong. And they tell you this in Spanish class. Similarly, Hindi has a kind of a more subtle one, but I think it's cool. So if you put an object that is definite, you have to mark it with a little-- this is an object marker, a little accusative marker. And you might ask, OK, I understand why animacy is a big deal, maybe animate things more often do things and have things done to them. But why definiteness, right? Why would you need this little call marker to say "the goat" versus "a goat" and it's like, well, if something is definite, it means that it's in-- we've probably been talking about it or we're all thinking about it. For example, oh, I ate the apple. This means that either we had one Apple left and I ate it, or it was like really rotten or something and you can't believe I ate it, or something like that. And so then things that we're already talking about, they're probably more likely to be subjects, right? If I was like, oh, Rosa-- yeah, if you're like, Rosa did this and Rosa did that and Rosa that, and then Leon kissed Rosa, you'd be like, no, you probably want to be like Rosa kissed Leon. You probably want to put-- it's not strict, but if you're talking about something, it's probably going to be the subject of the next sentence. So then if it's "the goat," you have to put a little accusative marker on it. So this is how the marking in the language works. And it's kind of all influenced by this interesting semantic relationship. And language models are also aware of these gradations. In a similar classifying subjects and objects paper that we wrote, we see that language models also have these gradations. So again, if you map the probability of being in that classifier on the y-axis, we see that there's a high accuracy. This is over many languages. And all of them, on the left, we have the subjects that are classified above. On the right, we have the objects that are classified below. But animacy kind of influences this grammatical distinction, right? So if you're animate and a subject, you're very sure. If you're inanimate and an object, you're very sure. Anything else, you're kind of close to 50. And so this kind of relation where the meaning plays into the structure is reflected in language models. And that's not bad. It's good because it's how humans are. Or we should temper our expectations, maybe, away from the fully, fully syntactic things that we're talking about. Another kind of cool example of how meaning can influence-- we can say what we can say. I've said from the beginning, many times, that all combinations of structures and words are possible, but that's not strictly true. So in many cases, something is too outlandish. We often do just assume the more plausible interpretation. 
So there's these psycholinguistics experiments, where they test these kind of giving verbs, like "the mother gave the daughter the candle," and you can actually switch that around-- this is called the dative alternation. So you can switch it around to "the mother gave the candle to the daughter." And then, if you switch around who's actually being given, right-- so if you're actually saying "the mother gave the candle the daughter"-- people don't interpret that in its literal sense. They usually interpret it as the mother gave the daughter the candle. And of course, outlandish meanings-- they're never impossible to express, because nothing is. And so you can spell it out. You could be like, well, the mother, she picked up her daughter, and she handed her to the candle, who is sentient. And then you could say this, but you can't do it simply with the "give" word. People tend to interpret it the other way. And so taking these less plausible things and marking them more prominently is a pervasive feature that we see across languages in all these ways. And all these ways are also very embedded in the grammar, as we saw earlier in Spanish and Hindi. So another way in which we see meaning kind of play into and break apart this fully compositional syntax picture is that meaning can't always be composed from individual words. So language is full of idioms. When we talk about idioms, you might think, OK, there's maybe 20 of them, things my grandfather would say, things about chickens and donkeys. In Greece, they're all donkeys. But we're actually constantly using constructions that are idiomatic in their own little way-- that we couldn't actually get from composing the words-- things like "I wouldn't put it past him." "He's getting to me these days." "That won't go down well with the boss." There's so many of these. It's kind of a basic part of communication to use these little canned idiomatic phrases. And linguists love saying that, oh, any string of words you say is totally novel. And it's probably true. I've been speaking for, like, 50 minutes. And probably no one has said this exact thing ever before. I just use the compositional rules of English to make it. But actually, most of my real utterances are like, "oh yeah, no, totally," right, something like that, which people actually say all the time. We have these little canned things that we love reusing, and we reuse them so much that they stop making sense if you break them apart into individual words. And we even have these constructions that can take arguments but aren't really canned words-- they're a canned way of saying something that doesn't really work if you build it up from the syntax. It's like, "oh, he won't eat shrimp, let alone oysters." And what does that mean? Well, it means I'm defining some axis of more-ness-- in this case, probably how shellfish-y and weird something is-- and so it's like, well, shrimp is less weird, so oyster is more. And if I say, oh, he won't eat shrimp, let alone beef, the axis is, like, vegetarianism. So it's this construction that does a complex thing, where you're saying he won't do one thing, let alone the one that's worse along that dimension. It's like, oh, "she slept the afternoon away." "He knitted the night away." "They drank the night away."
This "the night away" thing doesn't actually work if you try to build it up-- you can't really tell what it means otherwise. This "the X-er, the Y-er" construction-- the bigger they are, the more expensive they are-- man, I forgot how it goes. The bigger they come, the harder they fall. So it doesn't even have to be a-- and there's "that travesty of a theory," that "of a" construction. There's so many of these, so much of how we speak. If you actually try to do the tree parse, the semantic parse up from it, it won't really make sense. And so there's been this-- this is more recently coming to light, and I've been really excited by it-- testing constructions in large language models. There was just, this year, a paper by Kyle Mahowald, who was a postdoc here, testing the "a beautiful five days in Austin" construction. So this is the "a + adjective + numeral + noun" construction, which shouldn't really work, right, because you have "a" with "days." And anything kind of similar to it-- like "a five beautiful days"-- that doesn't work. So somehow, this specific construction is grammatically correct to us, but you can't say "a five days in Austin," and you can't say "a five beautiful days in Austin." You have to have it exactly like this. And it showed GPT-3 actually largely concurs with humans on these things. So on the left here, the gray bars, we have the things that are acceptable to humans. So those are like "a beautiful five days in Austin" and "five beautiful days in Austin." Those are both acceptable to humans. They do this over many, many instances of this construction, not just Austin, obviously. And we see GPT-3 accepts these. Those are the gray bars. And humans also accept these. Those are the green triangles. And for every other variant, the human triangles are very low. And GPT-3 is lower too, but does get tricked by some things. So it seems to have this knowledge of this construction, but not as starkly as humans do, right? So especially if you see that third one over there, the "a five beautiful days" one, humans don't accept it as much. It's funny to me. It sounds almost better than the rest of them. But I guess these green triangles were computed very robustly. So I'm an outlier. And GPT-3 thinks those are better than maybe humans do. But there is this difference. There's a significant difference between the gray bars and the orange bars. And then, similarly, some people tested the "the X-er, the Y-er" construction. And so they took examples of sentences that were the "the X-er, the Y-er" construction, and then they took example sentences which had an -er followed by an -er but weren't actually it. It's like, oh, the older guys help out the younger guys, right? So that's not a "the X-er, the Y-er" construction. And then they asked, right, if we mark the ones that are as positive and the ones that aren't as negative, does the latent space of models encode this difference-- is all of this construction kind of clustered together, in a way? And they find that it does. And then the last thing I want to talk about in this semantic space, after constructions and all that, is that the meaning of words is actually very subtle and sensitive, and it's influenced by context in all these crazy ways. And Erica Peterson and Chris Potts from the linguistics department here did this great investigation on the verb break. And it's like, break can have all these meanings, right? We think it's like, yeah, break is a word, and words are things like table and dog and break that have one sense.
But actually, they're not even senses that you can enumerate, like river bank and financial bank, and just like, yeah, break the horse means tame, or break a $10 bill, it means break it into smaller bits of money. And there are just so many ways-- like break free and break even-- there are just so many ways in which break-- its meaning is just so subtle and actually this is kind of true for every word or many words. Maybe like table and dog-- yeah, there's a set of all things that are tables or dogs. And it kind of describes that set. There's maybe some more philosophical way of going about it. So like pocket-- it's a pocket, but then you can pocket something. Then it kind of means steal, in many cases, doesn't just mean put something in your pocket literally. There's all these ways in which the meaning of words is subtly influenced by everything around it. And what they do is that-- don't worry about what's actually going on here, but they've kind of mapped each sense, like a color, and then when you start off in layer one, they're all-- I think this is just by position embedding, right? You start off in layer 1, and it's just like, I think that's what it is. And then if you take all the words, pass them through a big model, like RoBERTa large, right, then they're kind of all jumbled up, right, because they're all just break. They're just in different positions. And then, by the end, they've all kind of split up. All the colors are kind of clustering together. Each color is one of these meanings, right? And so they cluster together, and these-- is it constructions again, or is it just the way in which they kind of isolate these really subtle aspects of meaning? So then I think a big question in NLP is how do you strike the balance between syntax and the ways that meaning influences things, right? And I pulled out this quote from a book by Joan Bybee, which I enjoy, and I think it kind of brings to light a question that we should be asking in NLP. This book is just a linguistics book. It's not about NLP at all. But "while language is full of both broad generalizations and item-specific properties, linguists have been dazzled by the quest for general patterns." That was the first part of this talk. And "of course, the abstract structures and categories of language are fascinating. But I would submit--" or she would submit-- "that what is even more fascinating is the way that these general structures arise from and interact with the more specific items of language, producing a highly conventional set of general and specific structures that allow the expression of both conventional and novel ideas." It's kind of this middle ground between abstraction and specificity that we would want-- that humans probably exhibit, that we would want our models to exhibit. Yeah? I was wondering if you could go back one slide and just unpack this diagram a little more because I'm fairly new to NLP. I've never seen a diagram like this. Oh, sorry. What does this mean? How should I interpret this? So this is all like-- if you take the way that words are as you're passing them through a transformer, through many layers-- I just wanted to be like, look at how the colors cluster-- and you pass them through a transformer, many layers, at any one point in that transformer, you could say, OK, how are the words organized now? And you say, well, I'm going to project that to two dimensions from 1,000. And that's maybe a good idea, maybe a bad idea. 
I think there's a lot of-- but I wouldn't be able to show them here if they were 1,000. So let's assume that it's an OK thing to be doing. Then so this is what they've done for layer 1 and then for layer 24. And so we could see that they start off where the colors are totally jumbled, and they're probably-- before layer one, you add in the position embeddings. So I think that's what all those clusters are. So it's kind of clustering-- because you don't have anything to go off of. This is "break," and it's in position 5. It's like, OK, I guess I'll cluster with all the "breaks" in position 5. But then, as you go up the model and, oh, this meaning is being formed, you see these senses come out in how it organizes things. So all these "breaks" become-- they're very specific. They're very kind of subtle versions of "breaks." This work, I think it's different from a lot of NLP work because it has a lot of labor put into this labeling. This is something because the person who did this is a linguistics student. And if you go through corpus and label every "break" by which one of these it means, it's a lot of work. And so I think it's the kind of thing that you wouldn't be able to show otherwise, so it's often not really shown. Language is characterized by the fact that it is an amazingly abstract system. I started off raving about that, and we want our models to capture that. That's why we do all these compositionality syntax tests. But meaning is so rich and multifaceted. So high-dimensional spaces are much better at capturing these subtleties. We started off talking about word embeddings in this class, right? High-dimensional spaces are so much better at this than any rules that we would come up with, being like, OK, maybe we could have "break," subscript, "break money." And we're going to put that into our system. And so where do deep learning models-- where do they stand now right between surface-level memorization and abstraction? And this is what a lot of analysis and interpretability work is trying to understand. And I think that what's important to keep in mind when we're reading and kind of doing this analysis and interpretability work is that this is not even a solved question for humans, right? We don't know exactly where humans stand between having an abstract grammar and having these very construction-specific and meaning-specific ways that things work by. Cool, any questions overall on the importance of semantics and the richness of human language? Yeah? So this is probably a question from quite a bit before, but you were showing a chart from your research where the model was really, really well able to distinguish inanimate and animate given its knowledge of subject or object. I was just trying to interpret that graph and understand what the sort of links between words. --switch back. Sorry, I know it's a long way back. No, it's not that-- I think it's here, right? So this is similar to the other graph, where what it's trying to distinguish is subject from object. But we've just split the test set into these four ways, where we split into subjects inanimate, subjects animate-- so we just split the test set. And so what the two panels in the x-axis are showing are these different splits. So things that are subject-- and basically, the ground truth, the things on the left should be above 50. And things on the right should be below 50. And that's what's happening. But if we further split it by animate and inanimate, we see that there's this influence of animacy on the probability. 
Sorry, I rushed over these graphs. I wanted to give a taste of things that happen. But yeah, it's good to also understand fully what's going on. Thank you. Cool. Yeah? So this is also from a while back. You don't have to go to the slide. So you were talking about acceptability. So I'm assuming, for judging acceptability in humans, you just ask that person. For GPT-3, how do you determine if it finds a sentence acceptable? I think you can just take logits. I think that's what Kyle Mahowald did in this paper, right? You could just take the probabilities out, put it at the end. For kind of GPT-3, it's going left to right. I think there's other things that people do sometimes. But yeah, especially for these models, I don't have too much access apart from generation and the probability of each generation. I think that you could-- I think that you might want to do that. And you don't want to multiply every logit together because then, if you're multiplying many probabilities, longer sentences become very unlikely, which is not true, exactly, for humans, or it's not true in that way for humans. So I think there's things you should do, ways to control it and stuff when you're running an experiment like this. OK, so moving on to multilinguality in NLP. So far, we've been talking about English, although I haven't been saying it explicitly all the time, but most things I've said, apart from some-- maybe some differential object marking examples-- they've been kind of about English, about English models. But there are so many languages. There's over 7,000 languages in the world-- well, maybe not over. There's around 7,000 languages in the world. It's hard to define what a language is. It's kind of difficult. Even in the case of English, where we have things Scots, the language spoken in Scotland. Is that English? Something like Jamaican English-- maybe that's a different language. There's the different structures, but it's still like clearly much more related than anything else, than German or something. And so how do you make a multilingual model? Well, so far, a big approach-- you take a bunch of languages. This is all of them. And maybe you're not going to take all of them. Maybe you're going to take 100 or something. And you just funnel them into just one transformer language model. And there's maybe things you could do, like upsampling something they don't have too much data of or downsampling something they have too much data of. But this is the general approach. What if we just make one transformer language model, something like a BERT. It's usually a BERT-type model. It's hard to get good generation for too many languages. How about you get just one transformer language model for all of these languages? And so what's cool about this is that multilingual language models, they let us share parameters between high resource languages and low resource languages. There's a lot of languages in the world-- really just most languages in the world which you could not train even like a BERT-sized model for. There's just not enough data. And there's a lot of work being done on this. And one way to do this is to say, well, pretraining and transfer learning-- they brought us so much unexpected success. And we get this great linguistic capability and generality if we pretrain something in English that we weren't asking for. So will the self-supervised learning paradigm-- can it deliver between languages? 
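Circling back for a second to the question from a moment ago about getting an acceptability score out of a model: one simple, common recipe is the average per-token log-probability, which avoids penalizing longer sentences the way multiplying raw probabilities would. Here's a minimal sketch, using GPT-2 through the Hugging Face transformers library purely as a stand-in-- the setups in the actual papers differ in the details.

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2")

    def acceptability(sentence):
        ids = tok(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = lm(ids).logits
        # Log-probability of each actual next token, averaged over the sentence.
        logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
        token_lp = logprobs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
        return token_lp.mean().item()

    print(acceptability("a beautiful five days in Austin"))
    print(acceptability("a five beautiful days in Austin"))  # typically lower, though not guaranteed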
So, back to cross-lingual transfer: maybe I can get a lot of the linguistic knowledge, the more general stuff, from just all the high-resource languages. And I can apply it to the low-resource languages. A bilingual person doesn't have two totally separate parts of their self that have learned the languages. There's probably some sharing, some way that things are in the same space. And the linguistics is broadly the same. And so we have this attempt to bring NLP to some still very small subset of the 7,000 languages in the world. We can look at it through two lenses, right? On the one hand, languages are remarkably diverse. We'll go over some of the cool ways that languages in the world vary. And so does multilingual NLP capture the specific differences of different languages? On the other hand, languages are similar to each other in many ways. And so does multilingual NLP capture the parallel structure between languages? So just to go over some ways of really understanding how diverse languages can be-- and this is a quote from a book-- "in around a quarter of the world's languages, every statement"-- like every time you use a verb-- "must specify the type of source on which it is based." So this is a bit like how we have tense in English, where kind of everything you say is either in the past or the present or the future tense. And so an example in Tariana-- these are, again, from the book. This is not a language I know, right? But you have this marker in bold at the end. And so when you say something like "José has played football," if you put the ka marker, that means that we saw it. It's kind of like the visual evidential marker, right? And there's a nonvisual marker that kind of means we heard it, right? So for a statement, you could mark that we heard it, or that we infer it from visual evidence. So if it's like, oh, his cleats are gone and he is also gone, and we see people going to play football, or we see people coming back, I guess, from playing football-- because it's in the past-- so we can infer it, and so you can put this. Or if he plays football every Saturday, and it's Saturday, you'd use a different marker. Or if someone has told you, if it's hearsay, you would use a different marker. So this is a part of the grammar that, to me at least-- I don't speak any language that has this-- seems very cool and different from anything I would ever think would be a part of the grammar, especially a compulsory part of the grammar. But it is. And you can map this out-- I wanted to include some maps from WALS, the World Atlas of Language Structures, because that's always so fun. You can map out all the languages, where I only speak white-dot languages, which have, like, no grammatical evidentials. If you want to say whether you heard something or saw it, you have to say it in words. But there's many languages-- especially in the Americas. Tariana is, I think, a Brazilian language from up by the border. While we're looking at language typology maps-- these language organization and categorization maps-- the classic one, right, is, again, the subject, object, and verb order. So as we said, English has SVO order, but there's just so many orders that-- almost all the possible ones are attested. Some languages have no dominant order, like Greek. So a language that I speak natively has no dominant order. You would move things around for emphasis or whatever. And here, we're seeing some diversity-- we're seeing typology. We're also seeing some tendencies-- some orders are just so much more common than others.
And this is, again, something which people talk about so much. It's a very big part. Yeah, it's like a huge part of linguistic-- why are some more common? Why are some others? Is it a basic fact of language, something which happened? Is this just the fact of how discourse works, maybe, that it's more preferred for many people to say something? And there's a lot of opinions on this. Another way that languages vary is the number of morphemes they have per word. Some languages are-- Vietnamese, classically, it's very isolating. Each kind of thing you want to express, like tense or something, is going to be in a different word. In English, we actually combine tenses. We have things like -able, like throwable or something. And then in some languages, they're just really-- so much stuff is expressed in morphemes. And so you can have languages, especially in Alaska and Canada, a lot of languages there and Greenland, where you have-- these are all one language family-- you can have whole sentences expressed with just things that get tacked on to the verb. So you can have things like the object and the-- or, I guess, in this case, you start with the object, and you can have the verb and whether it's happening or not happening and who said it or whether it's said in the future and all that, just all put in these, quote unquote, sentence words. It's like a very different way of a language working than English works at all. Yeah, you have a question? Yeah, this is from a few slides ago, the one with the map, I just want to know what these dots mean because, in the US, the top right is gray, like in the Northeast, but in the Pacific Northwest, it's yellow. Is that different dialects for, say, American English. Oh, no, these are all Indigenous languages. Oh, I see. Yeah, so English is just this one dot in here spread amongst all the Cornish and Irish and stuff. Yeah, so English is like in Great Britain. And that's why like all this evidential stuff is happening in the Americas because there's a lot of-- very often, the Indigenous languages of the Americas, the classic very [INAUDIBLE] marking ones, which are the pink ones. Yeah? You said that normally, we use a BERT-style model for multilingual models because it's difficult for natural language generation across languages. I guess, intuitively, that makes sense because of the subtleties and nuances between different languages when you're producing it. But is there a reason that-- a particular reason that's been so much harder to make developments on? I think it's just hard. I think the good generation is just harder. To get something like GPT-3 or something-- it means a lot of data. I think there are-- can I think of any? Are there any new shards in CODA-- yeah, I can't really think of any encoder-decoder, as you said, big multilingual models. Of course GPT-3 has this thing where if you're like, how do you say this in French, it'll be like, you say it like this. So if you've seen all of the data, it's going to include a lot of languages. But this multilingual model, where you'd be-- be as good as GPT-3 but in this other language, I think you need a lot more data to get that kind of coherence as opposed to something if you do text infilling or something, which is how BERT-style models are. Then you get very good-- even if the text infilling performance isn't great for every language, you can actually get very good embeddings to work with for a lot of those languages. Cool. 
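One concrete way to poke at that kind of sharing is with a multilingual BERT-style model. Here's a minimal sketch, assuming the Hugging Face transformers library and xlm-roberta-base purely as a stand-in for the models being described.

    from transformers import pipeline

    fill = pipeline("fill-mask", model="xlm-roberta-base")

    # One model, one set of parameters, many languages; XLM-R's mask token is "<mask>".
    for text in ["The capital of France is <mask>.",
                 "La capitale de la France est <mask>.",
                 "Η πρωτεύουσα της Γαλλίας είναι το <mask>."]:
        print([p["token_str"] for p in fill(text)[:3]])

Even when the text-infilling quality isn't great for a given lower-resource language, the embeddings you get out of a model like this can still be quite useful, which is the point being made above.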
Now for just one last language diversity thing-- I think this is interesting, the motion event stuff, because it's in languages that many of us know-- I'm going to talk about Spanish-- but it's something you might not have thought about, and then, once you see it, you're like, oh, that actually affects how everything works. So in English, the manner of motion is usually expressed on the verb. So you can say something like "the bottle floated into the cave." And so the fact that it's floating is on the verb, and the fact that it's going in is kind of on this satellite. Well, in Spanish, the direction of motion is usually expressed on the verb. Greek is like this too. I think most Indo-European languages are not like this. They're actually like English. So most languages of Europe, of North India, tend to not be like this. And so you would say, [SPANISH]. So the floating is not usually put on the main verb. And in English, you could actually say "the bottle entered the cave floating." It's just maybe not what you would say, right? And similarly, in Spanish, you can't say it the other way. These are called satellite-framed and verb-framed languages, and it really affects kind of how everything works. It's a division that's pretty well attested. Of course, it's not a full division. It's not this exclusive categorization. Chinese, I think, often has these structures where there are two verb slots, where you can have both a manner of motion and a direction of motion in those verb slots. Neither of them has to go off and play some different role. So there are all these ways in which languages are just different-- things that maybe we didn't even think could be in a language, or things that we do but don't realize. And sometimes, they're just so different in these subtle ways. And so going to the other angle: languages are so different, but they're also very alike. So there's this idea-- is there a universal grammar, some abstract structure that unites all languages? This is a huge question in linguistics. And the question is, can we define an abstraction where we can say all languages are some version of it. There are other ways of thinking about universals: all languages tend to be one way, or languages that tend to be one way also tend to be some other way. And there's a third way of thinking about universals-- languages all deal in similar types of relations, like subjects, objects, types of modifiers. The Universal Dependencies project was a way of saying maybe we can make dependencies for all languages in a way that doesn't shoehorn them into each other. And what was it called-- relational-something grammar-- it was also this idea that maybe one way to think about all languages together is the kind of relations they define. And ask me about the Chomsky and the Greenberg stuff, if you want, and how it relates to NLP. I think there's a lot to say there. It's slightly more difficult. So maybe it's easier to think of this third one in terms of NLP. And back to the subject-object-relation stuff: if we look at it across languages, we see that they're encoded in parallel, because those classifiers that we're training are nearly as accurate in other languages as they are in their own language, their own language being red and other languages being black. It's not like, wow, if I take a multilingual model and I train one classifier in one language, it's going to be so good at itself and so bad at everything else.
They're kind of interspersed. The red dots are clearly toward the top end, but they're interspersed. Yeah. And UD relations, universal dependencies, the dependency relations-- they're also encoded in parallel ways. This is work that John has done. Again, the main thing to take from this example is that the colors cluster together. So if you train a parser, or parse classification, on one language and transfer it to another, you see these clusters form for the other language. So it's these ideas of how things relate together, like a noun modifier, all that stuff. They do cluster together in these parallel ways across languages. And so language specificity is also important. I might skip over this. But it seems like maybe, sometimes, some languages are shoehorned into others in various ways. And maybe part of this is that data quality is very variable in multilingual corpora. So if you take all these multilingual corpora-- there was an audit of them, and for these various multilingual corpora, for 20% of languages, less than 50% of the data was correct, meaning half of it was often just links or just something random that someone thought might be that language, but it was not at all. And maybe we don't want too much parameter sharing. AfriBERTa is a recent BERT model trained only on African languages-- the idea being that maybe having too much high-resource data risks harming the low-resource languages-- and there's work here at Stanford being done in the same direction. Another recent cross-lingual model, XLM-V, came out, which is like, why should we be doing vocabulary sharing? Let's just have a big vocabulary where each language gets its own words. It's probably going to be better. And it is. It beats similar models with smaller, shared vocabularies, which are like, maybe "computer" is the same in English and French, so it should be shared. Maybe it's better to separate things out. It's hard to find this balance between-- I'm going to skip over this paper too. It's very cool, and there's a link there, so you should look at it. But yeah, we want language generality, but we also want to preserve diversity. And so how is multilingual NLP doing, especially with things like dialects? There are so many complex issues for multilingual NLP to be dealing with. How can deep learning work for low-resource languages? What are the ethics of working in NLP for low-resource languages? Who wants their language in big models? Who wants their language to be translated? These are all very important ethical issues in multilingual NLP. And so after looking at structure, at what's beyond structure, and at multilinguality in models-- I hope you can see that linguistics is a way of investigating what's going on in black-box models. The subtleties of linguistic analysis can help us understand what we want or expect from the models that we work with. And even though we're not reverse-engineering human language, linguistic insights-- I hope I've convinced you-- still have a place in understanding the models that we're working with, the models that we're dealing with, and in so many more ways beyond what we've discussed here, like language acquisition, language and vision, instructions and music, discourse, conversation, and communication, and so many other ways. Cool, thank you. If there are any more questions, you can come ask me. Time's up. Thank you.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_2023_PyTorch_Tutorial_Drew_Kaul.txt
SPEAKER: And so today I kind of just want to cover the fundamentals of PyTorch, really just see what are the similarities between PyTorch and NumPy and Python, which you guys are used to at this point, and see how we can build up a lot of the building blocks that we'll need in order to define more complex models. So specifically, we're going to talk today about tensors. What are tensor objects? How do we manipulate them? What is autograd, and how does PyTorch help us compute different gradients? And finally, how do we actually do optimization, and how do we write the training loop for our neural networks? And if we have time at the end, then we'll try and go through a bit of a demo to put everything together and see how everything comes together when you want to solve an actual NLP task. All right. So let's get started. So if you go to the course website, there is a notebook. And you can just make a copy of this Colab notebook and then just run the cells as we go. And so to start, today we're talking about PyTorch, like I said. It's a deep learning framework that really does two main things. One is it makes it very easy to author and manipulate tensors and make use of your GPU so that you can actually leverage a lot of that capability. And two is it makes the process of authoring neural networks much simpler. You can now use different building blocks, like linear layers and different loss functions, and compose them in different ways in order to author the types of models that you need for your specific use cases. And so PyTorch is one of the two main frameworks, along with TensorFlow. In this class, we'll focus on PyTorch, but they are quite similar. And so we'll start by importing torch, and we'll import the neural network module, which is torch.nn. And for this first part of the tutorial, I want to talk a bit about tensors. One thing that you guys are all familiar with now is NumPy arrays. And so pretty much you can think about tensors as the equivalent in PyTorch to NumPy arrays. They're essentially multi-dimensional arrays that you can manipulate in different ways. And you'll essentially use them to represent your data, to be able to actually manipulate it, and perform all the different matrix operations that underlie your neural network. And so in this case, for example, if we're thinking of an image, one way you can think about it in terms of a tensor is that it's a 256 x 256 tensor, where it has a width of 256 pixels and a height of 256 pixels. And for instance, if we have a batch of images, and those images contain three channels, like red, green, and blue, then we might have a four-dimensional tensor, which is the batch size by the number of channels by the width and the height. And so everything we're going to see today is all going to be represented as tensors, which you can just think of as multi-dimensional arrays. And so to kind of get some intuition about this, we're going to spend a little bit of time going through, essentially, lists of lists and how we can convert them into tensors and how we can manipulate them with different operations. So to start off with, we just have a simple list of lists that you're all familiar with. In this case, it's a 2x3 list. And now we want to create a tensor. And so the way we'll create this tensor is by calling torch.tensor and then essentially writing the same syntax that we had before: just write out the list of lists that represents that particular tensor. And so in this case, we get back a tensor object, which is the same shape and contains the same data.
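Here is a rough sketch of those first few notebook cells. The variable names are made up, and note that if you pass only Python ints, PyTorch will actually pick an integer dtype unless you ask for floats.

    import torch

    data = [[1, 2, 3],
            [4, 5, 6]]                            # a 2x3 list of lists

    x = torch.tensor(data)                        # integer input -> torch.int64 by default
    y = torch.tensor(data, dtype=torch.float32)   # explicitly request 32-bit floats
    z = torch.tensor([[1.0, 2, 3], [4, 5, 6]])    # one float makes the whole tensor float

    print(x.shape)                       # torch.Size([2, 3])
    print(x.dtype, y.dtype, z.dtype)     # torch.int64 torch.float32 torch.float32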
And so now the second thing with the tensor is that it contains a data type. So there are different data types. For instance, there are floating point numbers at different levels of precision that you can use. You can have integers. You can have different data types that actually populate your tensor. And by default, PyTorch will infer the data type from the values you pass in, but you can explicitly specify which data type your tensor is by passing in the dtype argument. And so we see here, even though we wrote in a bunch of integers, they have a decimal point, which indicates that they're floating point numbers. And so same thing here. We can create another tensor, in this case with data type float32. And in this third example, you see that we create another tensor. We don't actually specify the data type, but PyTorch implicitly takes the data type to be floating point since we actually passed in a floating point number into this tensor. So pretty much at a high level, tensors are like multi-dimensional arrays. We can specify the data type for them. We can populate them just like NumPy arrays. So now we know how to create tensors. We know that ultimately, everything that we work with, all the data we have, is going to be expressed as tensors. Now the question is, what are the functions that we have to manipulate them? And so we have some basic utilities that can help us instantiate tensors easily, specifically torch.zeros and torch.ones. These are two ways to create tensors of a particular shape, in this case, tensors of all 0s or tensors of all 1s. And you'll see that this will be very helpful. When you do your homeworks, typically, you'll need to create a bunch of zero matrices. And it'll be very easy to just specify the shape here without having to write everything out super explicitly. And then you can update that tensor as needed. Another thing you can do is, just like we have ranges in Python, if you want to loop over a bunch of numbers, you can specify a range. You can also use torch.arange to be able to actually instantiate a tensor with a particular range. In this case, we just looped over the numbers 1 through 10. You could reshape this and make it 1 through 5 and then 6 through 10. That's another way to be able to instantiate tensors. And finally, something to note is that when we apply particular operations, such as just simple Python operations like addition or multiplication, by default they're going to be element-wise. So they'll apply to all of the elements in our tensor. So in this case, we took our tensor-- I think this one was probably from earlier above-- and we added 2 everywhere. Here we multiplied everything by 2. But the PyTorch semantics for broadcasting work pretty much the same as the NumPy semantics. So if you have different matrix operations where you need to batch across a particular dimension, PyTorch will be smart about it. And it will actually make sure that you broadcast over the appropriate dimensions. Although, of course, you have to make sure that the shapes are compatible based on the actual broadcasting rules. So we'll get to that in a little bit when we look at reshaping and how different operations have those semantics. In this case, we have to define the-- I guess I'm not personally aware of how you would define a jagged tensor that has unequal dimensions, but typically, we don't want to do that because it makes our computation a lot more complex.
And so in cases where we have-- for instance, we have different sentences that we turn into tokens, we might have different length sentences in our training set. We'll actually pad all of the dimensions to be the same. Because ultimately, we want to do everything with matrix operations. And so in order to do that, we need to have a matrix of a fixed shape. But that's a good point. I'm not sure if there is a way to do that, but typically, we just get around this by padding. OK, so now we know how to define tensors. We can do some interesting things with them. So here we've created two tensors. One of them is a 3x2 tensor. The other one is a 2x4 tensor. And I think the answer is written up here, but what do we expect is the shape when we multiply these two tensors? So we have a 3x2 tensor and a 2x4 tensor-- yeah, 3 x 4. And so more generally, we can use matmul in order to do matrix multiplication. It also implements batched matrix multiplication. And so I won't go over the entire review of broadcasting semantics, but the main gist is that the dimensions of two tensors are compatible if you can left pad the tensors with 1's so that the dimensions that line up either A, have the same number in that dimension, or B, one of them is a dummy dimension. One of them has a 1. And in that case, in those dummy dimensions, PyTorch will actually make sure to copy over the tensor as many times as needed so that you can then actually perform the operation. And that's useful when you want to do things like batched dot products or batched matrix multiplications. And I guess the final point here is there's also a shorthand notation that you can use. So instead of having to type out matmul every time, you can just use the @ operator similar to NumPy. Effectively, that's kind of where we get into how batching works. So for example, if you had let's say two tensors that have some batch dimension. And then one of them is m by 1 and the other one is 1 by n. And if you do a batched matrix multiply to those two tensors, now what you effectively do is you preserve the batch dimension and then you're doing a matrix multiplication between an m by 1 tensor and a 1 by n. So you get something that's the batch dimension by m by n. So effectively, there are more-- I think the full semantics are written out on the PyTorch website for how the matrix multiplication works. But you're right. You don't just have these cases where you have two 2-dimensional tensors, you can have arbitrary number of dimensions. And as long as the dimensions match up based on those semantics I was saying, then you can multiply it. Alternatively, you can do what I do, which is just multiply it anyways. And then if it throws an error, print out the shapes and kind of work from there. That tends to be faster, in my opinion, in a lot of ways. But, yeah that's a good point. All right. So let's keep going through some of the other different functionalities here. So we can define another tensor. And kind of one of the key things that we always want to look at is the shape. So in this case, we just have a 1D tensor of length 3. So the torch dot size just gives us 3. In general, this is one of the key debugging steps and something that I'll try and emphasize a lot throughout this session, which is printing the shapes of all of your tensors is probably your best resource when it comes to debugging. It's one of the hardest things to intuit, exactly, what's going on once you start stacking a lot of different operations together. 
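To make the constructors and the multiplication semantics just described concrete, here is a small sketch; the shapes mirror the examples discussed, but the actual values are placeholders rather than the notebook's:

    import torch

    # Convenient constructors for a given shape
    zeros = torch.zeros(3, 4)        # 3x4 tensor of all 0s
    ones = torch.ones(3, 4)          # 3x4 tensor of all 1s
    r = torch.arange(1, 11)          # tensor([1, 2, ..., 10]), like Python's range

    # Arithmetic is element-wise by default
    print(ones + 2)                  # adds 2 to every entry
    print(ones * 2)                  # multiplies every entry by 2

    # Matrix multiplication: (3x2) @ (2x4) -> (3x4)
    a = torch.ones(3, 2)
    b = torch.ones(2, 4)
    print(torch.matmul(a, b).shape)  # torch.Size([3, 4])
    print((a @ b).shape)             # @ is shorthand for matmul

    # Batched matrix multiply: the leading batch dimension is carried along
    u = torch.ones(5, 3, 1)          # batch of 5, each 3x1
    v = torch.ones(5, 1, 4)          # batch of 5, each 1x4
    print((u @ v).shape)             # torch.Size([5, 3, 4])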
So printing out the shapes at each point and seeing whether they match what you expect is something important. And it's better to rely on that than just on the error message that PyTorch gives you, because under the hood, PyTorch might implement certain optimizations and actually reshape the underlying tensor you have. So you may not see the numbers you expect. So it's always great to print out the shape. So again, we can always print out the shape, and we can have a more complex-- in this case a 3-dimensional tensor, which is 3x2x4. And we can print out the shape and we can see all of the dimensions here. And so now you're like, OK, great. We have tensors. We can look at their shapes. But what do we actually do with them? And so now let's get into what are the operations that we can apply to these tensors. And so one of them is, it's very easy to reshape tensors. So in this case, we're creating this 15-element tensor that's the numbers 1 to 15. And now we're reshaping it. So now it's a 5x3 tensor here. And so you might wonder, well, what's the point of that? And it's because a lot of times when we are doing machine learning, we actually want to learn in batches. And so we might take our data and we might reshape it. So that instead of being a long flattened list of things, we actually have a set of batches. Or in some cases, we have a set of batches of a set of sentences or sequences of a particular length, and each of the elements in that sequence has an embedding of a particular dimension. And so based on the types of operations that you're trying to do, you'll sometimes need to reshape those tensors. And sometimes you'll want to transpose dimensions, if you want to, for instance, reorganize your data. So that's another operation to keep in mind. The difference between view and reshape is that view creates a view of the underlying tensor-- it shares the same data, so it requires that data to be laid out contiguously-- while reshape will return a view when it can and will otherwise copy the data. Neither one modifies the original tensor in place. All right. And then finally, like I said at the beginning, your intuition about PyTorch tensors can simply be that they're kind of a nice, easy way to work with NumPy arrays, but they have all these great properties. Like, now we can essentially use them with GPUs and it's very optimized. And we can also compute gradients quickly. And to kind of just emphasize this point, if you have some NumPy code and you have a bunch of NumPy arrays, you can directly convert them into PyTorch tensors by simply casting them. And you can also take those tensors and convert them back to NumPy arrays. All right. And so one of the things you might be asking is, why do we care about tensors? What makes them good? And one of the great things about them is that they support vectorized operations very easily. Essentially we can parallelize a lot of different computations and do them, for instance, across a batch of data all at once. And one of those operations you might want to do, for instance, is a sum. So you can take, in this case, a tensor which is shape 5x7 and-- OK, it looks like that's not working. You can take a tensor that's shaped 5x7. And now you can compute different operations on it that essentially collapse the dimensionality. So the first one is sum. And so you can take it and you can sum across both the rows as well as the columns. And so one way I like to think about this to kind of keep them straight is that the dimension that you specify in the sum is the dimension you're collapsing.
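Here is a short sketch of the reshaping and the NumPy round-trip just described (illustrative, not the exact notebook cells):

    import torch
    import numpy as np

    x = torch.arange(1, 16)        # the numbers 1 to 15, shape (15,)
    y = x.reshape(5, 3)            # the same 15 values laid out as 5x3
    z = x.view(5, 3)               # a view that shares the same underlying data
    print(x.shape, y.shape, z.shape)

    # NumPy arrays and tensors convert back and forth easily
    arr = np.array([[1.0, 2.0], [3.0, 4.0]])
    t = torch.tensor(arr)          # numpy -> tensor (this copies the data)
    back = t.numpy()               # tensor -> numpy
    print(t.dtype, back.shape)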
So in this case, if you take the data and sum over dimension 0 because you know the shape of the underlying tensor is 5x7 you've collapsed the 0-th dimension. So you should be left with something that's just shape 7. And if you see the actual tensor, you got 75, 80, 85, 90, you get this tensor, which is shape 7. Alternatively, you can think about whether or not you're kind of summing across the rows or summing across the columns. But it's not just sum. It applies to other operations as well. You can compute standard deviations. You can normalize your data. You can do other operations, which essentially batch across the entire set of data. And not only do these apply over 1 dimension, but here you can see that if you don't specify any dimensions, then by default, the operation actually applies to the entire tensor. So here we end up just taking the sum of the entire thing. So if you think about it, the 0-th dimension is the number of rows. There are five rows and there are seven columns. So if we sum out the rows, then we're actually summing across the columns. And so now we only have seven values. But I like to think about more just in terms of the dimensions to keep it straight, rather than rows or columns because it can get confusing. If you're summing out dimension 0, then effectively you've taken something which has some shape that's dimension 0 by dimension 1 to just whatever is the dimension one shape. And then from there you can kind of figure out, OK, which way did I actually sum to check if you were right. NumPy implements a lot of this vectorization. And I believe in the homework that you have right now, I think part of your job is to vectorize a lot of these things. So the big advantage with PyTorch is that essentially, it's optimized to be able to take advantage of your GPU. When we actually start building out neural networks that are bigger, that involve more computation, we're going to be doing a lot of these matrix multiplication operations that is going to be a lot better for our processor if we can make use of the GPU. And so that's where PyTorch really comes in handy in addition to also defining a lot of those neural network modules, as we'll see later for you. So that now you don't need to worry about, for instance, implementing a basic linear layer and backpropagation from scratch and also your optimizer. All of those things will be built in. And you can just call the respective APIs to make use of them. Whereas in Python and NumPy, you might have to do a lot of that coding yourself. Yeah. All right. So we'll keep going. So this is a quiz, except I think it tells you the answer, so it's not much of a quiz. But what would you do if now I told you instead of summing over this tensor, I want you to compute the average? And so there's two different ways you could compute the average. You could compute the average across the rows or across the columns. And so essentially, now we get back to this question of, well, which dimension am I actually going to reduce over? And so here if we want to preserve the rows, then we need to actually sum over the second dimension-- really, the first-- 0-th and first. So the first dimension is what we have to sum over because we want to preserve the 0-th dimension. And so that's why for row average you see the dim equals 1. And for column average, same reasoning is why you see the dim equals 0. And so if we run this code, we'll see what are the shapes that we expect. 
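Here is a compact sketch of those reductions. The data is just the numbers 1 through 35 laid out as 5x7, which happens to reproduce the column sums quoted above; the exact notebook contents may differ:

    import torch

    data = torch.arange(1, 36, dtype=torch.float32).reshape(5, 7)

    print(data.sum(dim=0))         # dim 0 (the rows) collapsed -> shape (7,): 75, 80, ...
    print(data.sum(dim=1).shape)   # dim 1 (the columns) collapsed -> torch.Size([5])
    print(data.sum())              # no dim given: one scalar for the whole tensor

    print(data.mean(dim=1))        # row averages, shape (5,)
    print(data.mean(dim=0))        # column averages, shape (7,)
    print(data.std(dim=0))         # per-column standard deviation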
If we're taking the average Over Rows, then an object that's 2x3, should just become an object that's 2. It's just a 1-dimensional almost vector you can think of. And if we are averaging across the columns, there's three columns. So now our average should have three values. And so now we're left with a 1-dimensional tensor of length 3. So does that kind of make sense, I guess, is this general intuition about how we deal with shapes, and how some of these operations manipulate shapes. So now we'll get into indexing. This can get a little bit tricky, but I think you'll find that the semantics are very similar to NumPy. So one of the things that you can do in NumPy is that you can take these NumPy arrays and you can slice across them in many different ways. You can create copies of them. And you can index across particular dimensions to select out different elements, different rows or different columns. And so in this case, let's take this example tensor, which is 3x2x2. And first thing you always want to do when you have a new tensor, print out its shape. Understand what you're working with. And so I may have shown this already, but what will x bracket 0 print out? What happens if we index into just the first element? What's the shape of this? SPEAKER 2: 2X2. SPEAKER: Yeah. 2X2, right? Because if you think about it, our tensor is really just a list of three things, each of those things happens to also be a 2X2 tensor. So we got a 2X2 object in this case, the first thing, 1, 2, 3, 4. And so just like NumPy, if you provide a colon in a particular dimension, it means essentially copy over that dimension. So if we do x bracket 0, implicitly we're putting a colon for all the other dimensions. So it's saying grab the first thing along the 0-th dimension. And then grab everything along the other two dimensions. If we now take just the 0-th element along the first dimension, what are we going to get? Well, ultimately, we're going to get. Now if you look-- the kind of first dimension where these three things, the second dimension is now each of these two rows within those things. So like 1, 2, and 3, 4, 5, 6, and 7, 8, 9, 10, and 11, 12. So if we index into the second dimension-- or the first dimension-- and get the 0-th element, then we're going to end up with 1, 2, 5, 6, and 9, 10. And even if that's a little bit tricky, you can kind of go back to the trick I mentioned before where we're slicing across the first dimension. So if we look at the shape of our tensor, it's 3x2x2. If we collapse the first dimension, that 2 in the middle, we're left with something that's 3x2. So it might seem a little bit trivial kind of going through this in a lot of detail. But I think it's important because it can get tricky when your tensor shapes get more complicated, how to actually reason about this. And so I won't go through every example here, since a lot of them kind of reinforce the same thing, but I'll just highlight a few things. Just like NumPy, you can choose to get a range of elements. In this case, we're taking this new tensor, which is 1 through 15 rearranged as a 5x3 tensor. And if we take the 0-th through third row, exclusive, we'll get the first three rows. And we can do the same thing, but now with slicing across multiple dimensions. And I think the final point I want to talk about here is list indexing. List indexing is also present in NumPy. And it's a very clever shorthand for being able to essentially select out multiple elements at once. 
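A minimal sketch of the slicing patterns described here, using a 3x2x2 tensor like the one above; the last line previews the list indexing that comes next:

    import torch

    x = torch.arange(1, 13).reshape(3, 2, 2)
    print(x.shape)          # torch.Size([3, 2, 2])

    print(x[0])             # shape (2, 2): the first element along dim 0
    print(x[:, 0])          # shape (3, 2): rows 1,2 / 5,6 / 9,10
    print(x[0:3])           # a range of elements along dim 0

    # List indexing: pick out several indices at once
    y = torch.arange(1, 16).reshape(5, 3)
    print(y[[0, 2, 4]])     # rows 0, 2, and 4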
So in this case, what you can do is, if you want to get the 0-th, the second, and the fourth element of our matrix, instead of indexing with a particular number or set of numbers, you can index with a list of indices. So in this case, if we go up to our tensor, if we take out the 0-th, the second, and the fourth, we should see those three rows. And that's what we end up getting. Yeah. Again, these are a lot of examples that just reiterate the same point, which is that you can slice across your data in multiple ways. And at different points you're going to need to do that. So being familiar with the shapes, so that you understand the underlying output that you expect, is important. In this case, for instance, we're slicing across the first and the second dimension and we're keeping the 0-th. And so we're going to end up getting essentially the top left element of each of those three things in our tensor if we scroll all the way up here. We'll get this 1, we'll get this 5, and we'll get this 9, because we go across all of the 0-th dimension. And then across the first and the second, we only take the 0-th element in both of those positions. And so that's why we get 1, 5, 9. Also, of course, you can apply all of the colons to get back the original tensor. OK. And then I think the last thing when it comes to indexing is conversions. So typically, when we're writing code with neural networks, ultimately, we're going to process some data through a network and we're going to get a loss. And that loss needs to be a scalar. And then we're going to compute gradients with respect to that loss. So one thing to keep in mind is that sometimes you might have an operation and it fails because it was actually expecting a scalar value rather than a tensor. And so you can extract the scalar from a single-element tensor by just calling .item. So in this case, if you have a tensor which is just literally 1, then you can actually get the Python scalar that corresponds to it by calling .item. So now we can get into the more interesting stuff. One of the really cool things with PyTorch is autograd. And what autograd is, is PyTorch essentially provides an automatic differentiation package where, when you define your neural network, you're essentially defining many nodes that compute some function. And in the forward pass, you're running your data through those nodes. But what PyTorch is doing on the back end is that at each of those points, it's going to actually store the gradients and accumulate them, so that every time you do your backwards pass, you apply the chain rule to be able to calculate all of these different gradients. And PyTorch caches those gradients. And then you will have access to all of those gradients to be able to actually then run your favorite optimizer and optimize with SGD, or with Adam, or whichever optimizer you choose. And so that's one of the great features. You don't have to worry about actually writing the code that computes all of these gradients and actually caches all of them properly, applies the chain rule, does all these steps. You can abstract all of that away with just one call to .backward. So in this case, we'll run through a little bit of an example where we'll see the gradients getting computed automatically. So in this case, we're going to initialize a tensor with requires_grad set to true. (For tensors you create yourself, requires_grad actually defaults to false; it's the parameters inside nn modules that default to true.) It just means that, for that tensor, PyTorch will track the operations on it and store the gradient associated with it.
And you might wonder, well, why do we have this flag when we always want to store the gradient? And the answer is, at train time, you need the gradients in order to actually train your network. But at inference time, you'd actually want to disable your gradients. And you can actually do that, because it's a lot of extra computation that's not needed, since you're not making any updates to your network anymore. Let's create this right now. We don't have any gradients being computed because we haven't actually called backward to compute some quantity with respect to this particular tensor. We haven't actually computed those gradients yet. So right now the .grad attribute, which will actually store the gradient associated with that tensor, is None. And so now let's just define a really simple function. We have x. We're going to define the function y equals 3x squared. And so now we're going to call y.backward. Now what happens is, when we actually print out x.grad, what we should expect to see is the number 12. And the reason is that our function y is 3x squared. If we compute the gradient of that function, we're going to get 6x. And our actual value was 2. So the actual gradient is going to be 12. And we see that when we print out x.grad, that's what we get. And now we'll just run it again. Let's set z equal to 3x squared. We call z.backward. And we print out x.grad again. And now we see that-- I may not have run this in the right order. OK. So here in the second one that I reran, we see that it says 24. And so you might be wondering, well, I just did the same thing twice. Shouldn't I see 12 again? And the answer is that by default, PyTorch will accumulate the gradients. So it won't actually overwrite the gradient each time you compute it. It will sum it. And the reason is because when you actually do backpropagation for your network, you want to accumulate the gradients across all of your examples and then actually apply your update. You don't want to overwrite the gradient. But this also means that every time you have a training iteration for your network, you need to zero out the gradient, because you don't want the previous gradients from the last epoch, where you iterated through all of your training data, to mess with the current update that you're doing. So that's one thing to note, which is that that's essentially why, when we actually write the training loop, you have to run zero_grad in order to zero out the gradient. Yes. So I accidentally ran the cells in the wrong order. Maybe to make it more clear, let me put this one first. So this is actually what it should look like, which is that we ran it once and I ran this cell first. And it has 12. And then we ran it a second time, and we get 24. Yes. So if you have all of your tensors defined, then when you actually call backward, if it's a function of multiple variables, it's going to compute all of those partials, all of those gradients. Yeah. So what's happening here is that the way PyTorch works is that it's storing the accumulated gradient at x. And so we've essentially made two different backwards passes. We've called it once on this function y, which is a function of x. And we've called it once on z, which is also a function of x. And so you're right. We can't actually disambiguate which came from what. We just see the accumulated gradient. But typically, that's actually exactly what we want.
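Putting that example into code (a sketch of the same computation; the cell structure in the notebook may differ slightly, and the final zeroing line is just to show what clearing a gradient does):

    import torch

    x = torch.tensor(2.0, requires_grad=True)
    print(x.grad)       # None: nothing has called backward yet

    y = 3 * x ** 2      # y = 3x^2
    y.backward()        # compute dy/dx = 6x = 12 and store it on x
    print(x.grad)       # tensor(12.)

    z = 3 * x ** 2      # the same function again
    z.backward()
    print(x.grad)       # tensor(24.): gradients accumulate, they are not overwritten

    x.grad.zero_()      # what zeroing the gradient does between training steps
    print(x.grad)       # tensor(0.)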
Because what we want is to be able to run our network and accumulate the gradient across all of the training examples that define our loss and then perform our optimizer step. So, yeah, even with respect to one thing, it doesn't matter because in practice, each of those things is really a different example in our set of training examples. And so we're not interested in the gradient from one example. We're actually interested in the overall gradient. So going back to this example, what's happening here, is that in the backwards pass, what it's doing is you can imagine there's the x tensor and then there's the .grad attribute, which is another separate tensor. It's going to be the same shape as x. And what that is storing, is it's storing the accumulated gradient from every single time that you've called dot backward on a quantity that essentially has some dependency on x, that will have a non-zero gradient. And so the first time we call it, the gradient will be 12 because 6x, 6 times 2. 12. The second time we do it with z, it's also still 12, but the point is that .grad doesn't actually overwrite the gradient each time you call dot backwards. It simply adds them. It accumulates them. And the intuition there is that ultimately, you're going to want to compute the gradient with respect to the loss. And that loss is going to be made up of many different examples. And so you need to accumulate the gradient from all of those in order to make a single update. And then, of course, you'll have to zero that out because every time you make one pass through all of your data, you don't want that next batch of data to also be double counting the previous batches update. You want to keep those separate. And so we'll see that in a second. OK. Yeah. All right. So now we're going to move on to one of the final pieces of the puzzle, which is neural networks. How do we actually use them in PyTorch? And once we have that and we have our optimization, we'll finally be able to figure out how do we actually train a neural network, what does that look like, and why it's so clean and efficient when you do it in PyTorch. So the first thing that you want to do is we're going to be defining neural networks in terms of existing building blocks, in terms of existing APIs, which will implement for instance linear layers or different activation functions that we need. So we're going to import torch.nn because that is the neural network package that we're going to make use of. And so let's start with the linear layer. The way the linear layer works in PyTorch, is it takes in two arguments. It takes in the input dimension and then the output dimension. And so what it does, is it takes in some input, which has some arbitrary amount of dimensions, and then finally, the input dimension. And it will essentially output it to that same set of dimensions, except the output dimension and the very last place. And you can think of the linear layer as essentially just performing a simple ax plus b. By default, it's going to apply a bias, but you can also disable that if you don't want a bias term. And so let's look at a small example. So here we have our input. And we're going to create a linear layer, in this case, as an input size of 4 and output size of 2. And all we're going to do is, once we define it by instantiating it with nn.linear. Whatever the name of our layer is, in this case, we call it linear. We just essentially apply it with parentheses as if it were a function to whatever input. 
And that actually does the actual forward pass through this linear layer to get our output. And so you can see that the original shape was 2x3x4. Then we pass it through this linear layer, which has an output dimension of size 2. And so ultimately, our output is 2x3x2, which is good. That's what we expect. That's not shape error. But something common that you'll see is maybe you decide to get a little confused and maybe you do let's say 2x2. You match the wrong dimension. Here we're going to get a shape error. And you see that the error message isn't as helpful because it's actually changed the shape of what we were working with. We said this was 2x3x4. Under the hood, PyTorch has changed this to a 6x4. But in this case, it's obvious because we instantiated it with the shape. But if we didn't have the shape, then one simple thing we could do is actually just print out the shape and we'd see, OK, this last dimension is size 4, so I actually need to change my input dimension in my linear layer to be size 4. And you also notice on this output we have this grad function. And so that's because we're actually computing and storing the gradients here for our tensor. Yeah. So typically, we think of the first dimension as the batch dimension. So in this case, it said n-- this you can think of as if you had a batch of images, it would be the number of images. If you had a training corpus of text, it would be essentially the number of sentences or sequences. That is usually considered the batch dimension. The star indicates that there can be an arbitrary number of dimensions. So for instance, if we had images, this could be a 4-dimensional tensor object. It could be the batch size by the number of channels by the height, by the width. But in general, there's no fixed number of dimensions. Your input tensor can be any number of dimensions. The key is just that last dimension needs to match up with the input dimension of your linear layer. The 2 is the output size. So essentially, we're saying that we're going to map this last dimension, which is 4-dimensional to now 2-dimensional. So in general, you can think of this as if we're stacking a neural network, this is the kind of input dimension size. And this would be like the hidden dimension size. And so one thing we can do is, we can actually print out the parameters. And we can actually see what are the values of our linear layer, or in general, for any layer that we define in our neural network what are the actual parameters. And in this case, we see that there's two sets of parameters because we have a bias, as well as the actual linear layer itself. And so both of them store the gradients. And in this case, these are what the current values of these parameters are. And they'll change as we trained the network. OK. So now let's go through some of the other module layers. So in general, nn.Linear is one of the layers you have access to. You have a couple of other different layers. They are pretty common. You have 2D convolutions. You have transpose convolutions. You have batch norm layers when you need to do normalization in your network. You can do upsampling. You can do max pooling. You can do lots of different operators. But the main key here is that all of them are built in building blocks that you can just call, just like we did with nn.Linear. Let's just go-- I guess I'm running out of time, but let's just try and go through these last few layers and then I'll wrap up by showing you an example that puts it all together. 
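Going back to the linear-layer example from a moment ago, a small self-contained sketch might look like this (the input values are arbitrary; only the shapes matter):

    import torch
    import torch.nn as nn

    x = torch.ones(2, 3, 4)              # an input whose last dimension is 4
    linear = nn.Linear(4, 2)             # maps that last dimension from 4 to 2

    output = linear(x)                   # forward pass through the layer
    print(output.shape)                  # torch.Size([2, 3, 2])

    # The layer's learnable parameters: a weight matrix and a bias vector
    for name, param in linear.named_parameters():
        print(name, param.shape)         # weight: (2, 4), bias: (2,)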
So in this case, we can define an activation function, which is typical with our networks. We need to introduce non-linearities. In this case, we use the sigmoid function. And so now we can define our network as this very simple thing, which has one linear layer and then an activation. And in general, when we compose these layers together, we don't need to actually write every single line, line by line, applying the next layer. We can actually stack all of them together. In this case, we can use nn.Sequential and list all of the layers. So here we have our linear layer, followed by our sigmoid. And then now we're just essentially passing the input through this whole set of layers all at once. So we take our input. We call block on the input and we get the output. Let's just see, putting it all together, what it looks like to define a network and what it looks like when we train one. So here we're going to actually define a multi-layer perceptron. And the way it works is, to define a neural network, you extend the nn.Module class. The key here is there are really two main things you have to define when you create your own network. One is the initialization. So in the init function, you actually initialize all the parameters you need. In this case, we initialize an input size, a hidden size, and we actually define the model itself. In this case, it's a simple model, which consists of a linear layer, followed by an activation, followed by another linear layer, followed by a final activation. And the second function we have to define is the forward, which actually does the forward pass of the network. And so here our forward function takes in our input x. In general, it could take in some arbitrary number of inputs, but essentially, it needs to specify how you actually compute the output. And in this case, it's very simple. We just pass it into the network that we just defined and return the output. And again, you could do this more explicitly by doing what we did earlier, where we could actually write out all of the layers individually instead of wrapping them into one object, and then do a line-by-line operation for each one of these layers. And so finally, if we define our class, it's very simple to use it. We can now just instantiate some input, instantiate our model by calling the multi-layer perceptron with our parameters, and then just pass it through our model. So that's great, but this is all just the forward pass. How do we actually train the network? How do we actually make it better? And so this is the final step, which is that we have optimization built into PyTorch. So we have this backward function, which goes and computes all of these gradients in the backward pass. And now the only step left is to actually update the parameters using those gradients. And so here we'll import the torch.optim package, which contains all of the optimizers that you need. This part is just creating some random data so that we can actually see how to fit our data. But this is really the key here, which is we'll instantiate our model that we defined. We'll define the Adam optimizer. And we'll define it with a particular learning rate. We'll define a loss function, which is, again, another built-in module. In this case, we're using the cross entropy loss. And finally, to calculate our predictions, all we do simply is just call model on our actual input. And to calculate our loss, we just call our loss function on our predictions and our true labels.
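Before the training loop, here is roughly what that multi-layer perceptron class might look like. The hidden size, the specific activation choices, and the single sigmoid output here are illustrative assumptions rather than the notebook's exact definition:

    import torch
    import torch.nn as nn

    class MultilayerPerceptron(nn.Module):
        def __init__(self, input_size, hidden_size):
            super().__init__()
            # Define the layers once, in the constructor
            self.model = nn.Sequential(
                nn.Linear(input_size, hidden_size),
                nn.ReLU(),
                nn.Linear(hidden_size, 1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            # The forward pass just runs the input through the stack of layers
            return self.model(x)

    model = MultilayerPerceptron(input_size=4, hidden_size=8)
    x = torch.randn(16, 4)           # a batch of 16 examples
    print(model(x).shape)            # torch.Size([16, 1])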
And we extract the scalar here. And now when we put it all together, this is what the training loop looks like. We have some number of epochs that we want to train our network. For each of these epochs, the first thing we do is we take our optimizer and we zero out the gradient. And the reason we do that, is because, like many of you noted, we actually are accumulating the gradient. We're not resetting it every time we call dot backward. So we zero out the gradient. We get our model predictions by doing a forward pass. We then compute the loss between the predictions and the true values. Finally, we call loss.backward. This is what actually computes all of the gradients in the backward pass from our loss. And the final step is we call .step on our optimizer. In this case, we're using Adam. And this will take a step on our loss function. And so if we run this code, we end up seeing that we're able to start with some training loss, which is relatively high. And in 10 epochs, we're able to essentially completely fit our data. And if we print out our model parameters and we printed them out from the start as well, we'd see that they've changed as we've actually done this optimization. And so I'll wrap it up here, but I think the key takeaway is that a lot of the things that you're doing at the beginning of this class are really about understanding the basics of how neural networks work, how you actually implement them, how you implement the backward pass. The great thing about PyTorch is that once you get to the very next assignment, you'll see that now that you have a good underlying understanding of those things, you can abstract a lot of the complexity of how do you do backprop, how do you store all of these gradients, how do you compute them, how do you actually run the optimizer, and let PyTorch handle all of that for you. And you can use all of these building blocks, all of these different neural network layers to now define your own networks that you can use to solve whatever problems you need.
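Putting it together, a minimal training loop with the shape just described might look like this. It reuses the MultilayerPerceptron sketch from above; the data, learning rate, and number of epochs are stand-ins, and the loss here is binary cross-entropy to match that sketch's sigmoid output (the lecture itself used cross entropy):

    import torch
    import torch.nn as nn
    import torch.optim as optim

    # Some random data to fit: 64 examples with 4 features and binary labels
    X = torch.randn(64, 4)
    y = torch.randint(0, 2, (64, 1)).float()

    model = MultilayerPerceptron(input_size=4, hidden_size=8)   # from the sketch above
    optimizer = optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCELoss()

    n_epochs = 10
    for epoch in range(n_epochs):
        optimizer.zero_grad()          # clear the gradients accumulated last iteration
        preds = model(X)               # forward pass
        loss = loss_fn(preds, y)       # compare predictions to the true labels
        loss.backward()                # backward pass: compute all the gradients
        optimizer.step()               # update the parameters
        print(epoch, loss.item())      # .item() pulls out the Python scalar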
Hi, everyone. Welcome to CS224N. We're about two minutes in, so let's get started. So today, we've got what I think is quite an exciting lecture topic. We're going to talk about self-attention and transformers. So these are some ideas that are the foundation of most of the modern advances in natural language processing and actually AI systems in a broad range of fields. So it's a very, very fun topic. Before we get into that-- OK, before we get into that, we're going to have a couple of reminders. So there are brand new lecture notes. Woo, thanks, thank you. Yeah, I'm very excited about them. They pretty much follow along with what I'll be talking about today but go into considerably more detail. Assignment four is due a week from today. Yeah, so the issues with Azure continue. Thankfully, our TAs have tested that this works on Colab, and the amount of training is such that a Colab session will allow you to train your machine translation system. So if you don't have a GPU, use Colab. We're continuing to work on getting access to more GPUs for assignment five and the final project. We'll continue to update you as we're able to, but the usual systems this year are no longer holding because companies are changing their minds about things. OK, so the final project proposal: you write a proposal of what you want to work on for your final project, and we will give you feedback on whether we think it's a feasible idea or how to change it. So this is very important, because we want you to work on something that we think has a good chance of success for the rest of the quarter. That's going to be out tonight. We'll have an Ed announcement when it is out, and we want to get you feedback on that pretty quickly, because you'll be working on this. After assignment five is done, really the major core component of the course after that is the final project. OK, any questions? Cool, OK, OK, so let's take a look back into what we've done so far in this course and see what we were doing in natural language processing. What was our strategy? If you had a natural language processing problem, and you wanted to, say, take your best-effort attempt at it without doing anything too fancy, you would have said, OK, I'm going to have a bidirectional LSTM instead of a simple RNN. I'm going to use an LSTM to encode my sentences. I get bidirectional context, and if I have an output that I'm trying to generate, I'll have a unidirectional LSTM that I'm going to generate from one by one. So if you have a translation or a parse or whatever, maybe I've encoded the source sentence in a bidirectional LSTM, and I'm one by one decoding out the target with my unidirectional LSTM. And then, also, I was going to use something like attention to give flexible access to memory if I felt like I needed to do this look back and see where I want to translate from. OK, and this was just working exceptionally well, and we motivated attention through wanting to do machine translation. And you have this bottleneck where you don't want to have to encode the whole source sentence in a single vector. OK, and in this lecture, we have the same goal. So we're going to be looking at a lot of the same problems that we did previously, but we're going to use different building blocks.
So we're going to say, if from 2014 to 2017-ish I was using recurrence, then through lots of trial and error, years later, we had these brand new building blocks that we could plug in as a direct replacement for LSTMs, and they're going to allow for just a huge range of much more successful applications. And so what are the issues with the recurrent neural networks we used to use, and what are the new systems that we're going to use from this point moving forward? OK, so one of the issues with a recurrent neural network is what we're going to call linear interaction distance. So as we know, RNNs are unrolled left to right or right to left depending on the language and the direction. OK, but it encodes this sort of notion of linear locality, which is useful, because if two words occur right next to each other, sometimes they're actually quite related. So tasty pizza. They're nearby, and in the recurrent neural network, you sort of encode tasty, and then you walk one step, and you encode pizza. So nearby words do often affect each other's meanings, but you have this problem where very long-distance dependencies can take a very long time to interact. So if I have this sentence, the chef-- so those are nearby. Those interact with each other, and then who, and then a bunch of stuff. Like, the chef who went to the stores and picked up the ingredients and loves garlic and then was-- I actually have an RNN step, this application of the recurrent weight matrix and some element-wise non-linearities, once, twice, three times, potentially as many times as the length of the sequence between chef and was. And it's the chef who was, so this is a long-distance dependency. This should feel kind of related to the stuff that we did in dependency syntax. But it's quite difficult to learn, potentially, that these words should be related. So if you have a lot of steps between words, it can be difficult to learn the dependencies between them. We talked about all these gradient problems. LSTMs do a lot better at modeling the gradients across long distances than simple recurrent neural networks, but it's not perfect. And we already know that this linear order isn't the right way to think about sentences. So if I wanted to learn that it's the chef who was, then I might have a hard time doing it, because the gradients have to propagate from was to chef, and really, I'd like a more direct connection between words that might be related in the sentence or in a document, even if these are going to get much longer. So that's the linear interaction distance problem. We would like words that might be related to be able to interact with each other in the neural network's computation graph more easily than being linearly far away, so that we can learn these long-distance dependencies better. And there's a related problem too that, again, comes back to the recurrent neural network's dependence on the index into the sequence, often called a dependence on time. So in a recurrent neural network, the forward and backward passes have O of sequence length many-- so that means roughly, in this case, just sequence length many-- unparallelizable operations.
But in a recurrent neural network, you can't actually compute the RNN hidden state for time step five before you compute the RNN hidden state for time step four or time step three, right? And so you get this graph that looks very similar where if I want to compute this hidden state, so I've got some word, I have zero operations I need to do before I can compute this state. I have one operation I can do before I can compute this state. And as my sequence length grows, I've got, OK, here, I've got three operations I need to do before I can compute the state with the number three because I need to compute this and this and that. So there's three unparallelizable operations that I'm sort of glomming all the matrix multiplies and stuff into a single one. So one, two, three, and of course, this grows with the sequence length as well. So down over here, so as the sequence length grows, I can't parallelize-- I can't just have a big GPU just kachunk with the matrix multiply to compute this state because I need to compute all the previous states beforehand. OK, any questions about that? So those are these two related problems both with the dependence on time. Yeah. Yeah, so I have a question on the linear interaction issues. I thought that was the whole point of the attention network and then how maybe you want-- during the training of the actual cells that depend more on each other. Can't we do something like the attention and then work our way around that? So the question is with the linear interaction distance, wasn't this the point of attention that it gets around that? Can't we use something with attention to help or does that just help? So it won't solve the parallelizability problem. And in fact, everything we do in the rest of this lecture will be attention based, but we'll get rid of the recurrence and just do attention more or less. So, yeah, it's a great intuition. Any other questions? OK, cool, so if not recurrence, what about attention, just a slide a slide back. And so we're going to get deep into attention today, but just for the second, attention treats each word's representation as a query to access and incorporate information from a set of values. So previously, we were in a decoder. We were decoding out a translation of a sentence, and we attended to the encoder so that we didn't have to store the entire representation of the source sentence into a single vector. And here, today, we'll think about attention within a single sentence. So I've got this sentence written out here with word one through word T, in this case, and right on these integers in the boxes, I'm writing out the number of unparallelizable operations that you need to do before you can compute these. So for each word, you can independently compute its embedding without doing anything else previously because the embedding just depends on the word identity. And then with attention, if I wanted to build an attention representation of this word by looking at all the other words in the sequence, that's one big operation. And I can do them in parallel for all the words. So the attention for this word I can do for the attention for this word. I don't need to walk left to right like I did for an RNN. Again, we'll get much deeper into this, but this, you should have the intuition that it solves the linear interaction problem and the non-parallelizability problem because now, no matter how far away words are from each other, I am potentially interacting. 
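As a toy illustration of that contrast (this is a simplified sketch, not the lecture's exact formulation -- the "attention" here has no learned parameters yet, it's just a softmax-weighted average of the vectors themselves):

    import torch

    n, d = 8, 16
    x = torch.randn(n, d)          # one vector per position in the sequence
    W = torch.randn(d, d)

    # RNN-style: a loop with a sequential dependency, O(n) unparallelizable steps
    h = torch.zeros(d)
    states = []
    for t in range(n):             # step t cannot start until step t-1 has finished
        h = torch.tanh(W @ h + x[t])
        states.append(h)

    # Attention-style: every position looks at every other position in one shot
    scores = x @ x.T               # all pairwise similarities, shape (n, n)
    weights = scores.softmax(dim=-1)
    outputs = weights @ x          # weighted averages for all positions at once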
I might just attend to you even if you're very, very far away independent of how far away you are, and I also don't need to walk along the sequence linearly long. So I'm treating the whole sequence at once. So the intuition is that attention allows you to look very far away at once, and it doesn't have this dependence on the sequence index that keeps us from parallelizing operations. And so now the rest of the lecture, we'll talk in great depth about attention. So maybe let's just move on. OK, so let's think more deeply about attention. One thing that you might think of with attention is that it's performing a fuzzy lookup in a key value store. So you have a bunch of keys, a bunch of values, and it's going to help you access that. So in an actual lookup table, just like a dictionary in Python, for example, right, very simple. You have a table of keys that each key maps to a value, and then you give it a query. And the query matches one of the keys, and then you return the value, right? So I've got a bunch of keys here, and my query matches the key. So I return the value. Simple, fair, easy. OK, good. And in attention, so just like we saw before, the query matches all keys softly. There's no exact match. You compute some similarity between the key and all of the-- sorry, the query and all of the keys, and then you weight the results. So you've got a query again. You've got a bunch of keys. The query to different extents is similar to each of the keys, and you will sort of measure that similarity between 0 and 1 through a softmax, and then you get the values out. You average them via the weights of the similarity between the key and the query and the keys. You do a weighted sum with those weights, and you get an output, right? So it really is quite a lookup table but in this soft vector space mushy sort of sense. So I'm really doing some kind of accessing into this information that's stored in the key value store, but I'm sort of softly looking at all of the results. OK, any questions there? Cool, so what might this look like? So if I was trying to represent this sentence, I went to Stanford CS224N and learned-- so I'm trying to build a representation of learned. I have a key for each word. So this is this self-attention thing that we'll get into. I have a key for each word. A value for each word. I've got the query for learned, and I've got these sort of teal-ish bars up top, which might say how much you're going to try to access each of the word. Like, oh, maybe 224N is not that important. CS, maybe that determines what I learned. Stanford and then learned, maybe that's important for representing itself. So you look across at the whole sentence and build up this soft accessing of information across the sentence in order to represent learned in context. OK, so this is just a toy diagram. So let's get into the math. So we're going to look at a sequence of words. That's W1 to N. Sequence of words in a vocabulary. So this is like, Zuko made his uncle tea. That's a good sequence. And for each word, we're going to embed it with this embedding matrix just like we've been doing in this class, right? So I have this embedding matrix that goes from the vocabulary size to the dimensionality D. So each word has a non-contextual only dependent on itself word embedding, and now I'm going to transform each word with one of three different weight matrices. So this is often called key query value self-attention. So I have a matrix Q, which is in R D to D. 
So this maps xi, which is a vector of dimensionality d, to another vector of dimensionality d, and so that's going to be a query vector, right? So it takes an xi, and it sort of rotates it, shuffles it around, stretches it, squishes it, makes it different, and now it's a query. And now with a different learnable parameter K, so that's another matrix, I'm going to come up with my keys. And with a different learnable parameter V, I'm going to come up with my values, right? So I'm taking each of the non-contextual word embeddings, each of these xi's, and I'm transforming each of them to come up with my query for that word, my key for that word, and my value for that word. OK, so every word is playing each of these roles. Next, I'm going to compute all pairs of similarities between the keys and queries. So in the toy example we saw, I was computing the similarity between a single query for the word learned and all of the keys for the entire sentence. In this context, I'm computing all pairs of similarities between all queries and all keys because I want to represent all of these sums. So I'm just going to take the dot product between these two vectors, right? So I've got qi-- so this is saying the query for word i dotted with the key for word j-- and I get this score eij, which is a real value. It might be very large negative, might be zero, might be very large and positive. And so that's like how much I should look at j in this lookup table. And then I do the softmax, right? So I say that the actual weight alpha ij that I use to look at j from i is the softmax of this over all of the possible indices. So it's like the affinity between i and j normalized by the affinity between i and all of the possible j prime in the sequence. And then my output is just the weighted sum of values. So I've got this output for word i. So maybe i is 1 for Zuko, and I'm representing it as the sum over all j of these weights alpha ij times the value vector for that word j. I'm looking from i to j as much as alpha ij. What's the dimension of wi? Oh, wi, you can either think of it as a symbol in the vocab V, or you could think of it as a one-hot vector whose dimensionality is the size of the vocab-- in this case, we are, I guess, thinking of it as the latter. So in the matrix E, you see that it's d by the size of the vocabulary. So when I do E multiplied by wi, that's taking E, which is d by the vocabulary size, multiplying it by wi, which is vocabulary-size-dimensional, and returning a vector of dimensionality d. So w in that first line, like w 1 to n, that's a matrix where it has maybe a column for every word in that sentence, and each column is of length the vocabulary size? Yeah, usually, I guess we think of it as having a-- I mean, if I'm putting the sequence length index first, you might think of it as having a row for each word. But similarly, yeah, it's n, which is the sequence length, and then the second dimension would be V, which is the vocabulary size. And then that gets mapped to this thing, which is sequence length by d. Why do we learn two different matrices, Q and K, when qi transpose kj is really just one matrix [INAUDIBLE]? That's a great question. It ends up being because this will end up being a low-rank approximation to that matrix. So it is for computational efficiency reasons, although it also, I think, feels kind of nice in the presentation. But, yeah, what we'll end up doing is having a very low-rank approximation to QK transpose.
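As a rough sketch of that computation in code, for a single sentence: here x stands in for the already-embedded words (skipping the embedding lookup E), and the Q, K, V matrices are random stand-ins for the learned parameters. There is no scaling or masking here, just the plain formulation described above:

    import torch

    n, d = 5, 16                      # sequence length and model dimension
    x = torch.randn(n, d)             # word embeddings x_1 ... x_n (random stand-ins)

    # Learnable maps from embeddings to queries, keys, and values
    Q = torch.randn(d, d)
    K = torch.randn(d, d)
    V = torch.randn(d, d)

    q = x @ Q                         # q_i for every word, shape (n, d)
    k = x @ K                         # k_i for every word
    v = x @ V                         # v_i for every word

    e = q @ k.T                       # e_ij = q_i . k_j, all pairs at once, shape (n, n)
    alpha = e.softmax(dim=-1)         # alpha_ij: each row i sums to 1 over j
    output = alpha @ v                # o_i = sum_j alpha_ij v_j, shape (n, d)
    print(output.shape)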
And so you actually do do it like this. That's a good question. [INAUDIBLE] ii, so that [INAUDIBLE] specific? Sorry, could you repeat that for me? This eii, so the query of the word dotted with the q by itself, does it look like an identity or does it look like any things in particular? That's a good question. OK, let me remember to repeat questions. So does eii for j equal to i, so looking at itself, look like anything in particular? Does it look like the identity? Is that the question? OK, so it's unclear actually. This question of should you look at yourself for representing yourself. Well, it's going to be encoded by the matrices Q and K. If I didn't have Q and K in there, if those were the identity matrices, if Q is identity, K is identity, then this would be dot product with yourself, which is going to be high on average. You're pointing in the same direction as yourself, but it could be that qxi and kxi might be arbitrarily different from each other because Q could be the identity and K could map u to the negative of yourself, for example, so that you don't look at yourself. So this is all learned in practice so you end up-- it can sort of decide by learning whether you should be looking at yourself or not, and that's some of the flexibility that parametrizing at s, q and k gives you that wouldn't be there if I just used xi's everywhere in this equation. I'm going to try to move on, I'm afraid, because there's a lot to get on, but we'll keep talking about self-attention. And so as more questions come up, I can also potentially return back. OK, so this is our basic building block, but there are a bunch of barriers to using it as a replacement for LSTMs. And so what we're going to do for this portion of the lecture is talk about the minimal components that we need in order to use self-attention as this very fundamental building block. So we can't use it as it stands as I've presented it because there are a couple of things that we need to solve or fix. One of them is that there's no notion of sequence order in self-attention. So what does this mean? If I have a sentence, like when I move over here to the whiteboard briefly, and hopefully, I'll write quite large. If I have a sentence like Zuko made his uncle, and let's say his uncle made Zuko. If I were to embed each of these words, right, using its embedding matrix, the embedding matrix isn't dependent on the index of the word. So this is the word at index 1, 2, 3, 4 versus his is over here and uncle, right? And so when I compute the self-attention, and there's a lot more on this in the lecture notes that goes through a full example, the actual self-attention operation will give you exactly the same representations for this sequence, Zuko made his uncle, as for this sequence, his uncle made Zuko. And that's bad because they're sentences that mean different things. And so, right, it's this idea that self-attention is an operation on sets. You have a set of vectors that you're going to perform self-attention on and nowhere does like the exact position of the words come into play directly. So we're going to encode the position of words through the keys, queries, and values that we have. So consider now representing each sequence index-- our sequences are going from 1 to n-- as a vector. So don't worry so far about how it's being made, but you can imagine representing the number one, like the position one, the position two, the position three as a vector in the dimensionality D just like we're representing our keys, queries, and values. 
And so these are position vectors. If you were to want to incorporate the information represented by these positions into our self-attention, you could just add these vectors, these pi vectors to the inputs, right? So if I have this xi embedding of a word, which is the word at position i but really just represents-- oh, the word Zuko is here. Now I can say that, oh, it's the word Zuko, and it's at position 5 because this vector represents position 5. OK, so how do we do this? And we might only have to do this once. So we can do it once at the very input to the network, and then that is sort of sufficient. We don't have to do it at every layer because it sort of knows from the input. So one way in which people have done this is look at these sinusoidal position representations. So this looks a little bit like this, where you have-- so this is a vector pi, which is in dimensionality d, right, and each one of the dimensions, you take the value i,' You modify it by some constant, and you pass it to the sine or cosine function. And you get these sort of of values that vary according to the period-- differing periods depending on the dimensionality. So I've got this representation of a matrix where d is the vertical dimension, and then n is the horizontal. And you can see that there's sort of like, oh, as I walk along, you see the period of the sine function going up and down and each of the dimensions d has a different period. And so together, you can represent a bunch of different sort of position indices, and it gives this intuition that, oh, maybe the absolute position of a word isn't as important. You've got this sort of periodicity of the sines and cosines, and maybe that allows you to extrapolate to longer sequences. But in practice, that doesn't work. But this is sort of like an early notion that this is still sometimes used for how to represent position in transformers and self-attention networks in general. So that's one idea. You might think it's a little bit complicated. A little bit unintuitive. Here's something that feels a little bit more deep learning. So we're just going to say, oh, you know, I've got a maximum sequence length of n, and I'm just going to learn a matrix that's dimensionality d by n, and that's going to represent my positions, and I'm going to learn it as a parameter, just like I learn every other parameter. And what do they mean? Oh, I have no idea, but it represents position. And so you just sort of add this matrix to the xi, so your input embeddings, and it learns to fit to data. So whatever representation of position that's linear sort of index based that you want you can learn. And the cons are that, well, you definitely now can't represent anything that's longer than n words long. No sequence longer than n. You can handle because-- well, you only learned a matrix of this many positions. And so in practice, you'll get a model error. If you pass a self-attention model something longer than length n, it will just sort of crash and say, I can't-- I can't do this. And so this is what most systems nowadays use. There are more flexible representations of position, including a couple in the lecture notes. You might want to look at the relative linear position or words before or after each other but not their absolute position. There's also some representations that hearken back to our dependency syntax because like, oh, maybe words that are close in the dependency parse tree should be the things that are sort of close in the self-attention operation. OK, questions? 
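Here is a rough sketch of the two position representations just mentioned: fixed sinusoidal vectors and a learned position-embedding matrix, either of which gets added to the word embeddings once at the input. The constants follow the common sinusoidal recipe and the sizes are made up, so treat this as an illustration rather than the exact formulation from the lecture notes.

```python
# Two ways to build position vectors p_1..p_n and add them to the input embeddings.
import numpy as np

n, d = 10, 16                              # max sequence length, model dimensionality

# Sinusoidal positions: even dimensions use sin, odd dimensions use cos,
# with periods that grow with the dimension index.
pos = np.arange(n)[:, None]                # positions 0..n-1
k = np.arange(0, d, 2)[None, :]
angles = pos / (10000 ** (k / d))
P_sin = np.zeros((n, d))
P_sin[:, 0::2] = np.sin(angles)
P_sin[:, 1::2] = np.cos(angles)

# Learned absolute positions: just a parameter matrix, one d-vector per index,
# trained by backprop in practice (random here only as a stand-in).
rng = np.random.default_rng(0)
P_learned = rng.normal(scale=0.02, size=(n, d))

# Either way, add the position vectors to the word embeddings once, at the input.
X = rng.normal(size=(n, d))                # word embeddings
X_with_positions = X + P_sin               # or X + P_learned
print(X_with_positions.shape)              # (10, 16)
```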
In practice, do we typically just make n large enough that we don't run into the issue of having something that could be input longer than n? So the question is, in practice, do we just make n long enough so that we don't run into the problem where we're going to look at a text longer than n. No, in practice, it's actually quite a problem, even today, even in the largest biggest language models, and can I fit this prompt into ChatGPT or whatever is the thing that you might see on Twitter? I mean, these continue to be issues. And part of it is because the self-attention operation-- and we'll get into this later in the lecture-- it's quadratic complexity in the sequence length. So you're going to spend n squared sort of memory budget in order to make sequence lengths longer. So in practice, this might be on a large model, say 4,000 or so. And it's 4,000, so you can fit 4,000 words, which feels like a lot, but it's not going to fit a novel. It's not going to fit a Wikipedia page. And there are models that do longer sequences for sure, and, again, we'll talk a bit about it, but no, this actually is an issue. How do you know that the p you've learned is the position that is not any other without [INAUDIBLE]?? Yeah. So how do you know that the p that you've learned, this matrix that you've learned, is representing position as opposed to anything else? And the reason is the only thing it correlates is position, right? So like when I see these vectors, when I'm adding this p matrix to my x matrix, the word embeddings, I'm adding them together, and the words that show up at each index will vary depending on what word actually showed up there in the example, but the p matrix never differs. It's always exactly the same at every index. And so it's the only thing in the data that it correlates with. So you're sort of learning it implicitly. Like this vector at index 1 is always at index 1 for every example, for every gradient update, and nothing else co-occurs like that. Yeah. So what you end up learning, I don't know, it's unclear, but it definitely allows you to know, Oh, this word is with this index at this. Yeah. OK. Yeah. Just quickly, when you say [INAUDIBLE] in space, is this sequence right now defined as a sequence-- so a sequence of words or-- I'm trying to figure out what the unit is you're using. OK. So the question is, when this is quadratic in the sequence, is that a sequence of words? Yeah, think of it as a sequence of words. Sometimes there'll be pieces that are smaller than words, which we'll go into in the next lecture, but yeah, think of this as a sequence of words but not necessarily just for a sentence, maybe for an entire paragraph or an entire document or something like that. OK, but the attention piece is word-based. Yeah, the attention is based words to words. OK. Cool. I'm going to move on. OK, right, so we have another problem. Another is that based on the presentation of self-attention that we've done, there's really no non-linearities for sort of deep learning magic; we're just sort of computing weighted averages of stuff. So if I apply self-attention and then apply self-attention again and again and again and again, you should get-- you should look at the next lecture notes if you're interested in this, that's actually quite cool. But what you end up doing is you're just re-averaging value vectors together. So you're like computing averages of value vectors, and it ends up looking like one big self-attention. 
But there's an easy fix to this if you want sort of the traditional deep learning magic. And you can just add a feed forward network to post-process each output vector. So I've got a word here, that's sort of the output of self-attention, and I'm going to pass it through, in this case, I'm calling it a multi-layer perceptron MLP. So this is a vector in rd that's going to be-- and it's taking in as input a vector in rd, and you do the usual sort of multi-layer perceptron thing, right? Where you have the output, and you multiply it by a matrix, pass it through a non-linearity, multiply it by another matrix. OK? And so what this looks like in self-attention is that I've got this sort of sentence, the chef who dot dot dot dot food, and I've got my embeddings for it. I pass it through this whole big self-attention block, right, which looks at the whole sequence and sort of incorporates context and all that, and then I pass each one individually through a feed forward layer, right? So this embedding, that's sort of the output of the self-attention for the word the, is passed independently through a multi-layer perceptron here, and that sort of-- you can think of it as sort of combining together or processing the result of attention. So there's a number of reasons why we do this. One of them also is that you can actually stack a ton of computation into these feed-forward networks very, very efficiently, very parallelizable, very good for GPUs. But this is what's done in practice. So you do self-attention, and then you can pass it through this sort of position-wise feed forward layer, right? Every word is processed independently by this feed forward network to process the result. OK, so that's adding our sort of classical deep learning non-linearities for self-attention. And that's an easy fix for this sort of no non-linearities problem in self-attention, and then we have a last issue before we have our final minimal self-attention building block with which we can replace RNNS. And that's that-- when I've been writing out all of these examples of self-attention, you can sort of look at the entire sequence, right? And in practice, for some tasks such as machine translation or language modeling, whenever you want to define a probability distribution over a sequence, you can't cheat and look at the future. So at every time step, I could define the set of keys and queries and values to only include past words, but this is inefficient-- bear with me. It's inefficient because you can't parallelize it so well. So instead, we compute the entire n by n matrix just like I showed in the slide discussing self-attention, and then I mask out words in the future. So this score, eij, right, and I computed eij for all n by n pairs of words, is equal to whatever it was before if the word that you're looking at at index j is an index that is less than or equal to where you are, index i, and it's equal to negative infinity-ish otherwise if it's in the future. And when you softmax the eij, negative infinity gets mapped to zero. So now my attention is weighted zero, my weighted average is zero on the future, so I can't look at it. What does this look like? So in order to encode these words, the chef who, and maybe the start symbol there, I can look at these words, right? That's all pairs of words. And then I just gray out-- I sort of negative infinity out the words I can't look at. 
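To make the two fixes concrete, here is a short sketch of a position-wise feed-forward network and of causal masking, where scores for future positions are pushed to (effectively) negative infinity before the softmax. The dimensions, the choice of ReLU, and the use of -1e9 as a stand-in for negative infinity are illustrative assumptions.

```python
# Position-wise feed-forward network plus a causal (future) mask on attention scores.
import numpy as np

rng = np.random.default_rng(0)
n, d, d_ff = 5, 8, 32

X = rng.normal(size=(n, d))                 # outputs of self-attention, one per word

# Position-wise feed-forward: the same two-layer MLP applied to every position.
W1, b1 = rng.normal(size=(d, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d)), np.zeros(d)
ffn_out = np.maximum(0, X @ W1 + b1) @ W2 + b2     # shape (n, d)

# Causal masking: keep score e_ij if j <= i, otherwise send it to "negative infinity"
# so that the softmax gives the future zero weight.
E = rng.normal(size=(n, n))                 # pretend these are the attention scores
mask = np.triu(np.ones((n, n), dtype=bool), k=1)   # True above the diagonal (the future)
E_masked = np.where(mask, -1e9, E)

A = np.exp(E_masked - E_masked.max(axis=-1, keepdims=True))
A = A / A.sum(axis=-1, keepdims=True)
print(np.round(A, 2))                       # upper triangle is all zeros: no peeking
```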
So when encoding the start symbol, I can just look at the start symbol; when encoding the, I can look at the start symbol and the; when encoding chef, I can look at start the chef, but I can't look at who. And so with this representation of chef that is only looking at start the chef, I can define a probability distribution using this vector that allows me to predict who without having cheated by already looking ahead and seeing that, well, who is the next word. Questions? So it says we're using it in decoders. Do we do this for both the encoding layer and the decoding layer, or for the encoding layer, are we allowing ourselves to look forward? The question is it says here that we're using this in a decoder. Do we also use it in the encoder? So this is the distinction between a bidirectional LSTM and a unidirectional LSTM, right? So wherever you don't need this constraint, you probably don't use it. So if you're using an encoder on the source sentence of your machine translation problem, you probably don't do this masking because it's probably good to let everything look at each other. And then whenever you do need to use it, because you have this autoregressive sort of probability of word 1, probability of 2 given 1, 3 given 2 and 1, then you would use this. So traditionally, yes, in decoders you will use it, in encoders you will not. Yes. My question is a little bit philosophical. How humans actually generate sentences by having some notion of the probability of future words before they say the words that-- or before they choose the words that they are currently speaking or writing, generating? Good question. So the question is, isn't looking ahead a little bit and sort of predicting or getting an idea of the words that you might say in the future how humans generate language instead of the strict constraint of not seeing into the future. Is that what you're-- OK. So right, trying to plan ahead to see what I should do is definitely an interesting idea, but when I am training the network, I can't-- if I'm teaching it to try to predict the next word, and if I give it the answer, it's not going to learn anything useful. So in practice, when I'm generating text, maybe it would be a good idea to make some guesses far into the future or have a high level plan or something, but in training the network, I can't encode that intuition about how humans generate sequences of language by just giving it the answer of the future directly at least because then it's just too easy. Like there's nothing to learn. Yeah. But there might be interesting ideas about maybe giving the network like a hint as to what kind of thing could come next for example. But that's out of scope for this. Yeah. Yeah, question over here. So I understand like why we would want to mask the future for stuff like language models, but how does it apply to machine translation? Like why would we use it there? Yeah, so in machine translation-- we're going to come over to this board and hopefully get a better marker. Yes. In machine translation, I have a sentence like I like pizza, and I want to be able to translate it [FRENCH]. Nice. And so when I'm looking at the I like pizza, right? I get this as the input. And so I want self-attention without masking because I want I to look at like, and I to look at pizza, and like to look at pizza, and I want it all.
And then when I'm generating this, if my tokens are like [FRENCH],, I want to-- in encoding this word, I want to be able to look only at myself-- and we'll talk about encoder-decoder architectures in this later in the lecture-- but I want to be able to look at myself none of the future and all of this. And so what I'm talking about right now in this masking case is masking out with like negative infinity, all of these words. So that sort of attention score from [FRENCH] to everything else should be negative infinity. Yeah. Does that answer your question? Yes. Great. OK, let's move ahead. OK. So that was our last big sort of building block issue with self-attention. So this is what I would call, and this is my personal opinion, a minimal self-attention building block. You have self-attention, the basis of the method, so that's sort of here in the red, and maybe we had the inputs to the sequence here, and then you embed it with that embedding matrix e, and then you add position embeddings, right? And then these three arrows represent using the key, the value, and the query, that sort of stylized there. This is often how you see these diagrams, right? And so you pass it to self-attention with the position representation, right? So that specifies the sequence order, because otherwise, you'd have no idea what order the words showed up in. You have the non-linearities in the teal feed-forward network there to sort of provide that sort of squashing and sort of deep learning expressivity, and then you have masking in order to have parallelizable operations that don't look at the future. OK? So this is sort of our minimal architecture. And then up at the top above here, so you have this thing; maybe you repeat this sort of self-attention and feed forward many times. So self-attention, feed forward, self-attention, feed forward, self-attention, feed forward, that's what I'm calling this block. And then maybe at the end of it, you predict something. I don't know. We haven't really talked about that. But you have these representations, and then you predict the next word or you predict the sentiment or you predict whatever. So this is like a self-attention architecture. OK, we're going to move on to the transformer next. So are there any questions? Yeah. [INAUDIBLE] just for encoders? Other way around. We will use masking for decoders, where I want to decode out a sequence where I have an informational constraint. Where to represent this word properly, I cannot have the information of the future. And masking when you don't [INAUDIBLE],, right? Yeah. OK. OK. Great. So now let's talk about the transformer. So what I've pitched to you is what I call a minimal self-attention architecture. And I quite like pitching it that way, but really no one uses the architecture that was just up on the slide, the previous slide. It doesn't work quite as well as it could, and there's a bunch of important details that we'll talk about now that goes into the transformer. But what I would hope though to sort of have you take away from that is that the transformer architecture as I'll present it now is not necessarily the endpoint of our search for better and better ways of representing language even though it's now ubiquitous and has been for a couple of years. So think about these sort of ideas of the problems of using self-attention and maybe ways of fixing some of the issues with transformers. OK. So a transformer decoder is how we'll build systems like language models, right? And so we've discussed this. 
It's like our decoder with our self-attention only sort of minimal architecture. It's got a couple of extra components, some of which have grayed out here, that we'll go over one by one. The first that's actually different is that we'll replace our self-attention with masking with masked multi-head self-attention. This ends up being crucial; it's probably the most important distinction between the transformer and this minimal architecture that I've presented. So let's come back to our toy example of attention, where we've been trying to represent the word learned in the context of the sequence I went to Stanford CS 224n and learned. And I was sort of giving these teal bars to say, Oh, maybe intuitively you look at various things to build up your representation of learned. But really there are varying ways in which I want to look back at the sequence to see varying sort of aspects of information that I want to incorporate into my representation. So maybe in this way, I sort of want to look at Stanford CS 224n because I go, it's like entities, like you learn different stuff at Stanford CS 224n than you do at other courses or other universities or whatever, right? And so maybe I want to look here for this reason. And maybe in another sense, I actually want to look at the word learned, and I want to look at I, i went, and learned, right? As you sort of like maybe syntactically relevant words. Like there is very different reasons for which I might want to look at different things in the sequence. And so trying to average it all out with a single operation of self-attention ends up being maybe somewhat too difficult in a way that will make precise in assignment five. Nice, we'll get to do a little bit more math. OK. So any questions about this intuition? [INAUDIBLE] Yeah. So it should be an application of attention, just as I've presented it. So one independent, define the keys, define the queries, define the values. I'll define it more precisely here. But think of it as I do attention once, and then I do it again with different-- different parameters, being able to look at different things, et cetera. So if we have two separate [INAUDIBLE],, how do we ensure that they learn different things? We do not-- OK, so the question is, if we have two separate sets of weights trying to learn, say, to do this and to do that, how do we ensure that they learn different things? We do not ensure that they hope-- that they learn different things. And in practice they do, although not perfectly. So it ends up being the case that you have some redundancy, and you can sort of cut out some of these, but that's sort of out of scope for this. But we sort of hope-- just like we hope that different sort of dimensions in our feed-forward layers will learn different things because of lack of symmetry and whatever, that we hope that the heads will start to specialize, and that will mean they'll specialize even more, and, yeah. OK. All right. So in order to discuss multi-head self-attention well, we really need to talk about the matrices; how we're going to implement this in GPUs efficiently. We're going to talk about the sequence stacked form of attention. So we've been talking about each word sort of individually as a vector in dimensionality D, But really we're going to be working on these as big matrices that are stacked. So I take all of my word embeddings x1 to xn, and I stack them together, and now I have a big matrix that is in dimensionality R n by d. OK. 
And now with my matrices k, q, and v, I can just multiply them sort of on this side of x. So x is in R n by d, k is in R d by d, so n by d times d by d gives you n by d again. So I can just compute a big matrix multiply on my whole sequence to multiply each one of the words with my key, query, and value matrices very efficiently. So this is sort of this vectorization idea, I don't want a for loop over the sequence, I represent the sequence as a big matrix, and I just do one big matrix multiply. And then the output is defined as this sort of inscrutable bit of math, which I'm going to go over visually. So first, we're going to take the key-query dot products in one matrix. So we've got-- we've got XQ, which is in R n by d, and I've got XK transpose, which is in R d by n. So n by d, d by n, this is computing all of the eij's, these scores for self-attention, right? So this is all pairs of attention scores computed in one big matrix multiply. OK? So this is this big matrix here, next I use the softmax, right? So I softmax this over the second dimension, the second n dimension, and I get my sort of normalized scores, and then I multiply with XV. So this is an n by n matrix multiplied by an n by d matrix, and what do I get? Well, this is just doing the weighted average. So this is one big weighted average contribution on the whole matrix giving me my whole self-attention output, an R n by d. So I've just restated identically the self-attention operations but computed in terms of matrices so that you could do this efficiently on a GPU. OK. So multi-headed attention. This is going to give us-- and this is going to be important to compute this in terms of the matrices, which we'll see. This is going to give us the ability to look in multiple places at once for different reasons. So sort of self attention looks where this dot product here is high, right? This xi, the Q matrix, the K matrix. But maybe we want to look in different places for different reasons, so we actually define multiple query, key, and value matrices. So I'm going to have a bunch of heads. I'm going to have h self attention heads. And for each head, I'm going to define an independent query, key, and value matrix, and I'm going to say that its shape is going to map from the model dimensionality to the model dimensionality over h. So each one of these is doing projection down to a lower dimensional space. This can be for computational efficiency, and I'll just apply self attention sort of independently for each output. So this equation here is identical to the one we saw for single headed self attention, except I've got these sort of l indices everywhere. So I've got this lower dimensional thing, I'm mapping to a lower dimensional space, and then I do have my lower dimensional value vector there, so my output is in R d over h. But really you're doing exactly the same kind of operation, I'm just doing it h different times. And then you combine the outputs. So I've done sort of look in different places with the different key, query, and value matrices, and then I get each of their outputs, and then I concatenate them together. So each one is dimensionality d over h. And I concatenate them together and then sort of mix them together with the final linear transformation. And so each head gets to look at different things and construct their value vectors differently, and then I combine the result all together at once. OK, let's go through this visually because it's at least helpful for me.
So it's actually not more costly to do this really than it is to compute a single head of self-attention, and we'll see through the pictures. So in single headed self-attention, we computed XQ, and in multi-headed self-attention, we'll also compute XQ the same way. So XQ is R n by d. And then we can reshape it into R n, that's sequence length, times the number of heads times the model dimensionality over the number of heads. So I've just reshaped it to say now I've got a big three-axis tensor: the first axis is the sequence length, the second one is the number of heads, the third is this reduced model dimensionality. And that costs nothing, right? And do the same thing for x and v. And then I transpose so that I've got the head axis as the first axis. And now I can compute all my other operations with the head axis kind of like a batch. So what does this look like in practice? Like instead of having one big XQ matrix that's model dimensionality D, I've got like, in this case, three XQ matrices of model dimensionality D by 3, D by 3, D by 3, same thing with the key matrix here. So everything looks almost identical, it's just a reshaping of the tensors. And now, at the output of this, I've got three sets of attention scores just by doing this reshape. And the cost is that, well, each of my attention heads has only a d by h vector to work with instead of a D dimensional vector to work with, right? So I get the output, I get these three sets of pairs of scores. I compute the softmax independently for each of the three, and then I have three value matrices there as well, each of them lower dimensional, and then finally, I get my three different output vectors, and I have a final linear transformation to sort of mush them together, and I get an output. And in summary, what this allows you to do is exactly what I gave in the toy example, which was I can have each of these heads look at different parts of a sequence for different reasons. Just a question-- so this is at a given block, right? Like all of these attention heads are for a given transformer block. Our next block would also-- could also have three attention heads. The question is, are all of these for a given block? And we'll talk about a block again, but this block was this sort of pair of self-attention and feed-forward networks. So you do like self-attention feed-forward, that's one block. Another block is another self-attention, another feed forward. And the question is, are the parameters shared between the blocks or not? Generally they are not shared. You'll have independent parameters at every block, although there are some exceptions. During loading on that, is it typically the case that you have the same number of heads at each block, or do you vary the number of heads across blocks? You have-- you definitely could vary it. People haven't found reason to vary-- so the question is, do you have different numbers of heads across the different blocks or do you have the same number of heads across all blocks? The simplest thing is to just have it be the same everywhere, which is what people have done. I haven't yet found a good reason to vary it, but, well, it could be interesting. It's definitely the case that after training these networks, you can actually just totally zero out, remove some of the attention heads. And I'd be curious to know if you could remove more or less depending on the layer index, which might then say, Oh, we should just have fewer. But again, it's not actually more expensive to have a bunch. 
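Here is one way to write the reshaping trick down in NumPy: project X once with each of Q, K, and V, split the model dimension into h heads, attend within each head as if the head axis were a batch axis, then concatenate and mix with an output matrix. The score scaling by the square root of d over h, which comes up just below, is included; all names and sizes are toy assumptions, and masking is left out to keep the sketch short.

```python
# Multi-head self-attention via one big reshape (toy sizes, made-up parameters).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d, h = 6, 16, 4                     # sequence length, model dim, number of heads
d_head = d // h                        # each head works in a d/h-dimensional space

X = rng.normal(size=(n, d))
W_Q, W_K, W_V, W_O = (rng.normal(size=(d, d)) for _ in range(4))

def split_heads(M):
    # (n, d) -> (h, n, d/h): the head axis then acts like a batch axis
    return M.reshape(n, h, d_head).transpose(1, 0, 2)

Q, K, V = split_heads(X @ W_Q), split_heads(X @ W_K), split_heads(X @ W_V)

scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)   # (h, n, n): one score matrix per head
A = softmax(scores, axis=-1)
per_head = A @ V                                       # (h, n, d/h)

# Concatenate the heads back to (n, d) and mix them with a final linear map.
concat = per_head.transpose(1, 0, 2).reshape(n, d)
output = concat @ W_O
print(output.shape)                                    # (6, 16)
```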
So people tend to instead set the number of heads to be roughly so that you have a reasonable number of dimensions per head given the total model dimensionality D that you want. So for example, I might want at least 64 dimensions per head, which if D is 128, that tells me how many heads I'm going to have roughly. So people tend to scale the number of heads up with the model dimensionality. Yeah. [INAUDIBLE] by slicing it in different columns, you're reducing the rank of the final matrix, right? Yeah. But that doesn't really have any effect on the results? So the question is, by having these sort of reduced XQ and XK matrices, this is a very low rank approximation. This little sliver and this little sliver defining this whole big matrix, this very low rank, is that not bad in practice? No. Again, it's sort of the reason why we limit the number of heads depending on the model dimensionality, because you want intuitively at least some number of dimensions. So 64 is sometimes done, 128, something like that. But if you're not giving each head too much to do and it's got sort of a simple job, you've got a lot of heads, it ends up sort of being OK. All we really know is that empirically, it's way better to have more heads than like one. Yes. I'm wondering, have there been studies to see if information in one of the sets of the attention scores-- like information that one of them learns is consistent and related to each other, or how are they related? So the question is, have there been studies to see if there is sort of consistent information encoded by the attention heads? And yes, actually there's been quite a lot of study in interpretability and analysis of these models to try to figure out what roles, what sort of mechanistic roles each of these heads takes on. And there's quite a bit of exciting results there around some attention heads learning to pick out sort of the syntactic dependencies or maybe doing a sort of a global averaging of context. The question is quite nuanced though, because in a deep network, it's unclear-- and we should talk about this more offline-- but it's unclear if you look at a word 10 layers deep in a network, what you're really looking at, because it's already incorporated context from everyone else and it's a little bit unclear. Active area of research. But I think I should move on now to keep discussing transformers. But yeah, if you want to talk more about it, I'm happy to. OK. So another sort of hack that I'm going to toss in here, and maybe they wouldn't call it hack, but it's a nice little method to improve things, it's called scaled dot product attention. So one of the issues with this key-query value self attention is that when the model dimensionality becomes large, the dot products between vectors, even random vectors, tend to become large. And when that happens, the inputs to the softmax function can be very large, making the gradients small. So intuitively, if you have two random vectors and model dimensionality D, and you just dot product them together, as D grows, their dot product grows, and expectation could be very large. And so you sort of want to start out with everyone's attention being very uniform, very flat, look everywhere. But if some dot products are very large, then learning will be inhibited. And so what you end up doing is you just-- for each of your heads, you just sort of divide all the scores by this constant that's determined by the model dimensionality. 
So as the vectors grow very large, their dot products don't, at least at initialization time. So this is like a nice little important but maybe not terribly-- like yeah, it's important to know. And so that's called scaled dot product attention. From here on out, we'll just assume that we do this. It's quite easy to implement; you just do a little division in all of your computations. OK. So now in the transformer decoder. We've got a couple of other things that I have unfaded out here. We have two big optimization tricks or optimization methods, I should say really, because these are quite important. They end up being very important. We've got residual connections and layer normalization. And in transformer diagrams that you see around the web, they're often written together as this add and norm box. And in practice, in the transformer decoder, I'm going to apply masked multi-head attention and then do this sort of optimization; add a norm, then I'll do a feed forward application and then add a norm. So this is quite important, so let's go over these two individual components. The first is residual connections. I mean, I think we've talked about residual connections before, right? It's worth doing it again. But it's really a good trick to help models train better. So just to recap, we're going to take-- instead of having this sort of-- you have a layer i minus 1, and you pass it through a thing, maybe it's self-attention, maybe it's a feed-forward network, now you've got layer i. I'm going to add the result of layer i to this sort of-- to its input here. So now I'm saying I'm just going to compute the layer, and I'm going to add in the input to the layer so that I only have to learn the residual from the previous layer, right? So I've got this connection here, it's often written as this; this sort of like ooooh connection, OK, right? Goes around. And you should think that the gradient is just really great through the residual connection, right? Like if ah, I've got vanishing or exploding gradient-- vanishing gradients through this layer, well, I can at least learn everything behind it because I've got this residual connection where the gradient is 1, because it's the identity. So this is really nice. And it also maybe is like a bias-- at least at initialization, everything looks a little bit like the identity function now, right? Because if the contribution of the layer is somewhat small because all of your weights are small, and I have the addition from the input, maybe the whole thing looks a little bit like the identity, which might be a good sort of place to start. And there are really nice visualizations; I just love this visualization. This is your loss landscape, right? So your gradient descent, and you're trying to traverse the mountains of the loss landscape. This is like the parameter space, and down is better in your loss function. And it's really hard, so you get stuck in some local optima, and you can't sort of find your way to get out. And then this is your residual connections. I mean, come on, you just sort of walk down. It's not actually I guess really how it works all the time, but I really love this. It's great. OK. So yeah, we've seen residual connections, we should move on to layer normalization. So layer norm is another thing to help your model train faster. And the intuitions around layer normalization and sort of the empiricism of it working very well maybe aren't perfectly like, let's say, connected. 
But you should imagine, I suppose, that we want to, say, there's variation within each layer. Things can get very big, things can get very small. That's not actually informative because of variations between maybe the gradients or I've got weird things going on in my layers that I can't totally control, I haven't been able to make everything behave sort of nicely where everything stays roughly the same norm, maybe some things explode, maybe some things shrink. And I want to cut down on sort of uninformative variation between layers. So I'm going to let x and Rd be an individual word vector in the model. So this is like at a single index one vector. And what I'm going to try to do is just normalize it. Normalize it in the sense of it's got a bunch of variation, and I'm going to cut out on everything-- I'm going to normalize it to unit mean and standard deviation. So I'm going to estimate the mean here across-- so for all of the dimensions in the vector. So j equals 1 to the model dimensionality. I'm going to sum up the value. So I've got this one big word vector, and I sum up all the values. Division by d here, right? That's the mean. I'm going to have my estimate of the standard deviation. Again, these should say estimates. This is my simple estimate of the standard deviation of the values within this one vector. And I'm just going to-- and then possibly I guess I can have learned parameters to try to scale back out in terms of multiplicatively and additively here, but that's optional. We're going to compute this standardization, right? Where I'm going to take my vector x, subtract out the mean, divide by the standard deviation, plus this epsilon sort of constant. If there's not a lot of variation, I don't want things to explode. So I'm going to have this epsilon there. That's close to zero. So this part here, x minus mu over square root sigma plus epsilon, is saying take all the variation and sort of normalize it to unit mean and standard deviation. And then maybe I want to sort of scale it, stretch it back out, and then maybe add an offset beta that I've learned. Although in practice actually, this part, and we'll discuss this in the lecture notes, in practice, this part maybe isn't actually that important. But so layer normalization, yeah, you're-- you can think of this as when I get the output of layer normalization, it's going to be-- it's going to sort of look nice and look similar to the next layer independent of what's gone on because it's going to be unit mean and standard deviation. So maybe that makes for a better thing to learn off of for the next layer. OK. Any questions for residual or layer norm? Yes. [INAUDIBLE] to subtract the [INAUDIBLE] the vector x? Yeah, that's a good question. When I subtract the scalar mu from the vector x, I broadcast mu to dimensionality d and remove mu from all d. Yeah, good point. Thank you. That was unclear. In the fourth bullet, maybe I'm confused, is it divided? Should it be divided by d or [INAUDIBLE]?? Sorry, can you repeat that. In the fourth bullet point when you're calculating the mean, is it divided by d or is it-- or maybe I'm just [INAUDIBLE]. I think it is divided by d. Yeah, cool. So this is the average deviation from the mean of all of the-- yeah. Yes. So if you have five words in the sentence [INAUDIBLE] norm, do you normalize based on the statistics of these five words or just one word [INAUDIBLE]? 
So the question is, if I have five words in the sequence, do I normalize by sort of aggregating the statistics to estimate mu and sigma across all the five words, share their statistics or do independently for each word? This is a great question, which I think in all the papers that discuss transformers is under-specified. You do not share across the five words, which is somewhat confusing to me. So each of the five words is done completely independently. You could have shared across the five words and said that my estimate of the statistics are just based on all five, but you do not. I can't pretend I understand totally why. [INAUDIBLE] extension of that [INAUDIBLE] for example, per batch or per output for the same position [INAUDIBLE]? So a similar question. The question is, if you have a batch of sequences, right? So just like we're doing batch-based training, do you for a single word, now we don't share across a sequence index for sharing the statistics, but do you share across the batch? And the answer is no. You also do not share across the batch. In fact, layer normalization was sort of invented as a replacement for batch normalization which did just that. And the issue with batch normalization is that now your forward pass sort of depends in a way that you don't like on examples that should be not related to your example. And so yeah, you don't share statistics across the batch. OK. Cool. OK, so now we have our full transformer decoder, and we have our blocks. So in this sort of slightly grayed out thing here that says repeat for number of encoder or, sorry, decoder blocks, each block consists of-- I pass it through self-attention. And then my add and norm, right? So I've got this residual connection here that goes around, and I've got the layer normalization there and then a feed-forward layer and then another add and norm. And so that set of four operations, I apply for some number of times, number of blocks, so that whole thing is called a single block. And that's it, that's the transformer decoder as it is. Cool. So that's a whole architecture right there. We've solved things like needing to represent position, we've solved things like not being able to look into the future, we've solved a lot of different optimization problems. You've got a question. Yes. [INAUDIBLE] is the multi-headed attention [INAUDIBLE]?? Yes. Yes, masked multi-head attention. Yeah. With the dot product scaling with the square root d over h as well. Yeah. So the question is, how do these models handle variable length inputs? Yeah, so if you have-- so the input to the GPU forward pass is going to be a constant length. So you're going to maybe pad to a constant length. And in order to not look at the future, the stuff that's sort happening in the future, you can mask out the pad tokens. Just like the masking that we showed for not looking at the future in general, you can just say set all of the attention weights to zero or the scores to negative infinity for all of the pad tokens. Yeah, exactly. So you can set everything to this maximum length. Now, in practice-- so the question was, do you set this length that you have everything be, be that maximum length? I mean, yes often, although you can save computation by setting it to something smaller and everything-- the math all still works out. You just have to code it properly so it can handle-- you set everything- instead of to n, you set it all to 5 if everything is shorter than length 5, and you save a lot of computation. 
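Putting the last few pieces together, here is a rough sketch of one decoder block as just described: masked self-attention, then add and norm, then the position-wise feed-forward network, then another add and norm. Single-headed attention and toy parameter names are used only to keep it short; this is a sketch of the structure, not reference code for the assignment.

```python
# One transformer-decoder block: masked attention + add & norm + FFN + add & norm.
import numpy as np

rng = np.random.default_rng(0)
n, d, d_ff = 6, 16, 64

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, gamma, beta, eps=1e-5):
    # Normalize each word vector over its d dimensions to zero mean / unit std,
    # then stretch and shift with the learned gamma and beta.
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return gamma * (x - mu) / (sigma + eps) + beta

def masked_self_attention(X, W_Q, W_K, W_V):
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    E = Q @ K.T / np.sqrt(X.shape[-1])
    future = np.triu(np.ones((len(X), len(X)), dtype=bool), k=1)
    # Padded positions, if any, could be masked out here in exactly the same way.
    return softmax(np.where(future, -1e9, E)) @ V

# toy parameters (a real model gives every block its own set)
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))
W1, W2 = rng.normal(size=(d, d_ff)), rng.normal(size=(d_ff, d))
gamma, beta = np.ones(d), np.zeros(d)

def decoder_block(X):
    # residual connection: the sub-layer's output is added back to its input
    X = layer_norm(X + masked_self_attention(X, W_Q, W_K, W_V), gamma, beta)
    X = layer_norm(X + np.maximum(0, X @ W1) @ W2, gamma, beta)
    return X

X = rng.normal(size=(n, d))                # embeddings plus position vectors
out = decoder_block(decoder_block(X))      # stacking blocks just means repeating this
print(out.shape)                           # (6, 16)
```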
All of the self-attention operations just work. So yeah. How many layers are in the feed-forward normally? There's one hidden layer in the feed-forward usually. Yeah. OK, I should move on. I've got a couple more things and not very much time. OK. But I'll be here after the class as well. So in the encoder-- so the transformer encoder is almost identical. But, again, we want bidirectional context, and so we just don't do the masking, right? So I've got-- in my multi-head attention here, I've got no masking. And so it's that easy to make the model bidirectional. OK? So that's easy. So that's called the transformer encoder. It's almost identical but no masking. And then finally, we've got the transformer encoder decoder, which is actually how the transformer was originally presented in this paper Attention Is All You Need. And this is when we want to have a bidirectional network. Here's the encoder. It takes in, say, my source sentence for machine translation; It's multi-headed attention is not masked. And I have a decoder to decode out my sentence. Now, but you'll see that this is slightly more complicated. I have my masked multi-head self-attention just like I had before and my decoder, but now I have an extra operation which is called cross attention, where I'm going to use my decoder vectors as my queries, but then I'll take the output of the encoder as my keys and values. So now for every word in the decoder, I'm looking at all the possible words in the output of all of the blocks of the encoder. Yes. [INAUDIBLE] is no longer [INAUDIBLE] like the keys and the values. How do we get a key and value separated from the output, because didn't we collapse those into the single output? So we will-- sorry, how will we get the keys and values out? How do we-- because when we have the output, didn't we collapse like the keys and values into a single output? So the output-- [INAUDIBLE] calculate. Yeah, the question is, how do you get the keys and values and queries out of this sort of single collapsed output? Now, remember, the output for each word is just this weighted average of the value vectors for the previous words. And then from that output, for the next layer, we apply a new key, query, and value transformation to each of them for the next layer of self-attention. So it's not actually that you're-- --key and the value to the output. It's not the output itself, when you're taking from the [INAUDIBLE]? Yeah, you apply the key matrix, the query matrix to the output of whatever came before it. Yeah. And so just in a little bit of math, right? We have these vectors h1 through hn, I'm going to call them, that are the output of the encoder, right, and then I've got vectors that are the output of the decoder. So I've got these z's I'm calling the output of the decoder, and then I simply define my keys and my values from the encoder vectors, these h's. So I take the h's, I apply a key matrix and a value matrix, and then I define the queries from my decoder. So my query is here. So this is why two of the arrows come from the encoder, and one of the arrows comes from the decoder. I've got my z's here and my queries, my keys and values from the encoder. OK. So that is it. I've got a couple of minutes, I want to discuss some of the results of transformers, and I'm happy to answer more questions about transformers after class. 
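And here is a small sketch of that cross-attention step, matching the math above: the decoder vectors z supply the queries while the encoder outputs h supply the keys and values, with no masking because the whole source is visible. Shapes and names are invented for illustration.

```python
# Cross-attention: queries from the decoder, keys and values from the encoder.
import numpy as np

rng = np.random.default_rng(0)
n_src, n_tgt, d = 7, 5, 16

H = rng.normal(size=(n_src, d))      # h_1..h_n: outputs of the encoder
Z = rng.normal(size=(n_tgt, d))      # z_1..z_m: outputs of the decoder's masked self-attention

W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))

Q = Z @ W_Q                          # queries come from the decoder
K = H @ W_K                          # keys come from the encoder
V = H @ W_V                          # values come from the encoder

E = Q @ K.T / np.sqrt(d)             # (n_tgt, n_src): each target word scores every source word
A = np.exp(E - E.max(axis=-1, keepdims=True))
A = A / A.sum(axis=-1, keepdims=True)
cross_out = A @ V                    # (n_tgt, d); no mask needed, the source is fully visible
print(cross_out.shape)
```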
So really, the original results of transformers, they had this big pitch for like, Oh, look, you can do way more computation because of parallelization; they got great results in machine translation. So you had transformers sort of doing quite well, although not like astoundingly better than existing machine translation systems, but they were significantly more efficient to train, right, Because you don't have this parallelization problem, you could compute on much more data much faster, and you could make use of faster GPUs much more. After that, there were things like document generation, where you had this sort of old standard of sequence to sequence models with LSTMs, and eventually, everything became sort of transformers all the way down. Transformers also enabled this revolution into pre-training, which we'll go over in lecture next class. And this sort of the efficiency, the parallelizability allows you to compute on tons and tons of data. And so after a certain point sort of on standard large benchmarks, everything became transformer-based. This ability to make use of lots and lots of data, lots and lots of compute just put transformers head and shoulders above LSTMs in, let's say, almost every sort of modern advancement in natural language processing. There are many sort of drawbacks and variants to transformers. The clearest one that people have tried to work on quite a bit is this quadratic compute problem. So this all pairs of interactions means that our sort of total computation for each block grows quadratically with the sequence length. And in a student's question, we heard that, well, as the sequence length becomes long, if I want to process a whole Wikipedia article, a whole novel, that becomes quite unfeasible. And actually, that's a step backwards in some sense. Because for recurrent neural networks, it only grew linearly with the sequence length. Other things people have tried to work on are sort of better position representations because the absolute index of a word is not really the best way maybe to represent its position in a sequence. And just to give you an intuition of quadratic sequence length, remember that we had this big matrix multiply here that resulted in this matrix of n by n, and computing this is like a big cost, it costs a lot of memory. And so there's been work-- yeah, and so if you think of the model dimensionality, as like 1,000, although today it gets much larger. And then for a short sequence of n is roughly 30, maybe if you're computing n squared times d, 30 isn't so bad, but if you had something like 50,000, then n squared becomes huge and sort of totally infeasible. So people have tried to map things down to a lower dimensional space to get rid of the quadratic computation. But in practice, as people have gone to things like GPT 3, ChatGPT, most of the computation doesn't show up in the self-attention, so people are wondering is it even necessary to get rid of the self-attention operations quadratic constraint? it's an open form of research whether this is sort of necessary. And then finally, there have been a ton of modifications to the transformer over the last five, four-ish years. And it turns out that the original transformer plus maybe a couple of modifications is pretty much the best thing there is still. There have been a couple of things that end up being important. Changing out the non-linearities in the feed-forward network ends up being important, but it's sort of a-- it's had lasting power so far. 
But I think it's ripe for people to come through and think about how to improve it in various ways. So pre-training is on Tuesday. Good luck on assignment four. And then yeah, we'll have the project proposal documents out tonight for you to talk about.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_2023_Python_Tutorial_Manasi_Sharma.txt
SPEAKER: All right. Hi, everyone. Welcome to the 224N Python review session. The goal of the session really will be to give you the basics of Python and NumPy in particular that you'll be using a lot in your second homework and the homeworks that will come after that as well. We're sort of taking this tutorial from the background of anyone who hasn't touched programming languages to some extent, but also for people who have. We'll be sort of going through a lot of that material very quickly, and we'll be progressing to NumPy as well. And as I mentioned, first and foremost, the session is really meant for the people who are here in person. So if you'd like me to slow down, speed up at any point, need time for clarifications, feel free to ask. It's really meant for you first here. And I really would like it to be sort of an interactive session as well. All right. So this is a topic-- these are the topics we'll be covering today. Going through, first of all, why Python is a language, why we have chosen it for sort of this course and in general, why do people prefer it to some extent for machine learning and natural language processing. Some basics of the language itself-- common data structures and then getting to the meat of it through NumPy which as I mentioned, you'll be extensively using in your homeworks going forward. And then some practical tips about how to use things in Python. All right. So first thing, why Python? So a lot of you who might have been first introduced to programming might have done Java before. A lot of people use Matlab in other fields as well. So why Python? Python is generally used for one, because it's a very high level language. It can look very, very English like and so it's really easy to work with for people, especially when they get started out. It has a lot of scientific computational functionality as well, similar to Matlab. So when you talk about NumPy, you'll see that it has a lot of frameworks that are very, very quick and efficient operations involving math or matrices. And that's very, very useful in applications such as deep learning. And for deep learning in particular, a lot of frameworks that people use, particularly for example, PyTorch and TensorFlow, interface directly with Python. And so for those main reasons, people generally tend to use Python than deep learning. OK. So the setup information is in the slides if you'd like to look at them offline. I will be sort of jumping over that for now because I want to get to the introduction to the language itself and then if we have time, come back to the setup information. A lot of it's pretty direct. You can walk through it. It gives you steps for how to install packages, what is a Conda environment, for example, and gets you set up with your first working Python environment. So you can sort of run simple and basic commands to get used to the language. But for now, I'm going to be skipping over this and coming back to it if we have time. All right. Language basics. So in Python, you have variables and these variables can take on multiple values. The assignment operation or the equal sign will allow you to assign a particular value to a variable. A nice thing with Python is you don't have to instantiate the type of the variable to begin with, and then only instantiate-- and only assign values of that type. So for example, in certain languages, we first say that this variable x is only going to be of type int and any value aside from that that's assigned to it will throw an error. 
Python is pretty flexible. So if I want to, I can reassign-- I can start with x is equal to 10. And then later on-- five lines later, I can say x is equal to high as a string and there would be no issue. You can do simple mathematical operations such as the plus and division signs. You can do exponentiation which is raising one value to another value. So x to the power of y for example, using the double asterisk. You can do type casting for float division. So if you want to ensure that your values are being divided resulting in a float value and not just dividing two integers, you can cast to different types like float. If you want something to be explicitly an int, you can also just put an int instead of the float with brackets around the result and that will give you an integer value. And then you can also do type casting to, for example, convert from integers to strings. So in this case, if I wanted to instead of doing 10 plus 3 as a mathematical operation, I just want to write out 10 plus 3, then I can convert the x and y values, for example, to strings and then add the plus sign as a character as well to create a string. And so a lot of these common operations you can look online as well-- people have lists for them-- and just see how they're sort of done in Python. All right. Some other quick things. So Boolean values, the true and the false, they're always used with capital letters, and in some other languages, they might be lowercase, so just one thing to know. Python also doesn't have a null value. The equivalent of a null value is None. So sometimes when you want to say that this value-- you want to return None saying that I'm not really doing anything here. You want to do checks, for example, in if statements to say that this doesn't have a value, then you can assign it to None. So None, sort of, functions as a null equivalent. So you're not really returning anything. It doesn't have a value. Not the same as 0. And another nice thing about Python is lists which are, sort of, mutable-- we'll come to that a little bit later-- but sort of mutable lists of objects, which means that you can change them. They can be of any type. So you can have a mixture of integers, None values, strings, et cetera. And yeah, functions can return the None value as well. And another quick thing, instead of using the double and-and (&&) as people might do in some other languages, with Python, I mentioned earlier, it's very English like so you can actually just write out if x is equal to 3 and, in English, y is equal to 4, then return true or something. It's quite nice that way. So you can use and/or and not. And then just the comparison operators of equal-equals-to and not-equals-to will check for equality and inequality. This one is pretty standard, I feel, across many languages and you can use them in Python as well. And yeah, remember just a quick thing. The equal-equals-to sign is different from the assignment operator. This one checks for equality. That one is just assigning a value. So a single equal sign versus two of them. All right. And then also in Python, you don't use brackets. So Python-- you can use basically spaces or tabs. So either indents of 2 or 4 to be able to break up what is contained within the function or contained within like an if statement, a for statement or any loops for example. And so the main thing is you can choose whether to do 2 or 4. You just have to be consistent throughout your entire code base. Otherwise, it will throw an error.
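A few runnable lines collecting the points above may help; the specific values are arbitrary and the snippet is a reconstruction, not the notebook from the session.

```python
# Dynamic typing, arithmetic, casting, booleans, None, and comparison vs. assignment.
x = 10            # x currently holds an int...
x = "hi"          # ...and can be reassigned to a string with no error

y, z = 10, 3
print(y / z)          # 3.333... (true division)
print(y ** z)         # 1000 (exponentiation with the double asterisk)
print(float(y) / z)   # explicit cast to float
print(int(y / z))     # cast the result back to an int: 3
print(str(y) + " + " + str(z))   # "10 + 3" as a string, not arithmetic

flag = True                       # capital T, unlike some languages
nothing = None                    # Python's stand-in for a null value
print(nothing is None)            # True

a, b = 3, 4
if a == 3 and not b == 5:         # '==' compares; '=' assigns
    print("and/or/not read like English")
```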
Now go to some common data structures, and for this, we'll transition to the Colab. So this one will sort of show you in real time. This is by the way, a Colab. A Colab is basically a Jupyter Notebook for those of you who are familiar with those that you can use that it's hosted on Google's servers. The really nice thing about Jupyter Notebooks is you don't have to run an entire file altogether. You can run it step by step into what are these called cells. So if you want to see an intermediate output, you can see that pretty easily, and that way-- and you can also write, for example, a lot of descriptions pertaining to cells, which is really, really nice to have as well. So a lot of people tend to use these when they're sort of starting out with their project and want to debug things. And Colab allows you to use these Jupyter Notebook type applications hosted on their servers for free basically. So anyone can create one of these and run their code. All right. So lists are mutable arrays. Mutable means that you can change them so that once you declare them, you can add to them, you can delete them and they're optimized for that purpose. So they expect to be changed very often. We'll come to what are called NumPy arrays later and those tend to be pretty much fixed. When you create a new one, you'd have-- when you change one, you actually have to create a new array, which will have the additional information. So this is highly optimized for changing things. If you know, for example, if you're in a loop, you're adding different elements to, let's say, a bigger entity, you'd want to use something like a list because you're going to be changing that very often. So let's see how they work. So we start off with a names array with Zach and J. You can index into the list by-- so what is this? Index into the list by index, which means that you can list out the elements in the list depending on what's called the index. So it's what place that value is at within the list. So 0 refers to the first element. So Python's what's called 0 indexed which means it starts with 0 and then it goes to 1. So here, 0 will be Zach. And then let's say I want to append something to the end. So to add something to the end of the list, the term is append not add. And so if I want to append, I can now create a separate list which is the original list itself with the added last element. And what would currently be the length of this? It would be 3 because you have three elements. And you can just quickly get that by using the len function. Not length. Just three letters-- L-E-N. All right. It's also really nice because Python has overloaded the plus operation to be able to concatenate lists. So here, I have a separate list, right? And all you need for a list definition is just brackets. So this is a separate list altogether, even though I haven't saved it in a variable. Just Abby and Kevin. And I can just do a plus equal to, which means that names is equal to names plus Abby and Kevin and this should output this full list. You can create lists by just putting the plain brackets or an existing list. And then as I mentioned earlier, your list can have a variety of types within them. So here this list contains an integer value, a list value-- so you can have a list of lists, as many sort of sublists as you like-- a float value and a none value. And this is completely valid within Python. Slicing refers to how you can access only parts of the list. 
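Here is a compact version of the list operations being demonstrated; the names mirror the spoken example, but the cells are reconstructed rather than copied, and slicing is shown separately after the next part.

```python
# Basic list operations: indexing, append, len, concatenation, mixed types.
names = ["Zach", "Jay"]        # lists are mutable: they expect to be changed
print(names[0])                # "Zach" -- indexing starts at 0

names.append("Richard")        # append adds to the end
print(len(names))              # 3

names += ["Abby", "Kevin"]     # '+' concatenates two lists
print(names)                   # ['Zach', 'Jay', 'Richard', 'Abby', 'Kevin']

mixed = [4, ["a", "sublist"], 3.14, None]   # any mixture of types is allowed
print(mixed)
```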
So if I only want, for example, in this numbers array, I only want 0, 1, 2, slicing is a way that you can extract only those parts. So the way slicing works is the first element is included, and the last element is excluded. So here I start with 0, 1, 2, 3. So 3 is not included. And so 0, 1, 2 will be printed out. There's also shorthands. So if you know that you're going to be starting with the first element of the array-- so if you know, I want 0, 1, 2 and it starts with 0-- then you don't need to even include the first index. You can just leave that out and include the last index, which would be excluded. So that would be blank colon 3. And same deal with the end. If you know that you want to take everything, let's say, from like 5 and 6 till the end of the array, you can start with what you would like-- so 0, 1, 2, 3, 4, 5 till the end-- and leave the end off. Fun fact-- so this colon, when you take just the colon, it will take everything in the list, but it will also create a duplicate in memory. That's a very slight, very useful thing to know, because sometimes when you pass a list or an array to a function in Python-- which is out of scope of this tutorial-- you only pass a reference to it. So if you change the array, that gets changed. This will create an entirely separate copy in memory of the exact same array. So if you make any changes to it, it won't affect your original array. So this is a very-- pretty neat way to do that. And then another fun thing that Python has which is pretty unique is you can index negatively. So negative indexing means you index from the back of the array. So minus 1 refers to the last element of the array. Minus 3 will refer to the third last element. And so what minus 1 will give you will be 6 in this case. Minus 3 colon will give you the last three elements, because you're starting with the minus 3 element-- so minus 3, minus 2, minus 1-- and going till the end. And then this one seems kind of confusing, right? 3 to minus 2. So what this will do is-- you start at index 3, so 0, 1, 2, 3. And then minus 1, minus 2 from the end are left off, because the end is excluded within the slice, so you would only get 3 and 4. So that's what this is. OK. That's about lists. Tuples are immutable arrays. So once you declare the values of these, they cannot be changed. So I start with-- you remember we started with the list of Zach and Jay. With tuples, you start with Zach and Jay and you can still access them. I can still print out names at index zero, same as I did with lists. But if I try to change it, in this case, it'll throw an error. So tuples, once you've instantiated them, they cannot be changed. And to create an empty tuple, you can either call tuple() or, oftentimes, you can just use the parentheses brackets. So you can just say, for example, as you did here, just parentheses to instantiate something. All right. And yeah, this one we'll come to a little bit later in shapes, but you can also have a tuple of a single value. And all you have to do there is just put the value and put a comma. So that just shows that you have a tuple, which is basically like an immutable array. So you can't change it. It's a list but only of one item. And that's here. OK. I'll quickly move to dictionaries. For those of you who might be familiar with other languages, this is the equivalent of like a hash map or a hash table. What this is useful for essentially is mapping one value to another in a really, really quick way.
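Before the dictionary examples, here's a quick sketch of the slicing, copying, and tuple behavior just described:

```python
numbers = [0, 1, 2, 3, 4, 5, 6]
print(numbers[0:3])    # [0, 1, 2] -- start included, end excluded
print(numbers[:3])     # same thing, shorthand for starting at 0
print(numbers[5:])     # [5, 6] -- everything from index 5 to the end
copy = numbers[:]      # a separate copy of the list, not just another reference
print(numbers[-1])     # 6 -- negative indices count from the back
print(numbers[-3:])    # [4, 5, 6]
print(numbers[3:-2])   # [3, 4]

names = ("Zach", "Jay")   # tuples are immutable
# names[0] = "Richard"    # this would raise a TypeError
empty = ()                # empty tuple
single = (10,)            # single-element tuple needs the trailing comma
```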
So if I want to map, for example, a string to an index, which you will happen to do a lot of in your homeworks, this is a really, really useful way to do that. And so what it does is you can instantiate this dictionary, and it says, corresponding, that Zach is going to correspond to this string value, whatever it is. And so anytime I want to retrieve the string value, I just use this dictionary. I index by it, which is what I do here, and then it outputs the corresponding value. And it does that really, really quickly. And yeah, it's really useful. Very, very commonly used, especially when you, for example, you have a list of strings, or a list of items and you want to have a corresponding index for them, because and as you'll see in NLP, oftentimes you're working with indices and numbers in particular. So it's a really great way to, sort of, move from like string formats to just like numerical index values. There's some other things you can do for dictionaries. You can check whether certain elements are in there. So if you, for example, try to index phonebook is equal to Muncy, it'll throw an error because there's no string that says Muncy in that phonebook dictionary. And so sometimes you might be wanting to do checks before you extract a value. And so this will just check, for example, if I do print Muncy in phonebook, it should say false. Or for example here, Kevin in phonebook, it should say false. While something that's actually in that dictionary, Zach will be true. OK. And then if you'd like to delete an entry from the dictionary, you can just do that using the del command. All right. Let's move to loops quickly. So loops are a really great way to optimize for the same kind of operation you're doing. It's also a great way to start to sequentially go over those list type or array type objects we were talking about earlier. You have a list of names, right? How do you access all of them? So loops are a really great way to do that. In Python, they've abstracted away a lot of the confusing parts and other languages that might be. You can really, for example, first index on numbers. So what you do is you have a range function that you call. So here you say range. And the range of the last number you'd want. So what this range function will return is 0, 1, 2, 3, 4. And that's what will be stored in this i value. And here it's just printing out that i value. So if I want to, for example, loop over the length of a list of size 10, I just have to do for i in range 10 and then index that corresponding part of the list. You technically don't even have to do that because in Python, you can just directly get the element of the list. So here I have a list of names where I have Zach, J and Richard. Instead of saying first the length of the list, and then doing this range operation, I can just directly say for name in names and then print out the names, and it will just directly get the element in each list. But sometimes you might want both. You might both want this element Zach as well as its position in the array. And for that, you can actually use this really helpful function called enumerate. And so enumerate will basically pair those two values, and it'll give you both the value, which is here in name, for example, and its corresponding index within the array-- both together so that's really, really convenient. Versus, for example, having to do this a little bit more complicated range operation where you first take the range, and then you index the list. How do you iterate over a dictionary? 
So for dictionaries, if you want to iterate over what's called the keys, so all of these first items that you first put into the dictionary, you can just iterate the same way you would a list. You just say for name and for example, phonebook and you can output the keys. If you want to iterate over what is stored in the list, which is called a value, you'd have to do the dictionary dot values. And if you want both, you use the dot items function. And so that will print out both of these. All right. So this is sort of covering the overarching, most commonly used sort of structures lists, dictionaries and then loops and how to efficiently use them within your code. We'll quickly be moving to the meat of what is really, really strong about Python and what you'll be using a lot for your coming homeworks, especially homework 2 which is NumPy. OK. So for NumPy, also, I'm going to be going to the Colab but just quickly wanted to mention what NumPy is. So NumPy is basically an optimized library for mathematical operations. People tend to like Matlab because it's very, very useful for these mathematical operations which people use in their research. Python's sort of solution to that is to have a separate library entirely where they make use of subroutines which are, sort of, like sublanguages-- sorry, subscripts that are written in a different language called C or C++ that are highly optimized for efficiency. So the reason C and C++ are much faster than Python is because they're closer to what's called machine language, which is what the computer will read. I mentioned earlier, one of the nice things about Python is it's kind of high level. It looks like English to some extent. We say, literally, like if x is equal to 1, or x is equal to 2, right? But that also means that there's a lot more translation required on the computer's part before it understands what you mean. And that's useful when we're writing out code, where we want to understand it, but it's a little bit less useful when you're sort of running a lot of operations on a lot of data. So the real benefit of something like NumPy is that if you have your memory and your data in a particular format, it will call these basically scripts or what are called subroutines in a different language, and it'll make them very, very fast. And so that's the real benefit of using NumPy. And almost everyone in sort of NLP is very, very familiar with this because you'll be running a lot of operations on, for example, a co-occurrence matrices, which are really, really big and it's very useful to have them optimized for time. So that's really the benefit of using NumPy. And NumPy, basically, it's involved for all these math and matrix and vector calculations. And it's different than a list. Although you can easily translate between a list and a NumPy array, NumPy arrays are specifically, as I mentioned, designed to be used in these subroutines. So they have a specific format. They're instantiated differently, and you can translate between this and sort of your standard list easily but to know that you can only do NumPy operations on NumPy arrays. You can't do NumPy operations on lists directly. You first have to convert them, which is really simple. You just use this NumPy.array function. But just know that they operate only on NumPy arrays. OK. So for NumPy, we're going to be going back to the Colab. 
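Before the NumPy part, here's a quick sketch pulling together the dictionary and loop patterns just described (the phone numbers are made up):

```python
phonebook = {"Zach": "12-37", "Jay": "34-23"}
print(phonebook["Zach"])          # fast lookup by key
print("Kevin" in phonebook)       # False -- check membership before indexing
del phonebook["Jay"]              # remove an entry

for i in range(5):                # 0, 1, 2, 3, 4
    print(i)

names = ["Zach", "Jay", "Richard"]
for name in names:                # iterate over elements directly
    print(name)
for i, name in enumerate(names):  # index and element together
    print(i, name)

for key in phonebook:             # iterates over the keys
    print(key)
for value in phonebook.values():  # iterates over the values
    print(value)
for key, value in phonebook.items():  # both at once
    print(key, value)
```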
And then as I mentioned earlier, the real strength of NumPy is it supports these large multi-dimensional arrays and matrices for very, very optimized high level mathematical functions. And just to go step back for a quick second, what is a matrix? Matrices are basically like rectangular structures of numbers that are used, and you can treat them with specific rules for operations between different kinds of things. So if you have a lot of data, instead of individually potentially multiplying things, if you can store them in this rectangular format, you have specific rules about how this matrix for example will interact with the different one. And by doing that, which is matrix multiplication or matrix math, you can do a wide variety of mathematical operations. A vector is generally-- this is conventional. None of these are hard and fast rules. But conventionally, a vector is a matrix in one dimension. So it's usually like a row vector or a column vector, which usually just means that it's a list of values in only one dimension. So it's like, for example, here, when I come down to x is equal to NumPy array of 1, 2, 3, that's a list in only one dimension versus, for example, Z-- when this is Z down here-- that is what's called a two dimensional array, because you have both rows, for example, like 6, 7 and then you have 8, 9. Versus, in this first one, you only have three values in one dimension. So that's the conventional difference between the two. Another convention is matrices generally referred to two dimensional objects. So as I mentioned, is like Z, this is two dimensional. You might have heard the word tensor also. Tensor is by convention, usually like higher dimensional objects. So instead of having two dimensions, 2 comma 2, you can have n dimensions. You can have 2, comma 2, comma 2, comma 2, comma 2, comma 2 for like five or six dimensions. And those are very valid to do mathematical operations on. And those are often colloquially called tensors. In addition, and this will be covered in the next tutorial, in PyTorch, those larger, sort of, tensors are also optimized for efficiency to be used on GPUs and so they're called tensor in a more concrete way because you're using these tensors with PyTorch and other sort of packages to directly do those quicker GPU operations on for deep learning. So those are, sort of-- that's a quick, sort of, terminology difference between the three. OK. So now let's start off with just some quick sort of representations of how are these matrices and vectors represented in NumPy. This, sort of, goes back to your question about what is the difference between 3 comma versus like 1, 3. So usually 3 comma in NumPy arrays usually just means that you have one list of 1 to 3 for example, or three values. Versus if you add another list on top of that, this one comma 3 essentially refers to the fact that there's a list of lists. So any time you have two dimensions, it always means that there's a list of lists and that being like a list of lists or for example, a row. So here, 1, 3 means that there's one row and then three columns. So it's saying there's one row of 3, 4, 5, essentially, and then each of those is like a column separately. You can easily reshape them. So these are basically the same format but from NumPy's perspective, you'll see a little bit later for operations such as broadcasting, you need to have it, for example, sometimes in this 1, 3 format or 3, 1 format. And also like as I said, 3 is basically just like it represents three numbers. 
1, 3 means like one row of three elements. 3, 1 will mean you have-- essentially in each column, you'll have a separate array. So you'll see boxes around each of them. There's an example that comes a little bit later in this Colab which will make it a little bit more clearer. So here, if you can see the difference between x and y, one of them has only one bracket, which just says it's one list, only one list of 1, 2, 3. The second one is two brackets, which says it's a list with only one list in it. So it's a list of a list. That's really the main difference between these sort of two representations. So I could have, let's say, a separate one. I'm going to call this a and I just do this. So it's the same sort of elements but this will be 1, 3 because it's showing that there's one outer list which shows the rows and then one inner list which will have each of those values. So the benefit will come when I'm coming to, a little bit later, which is broadcasting. And so essentially it will help you determine what dimensions you want to match against because sometimes you'd want to have 1, 3-- like, 1, 2, 3 applied only to rows in some other matrix. We'll come to that a little bit later. But sometimes you might want to have it only apply to columns. And so if I have a separate matrix, for example, of 00000000 and I want the resulting matrix to be, for example, 123123123 along the rows-- let me actually draw this out. It might be easier. So let's say I have the 00000000 and if I want to have a matrix that does 123123123 versus 123123123, the difference in how to generate these two will be the difference in how you represent their shape. It's the same 123 but the resulting array you're generating by repeating the 1, 2, 3 values requires a difference in shape. And so we'll come to that a little bit later, because this process of how do you generate these arrays is called broadcasting. But that's the real benefit of having an understanding of the shapes. The same 1, 2, 3 values are the same. It's just how they're sort of used with regards to other arrays. All right. So yeah, vectors can be represented as, sort of-- this is what I talked about earlier, as like n dimensions, n by 1, or 1 by n dimensions, and they can result in this different behavior kind of like this that I talked about. Matrices are usually in two dimensions represented as m by n. These are just two examples. For example, I generate, let's say-- and you can also reshape. So I start with, for example, this array, which is a list of 10. Oh, sorry. Let me import NumPy quickly. So I started off with this matrix a which is basically a one dimensional list of 10 values. I can reshape it into a 5 by 2 matrix. So you just have to make sure that your dimensions match, which means that you can multiply them together and get the original size. So if I start off with the 10 matrix, I can make a 2 by 5 matrix. I can make a 5 by 2 matrix. I can make a 10 by 1, 1 by 10. I can't make a, for example, 3 and 5 because it wouldn't fit into the original size. And for that, this operation called reshape is really useful. You might be wondering, why is there two parentheses? The way that reshape works is essentially it will take in a tuple. So remember what I talked about earlier with tuples is that these-- they are immutable objects and they're defined by parentheses. So the outer parentheses is representing what you're inputting to the function and what you're inputting is a tuple. So it uses a second set of parentheses. 
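Here is a small sketch of the shape and reshape behavior just described:

```python
import numpy as np

x = np.array([1, 2, 3])        # shape (3,)  -- a single flat list
y = np.array([[1, 2, 3]])      # shape (1, 3) -- a list containing one list
z = np.array([[1], [2], [3]])  # shape (3, 1) -- three rows of one element each
print(x.shape, y.shape, z.shape)

a = np.arange(10)              # 0 .. 9, shape (10,)
print(a.reshape((5, 2)))       # reshape takes a tuple, hence the two sets of parentheses
print(a.reshape((2, 5)))
# a.reshape((3, 5))            # error: 3 * 5 doesn't match the original 10 elements
```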
So now let's go to some array operations. So I started off with this array x. When you apply simple operations, for example, a max operation, sometimes you might want the max of the entire array. So if I do the max of the entire array-- what's the max value of the entire array, by the way? Just the entire thing. Yes, 6, right? So if I just do np.max of x, it'll return one value. It'll return 6. Well, let's say I want the max of every row. In each of these rows, I say I want, let's say, the max of each row. I want 2 and then 4 and then 6. How do you do that? And so NumPy always has, like usually in most of their functions, an axis variable. And what the axis variable will do is it'll tell you which of these dimensions you want to take the max over. And the way to think about it is-- this is going to be a little bit tricky, but the way people describe it is the axis is what you want to apply your function over. What you want to reduce over. And what that means is-- I print out the shape of the original array. It's 3 by 2. I want to apply axis 1. As you remember, NumPy is 0 indexed, so the axes will be 0 and 1. So I want to apply the max over the second dimension. The second dimension means that for each of these, essentially, the row dimension is the first dimension. So it's all along the rows. I'm going to be comparing columns. And so compare this entire column to this entire column. And so just remember for axes, usually axis 0 refers to the row axis and axis 1 refers to the column axis. If you don't even want to remember that, you can just remember, from the original dimensions, which of these it's referring to, and that's the dimension you want to compare over or reduce over. So it can be a little bit harder to wrap your head around. Usually the best way to get used to it is to just play with a bunch of operations of min, max and things like that. But just remember, the axis is what you want to compare over, not the resulting thing. So axis 1 means your column. I want to compare between the columns. I want to, for example, compare 1 to 2, 3 to 4, 5 to 6. Does that make sense? OK. And what this will do is, if I just do np.max with that axis, it'll just return-- basically since I'm comparing these columns, it'll just return a resulting column. And so as I mentioned, for the max over axis 1, you get three values, because you're comparing over these columns, and each column has three values. If I'm comparing over rows, as we mentioned, I get two values, right? And so the shape will just be the tuple with the comma, which is just indicating that it's just a list. It's not a list of lists. It's just a list. But let's say I want a list of lists. Maybe I want to do those operations I talked about earlier. Instead of reshaping-- which is always there, it's always an option-- you can also use this feature called keepdims. And what that will do is it'll take the original dimensions, which is two dimensions because you have 3, 2, there's two of them, and it'll keep that consistent. So it'll be 3, 1. But it just means that instead of returning just the extracted column, which is just a list, it'll basically keep the column in the context of the original sort of x and it'll keep it as like a two dimensional value. All right. Now these are just some operations. So in NumPy, you can use an asterisk as an element wise multiplication. So an asterisk means that I'm going to be multiplying every single value with every single corresponding value in another matrix, and you need your matrices to also be the same size for this one.
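Here's a quick sketch of the max, axis, and keepdims behavior just described; the element-wise multiplication discussion continues right after.

```python
import numpy as np

x = np.array([[1, 2], [3, 4], [5, 6]])   # shape (3, 2)
print(np.max(x))                          # 6 -- max over the whole array
print(np.max(x, axis=1))                  # [2 4 6] -- reduce over the column axis, one value per row
print(np.max(x, axis=0))                  # [5 6] -- reduce over the row axis, one value per column
print(np.max(x, axis=1, keepdims=True))   # shape (3, 1) instead of (3,)
```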
So this one, it's basically an element wise multiplication. It's not a matrix multiplication, so you need to have them be the exact same size. So this will multiply, for example, 1 into 3, 2 into 3, 3 into 3 and 4 into 3. All right. You can also do matrix multiplication, which is a different operation entirely. For those of you who aren't familiar with matrix multiplication, you would basically be multiplying a row of one matrix with a column of another matrix, and for that to work, you need to have the second dimension of the first array be equal to the first dimension of the second array. So for matrix multiplication, if I have A into B and B into C shaped matrices, those two Bs have to be equal for matrix multiplication. Just something to keep in mind, because oftentimes if you're doing matrix multiplication, you need-- you have to make sure that these dimensions are the same, which means that, for example, this is a valid operation, but this can sometimes throw an error. Sometimes. So it's just important to make sure that these inner dimensions are exactly equal. You can actually just print out the shapes and make sure that these are equal before doing matrix multiplication. And then for matrix multiplication, there's a couple of functions you can use. The first one is just np.matmul, which is NumPy's matrix multiplication. You can also just use the @ operator. And both of those are overloaded. You can choose whichever one. They'll result in the same exact operation. And just to quickly show what this will do-- it'll take a row, like 1, 2, against a column of the other matrix, so it'll do 1 into 3 and 2 into 3, and add those two values. So that's what matrix multiplication will do. OK. And then dot products-- what a dot product does is it takes two vectors. So usually it operates on vectors. And a vector, as I mentioned, is just like a one dimensional matrix. So it's just basically 3 cross 1, for example, or 4 cross 1. It'll element wise multiply between two different vectors, and it'll sum up those values. And so here, what a dot product would do would be like 1 into 1 plus 2 into 10 plus 3 into 100. And for NumPy, you can just do np.dot with both of those vectors. This one is just a side note on how you would want the structure of the dot product to be. For arrays that are more-- OK. So to phrase this the best way. For single dimensional vectors, this operation works directly. Anytime it's a multiple dimensional matrix, then it treats it as a matrix multiplication. That's the np.dot function. So for a 2 by 2 matrix dotted with a 2 by 2 matrix, it's not going to return the sum. It's going to return the matrix multiplication. That's just something to keep in mind. If you want to make sure that your dot product is happening in the correct way, you would want to make sure-- sort of similar to what I was talking about earlier-- here, I think this is the best way to show it. OK. So you would want, like what I mentioned, the last dimension of the first one to match with the first dimension of the next one, because it's treating it as a matrix multiplication. Here, the error that it's throwing is 3, 2 combined with 3, and so the way to sort of fix that would be to, for example, switch the two so you'd have 2, 3 and then 3 comma. It's really a dimension matching thing at this point.
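Here's a sketch of the element-wise versus matrix-multiplication operations covered above (the values are just illustrative):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.full((2, 2), 3)              # [[3, 3], [3, 3]]
print(A * B)                        # element-wise: [[3, 6], [9, 12]]

C = np.ones((2, 3))
print(np.matmul(A, C))              # matrix multiplication, result shape (2, 3)
print(A @ C)                        # @ is the same operation

u = np.array([1, 2, 3])
v = np.array([1, 10, 100])
print(np.dot(u, v))                 # 321 -- dot product of two 1-D vectors
# For 2-D inputs, np.dot behaves like matrix multiplication,
# so the inner dimensions have to match.
```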
So it can be a little bit confusing, but the main thing to keep in mind is, for single dimensional vectors, you can just do np.dot directly and it'll give you the dot product value. For higher dimensional matrices, it treats it as a matrix multiplication. And so if you still want to-- for those higher dimensional values, to ensure that you're getting a dot product, you'd have to make sure that the dimensions are aligned similar to these. So anything that's 2 by 2 plus for both-- any matrix that doesn't have a single dimension in any of them, yes, it would treat it as matrix matmul. The same thing. OK. All right. OK. I'm going to move to indexing. So similar to what I was talking about earlier, remember with lists I was saying if you just do the colon, it'll take the same array? Same deal here-- the colon just means you take everything from the original array. One caveat, though: for NumPy arrays, a plain colon slice is actually a view of the same memory rather than a deep copy, unlike the list case, so if you want a completely separate copy in memory you'd use something like x.copy(). OK. Now I'm going to go into a bit more detail about how you want to index quickly. So if I, for example, have, let's say, this 3 by 4 matrix and I only want to select the 0 and the second rows, how would I do that? So what's useful is that with a NumPy array, you can treat different dimensions differently for indexing. So a colon means you select everything in that dimension, which, for example, here there's a colon in the second dimension, which means I'm taking all of the column values. Versus what's in the first dimension here, it's saying a NumPy array of 0 and 2. So it's saying only the 0 index and only the 2 index, which means only the zeroth row and only the second row. So what this would look like would be something like-- I have a matrix. OK. I have a matrix and I only want to select the zeroth row and I only want to select the second row. Zero and second. And everything in the columns. All right. And then similarly, for example, I can also be selective in the column dimension-- say, take particular rows and only a particular column-- and I can do that. So you can basically treat them separately. You can think, how many columns do I want? How many rows do I want? And then index those separately. And that goes for as many dimensions as you want in your entire tensor. Some nice things also-- if I wanted, for example-- let me print that x here. I'll just generate the x. OK. So this is x. So if I want to take all the values of x that are above 0.5, for example, I can do that by using what's called Boolean indexing. So I just basically say x indexed by everything in x that's bigger than 0.5. So it's pretty direct and it'll just output all the values in this entire array that are bigger than 0.5. All right. This one is also another way to do reshaping. So as I kind of mentioned earlier, sometimes you have this list of three elements and you want to reshape it to a 3 by 1 array, for example. You can also use what's called np.newaxis. This will essentially add another axis in whatever dimension you want. So if I want to go from like this 3 by 4 array to a 3 by 4 by 1, then I can just add an np.newaxis there. An even simpler way to think about it would be like a 2 comma to a 2, 1. And so it's just-- it's another way to do what essentially would be the reshaping operation. Does that make sense? Also, what this would look like, for example-- let me just make it a little bit more concrete. So you see, I have this list.
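Here's a quick sketch of the indexing and np.newaxis operations just described; the concrete list-of-lists example picks up again right after this.

```python
import numpy as np

x = np.arange(12).reshape((3, 4))
view = x[:, :]                     # for NumPy arrays a plain slice is a view, not a copy
separate = x.copy()                # an explicit copy if you need separate memory
print(x[np.array([0, 2]), :])      # rows 0 and 2, all columns
print(x[x > 5])                    # Boolean indexing: all values greater than 5

v = np.array([1, 2])               # shape (2,)
print(v[:, np.newaxis].shape)      # (2, 1) -- add an axis, like a reshape
print(x[:, :, np.newaxis].shape)   # (3, 4, 1)
```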
I have a singular list and then in that list, I have a list of lists. So I have a list with element 1 and a list with element 2. So this is what that reshape operation will do, and what np.newaxis will enable you to do as well. All right. I think we are good for time. So the last main topic we'll be covering is broadcasting. And what's really great about broadcasting is it'll allow you to operate with NumPy arrays that are of different shapes-- if many operations in them can be repeated, it allows for that in a very efficient manner. And this is actually one of the most, I would say, useful things about NumPy and one of its defining features. And what that means is, for example in this case, right? If we go back to this example that I had-- I start off with the array of zeros. How do I generate this array versus how do I generate this array, right? Instead of me saying, OK, element 00 plus 1, element 01 plus 2, all of that stuff, right? Instead of doing that one by one, what broadcasting allows me to do is I can have only one vector of 1, 2, 3 and depending on how I do the broadcasting, which I'll come to in a second, I can duplicate it along the row dimension or I can duplicate it along the column dimension. And NumPy allows for that. It'll do that on its own in the back end. And so that's really what broadcasting means-- I don't need to, for example, create a new array to begin with, which is already like this, and then add those two together. I can just duplicate this and get this. All right. So now some rules for broadcasting. And let me just quickly visually also show what broadcasting will do. Oh, sorry. So broadcasting-- this is a pretty good visual analogy. I have this 1, 2, 3 vector, right, and I want to basically add, let's say, only the columns with this 1, 2, 3 vector. So what broadcasting allows you to do is you only pass these two values in, and on the back end, it will duplicate this along the column dimension. So let's say I have 123123123 and then it'll do the addition. Similarly, if I pass it a vector 1, 2, 3, 4 and I want it to be added to each of the rows instead of each of the columns, it'll be able to do that by sort of duplicating it on the back end. So this is visually what's happening with broadcasting. All right. Now some rules. So how does NumPy know when and how to do broadcasting? So the main two rules to keep in mind for broadcasting are: one, it can only happen if all of the dimensions, every single dimension between two arrays, are compatible. And what is compatible? Either the dimension values are equal or one of them is equal to 1. And that is the only rule required. So for example, I start off with this x array. I have this 3 by 4 x array. Will y, which is 3 comma 1, be compatible? Yes, it will be. Why? Because you have 3 in the first dimension between the two, which are the same, and in the second dimension, you have 4 and you have 1. So those are compatible values. And so what this tells NumPy on the back end is-- I'm doing, for example, an addition operation x plus y. It knows that OK, 3 and 3 are the same, but 4 and 1 are not the same. One of them is 1. So I need to duplicate this y along the second dimension, which means I need to duplicate it along the column dimension. And once it does that, it duplicates it, it'll get a 3 comma 4 array, and then it can do the addition. And it does that really fast.
So it's better to use broadcasting in this way then for you to create a separate array already duplicated and then add them. Similarly, I have this z array, which is 1, 4. What x into z will do is first it'll check OK, 3, 1. OK, is that compatible? Yes, because you have 3 in one dimension, and you have one in the second and 4 and 4 are compatible. OK. So let's say I know that these two are compatible. In the second dimension, I don't need to change anything. In the first dimension, it'll know to duplicate them, basically. So it'll know to duplicate z and so add it three times in the row dimension. Create a separate array and then multiply those two. So this is giving you an example of saying I started off with x. I have y and then the final shape will be 3, 4. So a lot of times in deep learning you will have the same-- you'll have different batches of different images coming in, but you want to apply, let's say, the same weight matrix to all of them. And instead of duplicating that weight matrix a hundred or even like potentially, depending on the size of your batch size like 1,000 times and then adding those together, you use the same matrix, and it'll know OK, if I'm going to be duplicating over the batch dimension, it'll do that for you on the back end. So it's used a lot of times in deep learning because of this. And basically in your second homework, that's basically what you'll be doing. You'll be implementing a feed forward network in NumPy and it'll say you have this W matrix, you have this B matrix which is a bias-- we'll come to those in class-- and it'll ask you to implement that in NumPy because that's basically what you're doing. Is that you have this input image, you have a weight matrix, which will somehow scale it to an output and that weight matrix will be applied to multiple images in your batch. And those images can be different, but their sizes will be the same and it's optimized for that. OK. So this is just more examples of the same thing. Your final thing that you'll be coming to is the size of 3, 4. Let's see. This one's the example that I showed right here, which is that I have this array of let's say zeros. I have this NumPy, this B array of size-- what size would this be? Yes, good. Because you have one outer list and inside this, you have one inner list. So it's just basically one row and then three values inside so yes. And so would this be compatible? Yes, and so it'll know basically to duplicate over the row dimension. And so you're going to get duplicates in the row dimensions. You're going to get 123123123, and that's what's happening here. So these are, for example, a little bit-- sometimes it says more complex behavior. What this basically just means is that if I have this B vector, which is 3, 1. If I'm doing this B plus B dot transpose-- by the way, transpose is just changing the dimensions. It's switching them. So if I have a 2 by 3 matrix, a transpose will be a 3 by 2 matrix. What that means visually is something like your row and rows and like column dimensions will get switched. 6 goes to, I believe, it's like 1, 2, 3, 4, 5, 6. So like three rows versus like three columns. And what this is just saying is that a 3 by 1 and a 1 by 3-- both of those vectors will be compatible because remember in each dimension, it's either the same or a 1. And so it knows to duplicate over both of those dimensions, and that's what's happening here. OK. So I think we are right at time. 
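Here's a sketch of the broadcasting behavior described above (the shapes and values are just illustrative):

```python
import numpy as np

x = np.zeros((3, 3))
row = np.array([[1, 2, 3]])        # shape (1, 3)
col = np.array([[1], [2], [3]])    # shape (3, 1)

print(x + row)   # (3, 3) + (1, 3): the row gets repeated down the rows
# [[1. 2. 3.]
#  [1. 2. 3.]
#  [1. 2. 3.]]

print(x + col)   # (3, 3) + (3, 1): the column gets repeated across the columns
# [[1. 1. 1.]
#  [2. 2. 2.]
#  [3. 3. 3.]]

b = np.array([[1], [2], [3]])      # shape (3, 1)
print(b + b.T)   # (3, 1) + (1, 3) broadcasts both ways to a (3, 3) result
```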
And what I would recommend is basically playing with variations of this for broadcasting and seeing what happens-- just remember, the two rules for broadcasting are just that it has to be compatible, meaning in each dimension it's either the same value or it's 1. And whatever is the 1 dimension is what's going to be duplicated over on the back end. So yeah, it's not going to be compatible if they're just divisible, for example. So if you have, let's say, 6 and 3, that's not compatible. You can reshape it and then see-- there's tricks you can use where you're thinking, on the back end, how do I want this data to be multiplied? You can maybe reshape everything into a 1 by 18 matrix and then multiply everything and then reshape it back. That's what you can do, but you can never just directly make, for example, 6 by 3 compatible. OK. So I think let's wrap up. This is just a quick example of another use of efficient NumPy code. Quick note-- never, or preferably, don't use loops whenever you're dealing with large data matrices. Mostly because loops are almost always about 100 times slower. NumPy is usually very, very efficient. And so this is just an example of what you can accomplish with NumPy and the same thing using loops. So what this is saying is that I have an x matrix of size 1,000 by 1,000 and I want to, let's say, add 5 to everything from row 100 onwards. So visually, what that will look like is something like-- I have this full matrix and I want everything here basically to be added with plus 5. Then in a loop format, I can basically loop over the first dimension from 100 onwards and do that, or in NumPy, I can basically do what's called np.arange, which will generate integers-- basically 1, 2, 3, 4, 5, 6, all the way up to whatever values you ask for. In this case, it's between 100 and 1,000. So it'll start with 100, 101, 102 all the way to 1,000, use that to index the first dimension, and then just add 5 to that. So this is just an example of how you would switch from using loops to using NumPy and it's a lot, lot faster.
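As a rough sketch of that last loop-versus-NumPy comparison (the exact speedup will vary by machine):

```python
import numpy as np

x = np.random.rand(1000, 1000)

# Loop version: add 5 to every row from row 100 onward, one row at a time.
y = x.copy()
for i in range(100, 1000):
    y[i, :] = y[i, :] + 5

# Vectorized version: index all of those rows at once and add 5.
z = x.copy()
z[np.arange(100, 1000), :] = z[np.arange(100, 1000), :] + 5

print(np.allclose(y, z))   # True -- same result, but the NumPy version runs much faster
```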
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2021_Lecture_15_Add_Knowledge_to_Language_Models.txt
Welcome to CS224N lecture 15. So I'm Megan. And I'm one of the CAs in this course. And I'm also a PhD student working at [INAUDIBLE]. And today I'll be talking about integrating knowledge in language models. So some quick reminders. Your project milestones were due today. So hopefully, you turned those in already or will be turning them in the next couple of days. And we'll try to give you feedback on those as fast as possible. So something to be aware of is that the change of grading basis and course withdrawal deadline is this Friday. So if you want to make any change in your grade, make sure you do that by then. And we'll be getting you the grades back on assignment 5 by then as well, in case that's helpful in making your decision. And finally, your final projects are due in two weeks, so hopefully those are going smoothly. So the topic of the day is integrating knowledge in language models. You've seen a bit about this idea in assignment 5 and also in Colin [INAUDIBLE] lecture last class. So in assignment 5, the task was to train a model to predict the birthplace of a person, given their name. And you saw that by pretraining on a larger data set, you're actually able to do better on this task since you could encode some real knowledge into the language model. And then last lecture, Colin [INAUDIBLE] presented how T5 could actually be fine-tuned for a closed domain question answering task such that you can give T5 a natural language question, and it will return an answer. So today we'll be building on these threads and looking at techniques that researchers have recently been developing to increase the amount of knowledge in language models. So we're going to start with a quick recap of language models just to make sure we're all on the same page. Then we're going to talk about what types of knowledge language models can already encode, and what they might struggle on. We'll also motivate why researchers are interested in increasing the amount of knowledge in language models and what this could enable for future systems if we have language models that can actually reliably recall knowledge. We'll talk about three broad classes of techniques that researchers have been using to add knowledge to language models. These include adding pretrained entity embeddings, using an external memory or key-value store, or even just modifying the training data. And for each of these techniques, we'll talk about at least one recent work that used that technique. So hopefully, it's clear to see how to actually employ it in practice. And then finally, we'll wrap up by talking about how to evaluate the knowledge in language models and the challenges that come up when trying to do this. So let's dive right in. We're going to start by talking about standard language models. You learned about these at the beginning of the course. And the task is to predict the next word in a sequence of text and to compute the probability of a sequence. So you may remember the example, the students opened their blank. And we talked about how that could be minds, exams, or books, for instance. And the task of a standard language model is to predict the most likely next word in the sequence. A couple of lectures ago, John also introduced the notion of masked language models. And so instead of predicting the next word in a sequence of text, the task is to predict the masked token. And this is done using bidirectional context. So you may remember the example, I masked the mask.
And the goal of the masked language model is to predict the most likely token for each of the masked out words. So maybe, I went to the store. So while there are some differences in these two types of language models-- whether you're predicting the next word or whether you're predicting the masked out token-- they're similar in that they can both be trained over large amounts of unlabeled text. And this is one of the reasons why they've been so widely adopted. They don't require any human annotated data. So you've seen that language models can be used for a variety of tasks, from summarization to dialogue to fluency evaluation. The tasks involved would be generating text or evaluating the probability of text. And more recently, we've seen that language models can also be used to generate pretrained representations of text that encode some notion of language understanding and have been shown to be widely useful for different downstream NLP tasks. And then finally, today, we're going to touch on this idea that if language models are trained over massive amounts of text, can they even be used as a knowledge base? So we're going to start by looking at what types of factual knowledge a language model might already know. And these examples are taken from a paper by Petroni, et al in EMNLP a couple of years ago. And the goal is to test the factual or commonsense knowledge in existing language models such as BERT-Large. So let's check out what BERT-Large predicts. iPod touch is produced by Apple. London Jazz Festival is located in London. Dani Alves plays with Santos. Carl III used to communicate in German. And ravens can fly. So here, we have the correct predictions in green and incorrect predictions in red. And if you know anything about sports, you may know that Dani Alves is a soccer player. Santos is a soccer team. Here they were hoping that it would predict Barcelona. Because at least at the time of this data set, apparently he played for Barcelona. And Carl III actually used to communicate in Swedish, not German. So what's good about these examples is the predictions are generally reasonable. If you didn't know the ground truth, they all make sense. When you want to predict a language, you do, in fact, predict a language. But of course, they're not all factually correct. So why did this happen? Well, for one, the fact might not have been seen in training. And you can't expect the language model to do more than recall facts that it has seen in training. It can't make up facts about the world, for instance. It's also possible that the fact is just really rare. So maybe the language model has seen the fact during training, but it hasn't seen it enough times to actually memorize the fact. And the last issue is a little more subtle, which is that a model might just be very sensitive to the phrasing of the fill in the blank statement. And so for example, you might have statements like, x was created in blank, that the model can't predict correctly. But if you change it to, x was made in blank, suddenly it can predict it correctly. And we'll come back to this when we talk about how to actually evaluate the knowledge in these language models. So this inability to reliably recall knowledge is a key challenge facing language models today, and it will be the focus of this talk. Recent works have found that language models can recover some knowledge, including the work that Colin presented last class. They've had very encouraging results.
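As a quick side note, fill-in-the-blank probes like the ones above are easy to try yourself. Here is a minimal sketch using the Hugging Face transformers library (this assumes the library is installed; the model name and prompt are just illustrative, not the exact setup from the Petroni et al. paper):

```python
from transformers import pipeline

# Load a masked language model and ask it to fill in the blank.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("iPod Touch is produced by [MASK]."):
    print(prediction["token_str"], prediction["score"])
```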
But there's still a way to go as we saw with the fill in the blank statements and with these challenges that we just discussed above. So as a result, the past couple of years have had a ton of rapid progress in this area of research in terms of trying to figure out how to actually encode more knowledge in language models. So I also want to motivate why researchers are interested in building language models that can more reliably recall knowledge. And one of these reasons is that the pretrained representations are used in a variety of downstream tasks. And some of these downstream tasks are knowledge intensive. So for instance, you might have a downstream task to extract the relations between two entities in a sentence. And this is commonly known as relation extraction. And this is much easier if you have some knowledge of the entities, which could be potentially provided by this pretrained language model representation. And when we talk about evaluation, we'll talk about what types of tasks are most likely to benefit from these knowledge-rich pretrained representations. And then as a stretch goal, some researchers are starting to propose the idea that, can language models actually ultimately be used to replace traditional knowledge bases? So instead of querying a knowledge base for a fact like you might right now with SQL, you'd query a language model with a natural language prompt. And of course, this does require the language model to have high quality on recalling facts. So we might not be there yet, but it's an interesting direction for us to be moving towards. So I want to make it super clear what I mean by a knowledge base. Here we're just talking about a knowledge graph where the nodes in the graph would be entities, and the edges are going to be relations between the entities. So for example, here we have a subset of a knowledge graph for Franklin D. Roosevelt. And you see the information about his spouse, his place of birth, his date of birth, and so on. An important thing to note is this is a structured way of storing the knowledge since it's just in a graph form. And you can actually describe these graphs with knowledge graph triples, which will be an important vocabulary word throughout this talk. So a knowledge graph triple would consist of a subject entity, a relation, and an object entity. So for instance, here we might have Franklin D. Roosevelt, date of birth, January 30, 1882, and that would form a knowledge graph triple. We'll also refer to this as a head entity, a relation, and a tail entity. So Wikidata is one very popular knowledge base you might come across if you're working in this area. It's a free knowledge base that's actually populated by humans. So they're filling in these relations and entities. And it's also multilingual. So if you want information from this knowledge base, what you'd do is you'd write a SQL query. This is a simplified one. But the idea is, if you'd want to figure out the date of birth of Franklin Roosevelt, you would write a query like the following. Now if instead you want to use a language model as the knowledge base, you'll have something like this diagram that you've actually probably seen in several lectures now. And the idea is you're training a language model over this unstructured text. And then you'll use the language model to just answer these natural language query statements. So here, this is the work on T5, where they're training T5 over unstructured natural language text with the span corruption task.
And then they're asking T5, when was Franklin D Roosevelt born? And the idea is T5 will produce a textual answer. So you can see this contrasts very much with the old approach of using a traditional knowledge base, where the knowledge base is structured and you have these SQL statements to query it. So what are the advantages of using language models over traditional knowledge bases? And why might people think this could be a good idea? Well, for one, the language models are pretrained over large amounts of unstructured and unlabeled text, whereas traditional knowledge bases require manual annotation-- like with Wikidata, people are actually populating it-- or complex NLP pipelines to extract from unstructured text into a structured form that forms the knowledge base. Language models can also support more flexible natural language queries. So if we take the example, what does the final F in the song UFOF stand for, a knowledge base probably won't have a field for "final F". So it won't be able to answer your query. But there's a chance that a language model could actually learn and have a response for this natural language query. They also had a less extreme example, in this paper by Petroni and others, where maybe your relation in your knowledge base is "is works for", and then you ask for "is working for", and the knowledge base doesn't have an exact match on the field, and so it returns an empty response. And it's reasonable to believe that your language model could figure out that these relations are similar. So if I know the answer to one of them, I probably know the answer to the other. Of course, it's not all advantages. There are also many open challenges in using language models as knowledge bases. So for one, it's harder to interpret. When a traditional knowledge base produces an answer, there's actually provenance information associated with why it returned that particular answer. But with a language model, it's really not clear why it might produce a prediction. The knowledge is just encoded in the parameters of the model. It's also harder to trust. So you saw this in assignment 5, where the language model could produce realistic predictions, but they are incorrect. So it's not easy to know when the language model actually knows a fact versus when it's using biases to make its prediction. And in the case of the traditional knowledge base, if it doesn't know a fact, it's just going to have an empty response. And then finally, language models are harder to modify than knowledge bases. So in a knowledge base, if you want to update a fact, you just change the fact directly in the structured data. But in a language model, it's not quite clear how you would do this. You could fine-tune the model longer on the updated data, but how do you know if it still has some memorization of the old fact? So there are a lot of open challenges to this goal of actually using language models as traditional knowledge bases. But hopefully, you'll see why some people think this could actually be a good idea and why researchers are interested in training language models that can actually integrate more knowledge. So that brings us to section two of the talk. So I want to pause here just in case there's any questions. OK? I think that's OK. Yeah. OK. Awesome. So now we're going to be talking about what techniques researchers are using to actually add more knowledge to language models. So we're going to talk about three broad classes of techniques. This is by no means exhaustive.
But hopefully, it gives you a good overview so that if you want to dive deeper, you can. So we'll start by talking about adding pretrained entity embeddings. And for each section, we'll focus on the first work that you see in the bullets. But we'll also talk about, briefly, some of the variants. So you see how the works within each class can differ and what knobs you can turn. So for adding pretrained embeddings, we first need to figure out what pretrained embeddings would actually be the most useful to add knowledge to language models. And this can start with the observation that facts about the world are usually in terms of entities. So if we have a fact like Washington was the first president of the United States, we have the entities Washington and United States. But pretrained word embeddings don't have this notion of entities. So we'd have different word embeddings for USA, United States of America, and America, even though these all refer to the same entity. And this makes it challenging for the language model to actually learn any representations over these entities, since they may be referred to in many ways in the text. So what if instead, we have a single embedding per entity? And we'll refer to these as entity embeddings. So now you'd have a single entity embedding for USA, United States of America, and America. And whenever you see a phrase in text referring to this entity, you would use the same entity embedding. And these entity embeddings can actually be pretrained to encode this factual knowledge about the world. And this first class of techniques we'll be looking at is about how you actually best use these pretrained entity embeddings in a language model. So I need to make a quick note that these entity embeddings are only useful to a language model if you can also do another NLP task, called entity linking, well. So I'm going to take a quick aside and explain what entity linking is. So the definition of entity linking is to link mentions in text to entities in a knowledge base. I like to think about this in terms of how we use word embeddings. So if you want to use word embeddings, and you have a sentence, you're going to first tokenize that sentence into words. And then for each word, you're going to look up its corresponding ID in some word embedding matrix. And now, you have your word embedding. Well, for entity embeddings, the dictionary lookup isn't so easy. You might have sentences like, Washington is the first president of United States. Well, Washington has two different candidates. Are we talking about George Washington? Or are we talking about Washington State? And these are different entities that have different entity embeddings. And the QIDs here would just be their identifiers in Wikidata. And then United States just has a single entity. So the task of entity linking is to correctly resolve these ambiguous mentions. What entity do they actually link to in a knowledge base? And there are many different ways you can do this entity linking. So one way you might be able to do this is to figure out that, oh, I see the context word of president, so Washington probably links to George Washington. Just some more definitions. We're going to refer to Washington as a mention, the United States as a mention, and then the things that the mention could link to-- so the two options for Washington-- are going to be candidates. So this is a whole research area of its own. And I encourage you to check out the resources at the bottom if you're interested in learning more.
But right now the most important thing to understand is that entity linking is what is going to tell us which entity embeddings are actually relevant to the text and which ones you want to use as you iterate through a sequence. Megan, there are a few questions around here. One of them is, so that's entity linking, but what about the relations? Yeah. So some of the works we'll talk about will only use the entity embeddings. Some of these have been pretrained with relation information. But in the end, you only have an entity embedding. And so relation extraction is yet another NLP task you could also do. But here, we're just talking about entity linking. Then if you have the knowledge graph you showed earlier, it had relations in it, right? Do you get any connection between that and the text? I mean, that's the goal of relation extraction, right? It's to figure out, given the entities, what is the relation between them, which would then form the full triple of head entity, tail entity, and relation. OK. Then I think people want to know more about how this is going to be used. Maybe you should go on and show some examples. Yeah. I will. For sure. OK. So entity embeddings, just to summarize, they're like word embeddings, but they're for entities in a knowledge base. So you'll have some vector associated with George Washington. And it should be meaningful in embedding space such that maybe the George Washington vector is close to the vectors for other founding fathers. So we're going to briefly talk about some methods for training entity embeddings. There are knowledge graph embedding methods. You might have heard of the TransE embedding method. So this starts from the idea of having these knowledge graph triples. And you want to learn pretrained entity and pretrained relation embeddings. And you want it to be the case that the subject embedding and the relation embedding, the sum of those two, is close to the object embedding in vector space. So it's an algorithm to learn that constraint. There are also word-entity co-occurrence methods. So these build off of Word2Vec. One of them's even called Wikipedia2Vec. And the idea is, given an entity, you want to figure out what words are most likely to co-occur around it. And then the last method, or one of the other methods that is common now, is actually just using the transformer to learn representations of an entity by encoding the entity description. And so BLINK from Facebook is an approach that does this. So the methods we'll talk about today are actually agnostic to how you train your pretrained entity embedding. But I think it's important to know that there's actually a wide variety of methods to train these pretrained entity embeddings. And it's actually not clear which method is best for using them downstream in language models. So one of the key challenges of using pretrained entity embeddings in language models is figuring out how to incorporate them when they're from a different embedding space than the language model. And so what we'll do, or the approach that we'll look at today, will learn a fusion layer to combine this context and entity information. So we have entity embeddings, and we have the contextualized word embeddings from our language model. So if we take a sequence of text, and we imagine that j indicates the jth element in a sequence, then the challenge here is we want to figure out how do we combine some word embedding wj with some aligned entity embedding ek.
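Before getting to that fusion layer, here is a minimal sketch of the TransE scoring idea mentioned a moment ago (a toy illustration with made-up embeddings, not the full training procedure with negative sampling):

```python
import numpy as np

def transe_score(subject_emb, relation_emb, object_emb):
    # TransE wants subject + relation to land close to object in embedding space,
    # so a lower distance means the triple looks more plausible.
    return np.linalg.norm(subject_emb + relation_emb - object_emb)

# Toy embeddings just for illustration.
s = np.array([0.1, 0.3])   # subject entity
r = np.array([0.2, -0.1])  # relation
o = np.array([0.3, 0.2])   # object entity
print(transe_score(s, r, o))   # small distance -> plausible triple
```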
So here an alignment could be like in the example where we had, Washington was the first President. Washington would be a word embedding, and George Washington would be the aligned entity embedding there. So you could imagine, in this case, let's say, your wj is Washington, and your ek is your entity embedding for George Washington, and you want to align them together. So what you can do is learn a weight matrix, Wt for the text and we for the entity to project this embeddings to the same dimension before you sum them, and finally, take an activation function over them. So the idea is that by having some fusion layer mechanism like this, you can actually use these entity embeddings and these contextual word embeddings that are in different embedding spaces and fuse them together to have the single hidden representation for the element in the sequence. So the approaches we'll talk about today all have some mechanism either very similar to this or some variation of this to do this combination of the context and entity information. So the first approach we're talking about is called ERNIE, Enhanced Language Representation with Informative Entities. And so this just builds on what we already talked about. It uses pretrained entity embeddings. And it also uses this notion of a fusion layer. So the first block in ERNIE is a text encoder which is a multi-layer bidirectional transformer encoder. For their experiments, they use BERT. But it doesn't have to be BERT. And this is followed by a knowledge encoder which has stacked blocks composed of two multi-headed attentions, one is over the entity embeddings and one is over your token or subword embeddings. And then the output of these contextualized entities and token embeddings on multi-headed attentions are passed a fusion layer, which looks very similar to what we just looked at. But now you also have new word and entity embeddings that you're producing as output of your fusion layer. So you see this wj and it's ek, which are produced as the next layer of word and entity embeddings. So the i here indicates that it's the ith block in the knowledge encoder. So you'll actually have multiple stacks of these knowledge encoders, and you'll be doing a fusion of the word entity embedding, producing new word and entity embeddings and then passing this to the next block of the knowledge encoder. So this is what the architecture diagram looks like. On the left side, we have the T-encoder, or the text encoder, followed by the K-encoder, or the knowledge encoder. And then on the right side, you have a zoomed in version of your knowledge encoder. So you see the multi-headed attentions over the tokens in orange and then over the entities in yellow. And then you have this alignment between the word and entities with the dashed lines. So they had this example as, Bob Dylan wrote Blowing in the Wind in 1962. The entities here are Bob Dylan and Blowing in the Wind. And they have a simple alignment rule where you want to align the entity to the first word in the entity phrase. So you want to align Bob Dylan to Bob. That's what the dash line's trying to indicate. And you want to align Blowing in the wind to blow. Here this already assumes that entity linking has been done. And you know your entities in advance. So you can see that the entities are actually input into the model. So after you have your word entity alignment, this goes to the information fusion layer in this light purple gray color. 
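As a rough sketch of the fusion computation just described, here is what one such layer could look like in code: project the word embedding and the aligned entity embedding into a shared space, sum them with a bias, and apply a nonlinearity. The dimensions, the GELU activation, and the module name are arbitrary choices for illustration, not the exact setup from any one paper.

```python
# Sketch of the word/entity fusion layer: project both embeddings into a shared
# space, sum, and apply a nonlinearity. Dimensions here are arbitrary choices.
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    def __init__(self, d_word=768, d_entity=100, d_hidden=768):
        super().__init__()
        self.W_t = nn.Linear(d_word, d_hidden)    # projects the word embedding
        self.W_e = nn.Linear(d_entity, d_hidden)  # projects the aligned entity embedding
        self.act = nn.GELU()

    def forward(self, w_j, e_k=None):
        h = self.W_t(w_j)
        if e_k is not None:          # tokens with no aligned entity skip this term
            h = h + self.W_e(e_k)
        return self.act(h)

fuse = FusionLayer()
w_j = torch.randn(768)   # contextual embedding, e.g. for "Washington"
e_k = torch.randn(100)   # pretrained entity embedding, e.g. for George Washington
h_j = fuse(w_j, e_k)     # fused representation passed on to the next layer
```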
And then finally, it produces these new word and entity embeddings as output. And then remember that you have multiple blocks of these, so this will be passed on to the next block of your knowledge encoder. So how do you actually train this? It's pretty similar to BERT. You have a masked language model loss, and you have a next sentence prediction loss. And then they also introduce a knowledge pretraining task, which they refer to as the DEA task. It's named after a denoising entity autoencoder from an ICML paper in 2008. And the idea is they're going to randomly mask these token-entity alignments. So the idea that Bob goes to Bob Dylan, they're going to mask that out with some random percentage. And then they're going to predict the corresponding entity for a token from among the entities in the sequence. So this looks as follows. The summation is over the M entities in the sequence. So this would be over Bob Dylan and Blowing in the Wind in the previous example. And given a particular word, they want to figure out which entity it's most likely to align to in that sequence. So does Bob align to Bob Dylan? Or does Bob align to Blowing in the Wind? And their motivation for doing this is that if you don't have this task, all you're ever going to be predicting is the token with the masked language model loss. And you really think that, for knowledge, you should probably also be predicting over entities. So by adding this task, they have some kind of task that is actually predicting the entity. And they also suggest that this might better fuse the knowledge, or the entity and the word representations, than just using the fusion layer. Their final loss is then the summation of the masked language model loss, the next sentence prediction loss, and this DEA knowledge pretraining task loss. So they showed with an ablation experiment that it's actually very important to have this knowledge pretraining task. So this has BERT on the leftmost bar, and ERNIE is the second bar from the left. And so that's with all the features of ERNIE. And then they try removing the pretrained entity embeddings and removing this knowledge pretraining task. So you see that BERT performs the worst-- this isn't very surprising-- and that ERNIE performs the best. But what's interesting is that if you remove the entity embeddings, or you remove the pretraining task, they only do a little better than BERT. And so it's really necessary to actually use this pretraining task to get the most use out of your pretrained entity embeddings. So some strengths of this work were that they introduced some way to combine entity and context information through this fusion layer and this knowledge pretraining task. And then they also showed improved performance on downstream tasks, which we'll come back to when we talk about evaluation. But of course, there's also some limitations. So it needs text data with the entities annotated as input. And this is even true for downstream tasks. So if you remember on the architecture diagram, we had the entity information actually input into the architecture. But it's not very realistic that you're necessarily going to have a good entity linker for any downstream task that you want to use ERNIE on. And the next challenge is that this requires more pretraining of your language model. So now you don't just need to pretrain BERT, but you also need to pretrain your knowledge encoder on top. For the first challenge, we're going to actually talk about a work that presents a solution to address this.
For the second challenge, I encourage you to check out the footnote on the bottom. This introduces a work that actually uses pretrained entity embeddings, uses them in a language model, and doesn't require any more pretraining. So it's pretty cool. I guess that's all I have for ERNIE. So I want to pause here for questions. Well, here's one that's up here. So on the fusion layer, is it observed that passing the entity embedding into a fusion layer to combine with the word embedding is more powerful than just concatenating the entity embedding onto the end of the word embedding? So I guess people are still a little bit confused as to the motivation for that fusion layer. And so I guess the question here is this: the simplest strategy would be, since you've got the entity linking, you could just concatenate entity embeddings onto the end of word embeddings and do regular BERT. Would that work just as well? I think the idea is it wouldn't. Because if you imagine that, let's say, your magnitudes are very different, you need some way to, I guess, align the spaces so that anything meaningful in the entity embedding space is still meaningful in the word embedding space. So if you're close in the word embedding space, you'd also want to be close in the entity embedding space. So I guess that's one argument. Yeah. I mean, I think it's a good question, as people say. I mean, it's not completely obvious that it wouldn't work to do that. It seems like one potential problem is some words have entities linked to them and some words don't. And so then you have zero vectors for the ones that don't have anything-- That's a good point. Linked, and that might act a bit weirdly. But-- Yeah. In this case, when they don't have entities linked, which is a great point, yeah, the first equation just simplifies to the first term plus the bias. So there's an obvious solution in that case: when you're not concatenating, you just don't add on that term. Yeah. That could be one reason to. Are there any other questions? I think you can go on. OK. Cool. So now we're talking about KnowBERT. And this is from the same folks that introduced ELMo. And the idea here is that they're going to pretrain and integrate an entity linker as an extension to BERT. And so their loss function will now be the summation of the next sentence prediction loss, the masked language model loss, and an entity linking loss. So instead of the knowledge pretraining DEA task from ERNIE, you'll have an entity linking loss. And the idea of the entity linker is you'll now just have a normal sequence as input. And the integrated entity linker will figure out what are the entities in the sentence, or what are the mentions in the sentence, what are the candidates of those mentions, and then what should be the scores of those candidates, given the context of the sentence. So this is all done now as part of the model, rather than requiring it as some external pipeline stage before you could even use ERNIE, for instance. So now for downstream tasks, you no longer need these entity annotations. Your integrated entity linker will figure out what the correct entity is and be able to use the correct entity embedding. So there's also this idea that learning this entity linking may actually better encode knowledge than this DEA pretraining task, because they show that KnowBERT actually outperforms ERNIE on downstream tasks.
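To give a feel for what an integrated entity-linking step does, here is a rough sketch: score each candidate entity for a detected mention span against the span's contextual representation, then normalize. This is only an illustration of the idea, not KnowBERT's actual architecture; the shapes and the dot-product scoring are assumptions.

```python
# Rough sketch of an integrated entity-linking step: score each candidate entity
# for a mention span against the span's contextual representation, then softmax.
import torch

def link_mention(span_repr, candidate_embs):
    """span_repr: (d,) contextual vector for the mention span.
    candidate_embs: (num_candidates, d) pretrained entity embeddings."""
    scores = candidate_embs @ span_repr          # dot-product similarity
    probs = torch.softmax(scores, dim=0)         # linker's distribution over candidates
    return probs

span = torch.randn(200)
cands = torch.randn(2, 200)   # e.g. George Washington vs. Washington State
print(link_mention(span, cands))
# During pretraining, a loss against the gold candidate for each mention would be
# added on top of the masked LM and next-sentence losses.
```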
So one reason this may occur is that if you think about the DEA task, it's actually a bit simpler than entity linking. So you're trying to predict, for instance, what Bob linked to out of Bob Dylan and Blowing in the Wind. And it's much easier, even as a human, to see that Bob will more likely link to Bob Dylan than that Bob will link to Blowing in the Wind. In the entity linking task, you actually have a much harder set of candidates to predict over. You're not just looking at the ones in the sentence. So deciding whether Washington links to George Washington or Washington State actually requires using more information about the entity. So given it's a harder task, it's not too surprising that it might perform better than just this easier knowledge pretraining task that ERNIE introduced. So otherwise, KnowBERT has a lot of similarities to ERNIE. It uses a fusion layer that combines this context and entity information. And it introduces some knowledge pretraining task. So I'd say the high-level takeaway is that if you want to use pretrained entity embeddings in a language model, you probably at least want to consider both of these components in terms of how you're actually going to integrate the pretrained entity embeddings and take as much advantage of the knowledge in them as possible. So that brings us to the next class of techniques, which is using an external memory. And here, we'll mainly focus on this work called KGLM. And then we'll also briefly talk about kNN-LM. So the previous methods I've talked about have relied on pretrained entity embeddings to encode the factual knowledge from knowledge bases. And one of the problems with this is if you want to, let's say, modify your knowledge base, you now need to retrain your entity embeddings and then retrain your language model on top of these entity embeddings. So this begs the question, are there more direct ways than pretrained entity embeddings to provide the model factual knowledge? And so what we're going to talk about is how you can actually use an external memory, or a key-value store, to give the model access to either knowledge graph triples or context information. And a key thing about this external memory is that it's independent of the learned model parameters. So this means you can actually support injecting and updating factual knowledge. You can do this directly to the symbolic external memory by, let's say, changing the value for a particular key or maybe adding another key. And you don't have to retrain, or pretrain, your entity embeddings when you make this change. And the next work we'll talk about today can actually even have these updates to the external memory without more pretraining of the language model. So that's pretty neat. And another benefit of using external memory over these pretrained entity embedding approaches is it can also be more interpretable. So if you have an error in your model, where it's not predicting a fact correctly, it's very challenging to figure out with pretrained embeddings what the problem might be. Was it the original knowledge base? Was it the encoding in the entity embeddings? Is it how the language model is using the entity embeddings? And here you have a little more information with an external memory, in that you can look in the external memory and see whether a fact is in the external memory or not, and so on.
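As a toy illustration of why a symbolic external memory is attractive here, the sketch below keeps knowledge graph triples in a plain key-value store; injecting or updating a fact is just a write to that store, with no retraining of embeddings or of the language model. The triples and the dictionary format are made up for the example.

```python
# Toy illustration of an external memory of knowledge-graph triples. Updating a
# fact is a dictionary write; nothing about the model's parameters changes.
external_memory = {
    ("Super Mario Land", "publisher"): "Nintendo",
    ("Barack Obama", "birth_date"): "August 4, 1961",
}

def lookup(head, relation):
    return external_memory.get((head, relation), None)

print(lookup("Barack Obama", "birth_date"))       # "August 4, 1961"

# Inject or update a fact directly -- no re-pretraining of entity embeddings.
external_memory[("Barack Obama", "birth_date")] = "2013"
print(lookup("Barack Obama", "birth_date"))       # "2013"
```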
So it adds a little bit more interpretability than just using these pretrained entity embeddings as an indirect way to encode the knowledge base. So first, what we're going to talk about is called KGLM. And unlike the other approaches we've talked about so far, this actually uses LSTMs and not transformers. So the key idea here is to condition the language model on a knowledge graph. So recall with the standard language model, we want to predict the next word, given the previous words in the sequence. Well, now we also want to predict the next entity, given the previous words in the sequence, and given the previous entities in the sentence, or the entities that are relevant to the sentence, I should say. So KGLM will be building a local knowledge graph as it iterates over the sequence. And a local knowledge graph is just a subset of a full knowledge graph that only has the entities that are actually relevant to the sequence. So if we have this example here, a simplified example from the paper, that Super Mario Land is a game developed by blank, then Super Mario Land here is an entity. You'd want a local knowledge graph as follows, where you see that Super Mario Land is in the local knowledge graph. But we also have the relations from Super Mario Land to other entities that are copied from the full knowledge graph into this local knowledge graph. And you would build up this local knowledge graph as you iterate over the sentence. So whenever you see an entity, you would add it to the local knowledge graph, as well as its relations to other entities. So obviously, this is a much smaller example than it would really be, where we would have all the relations to Super Mario Land; it's simplified just for the purpose of the example. But hopefully, it's clear that all of these are relevant to the sequence. Something important to note here is that this does assume that the entities are known during training, so that you do have this entity-annotated data for training. And therefore, your local knowledge graph is always the ground truth local knowledge graph as you iterate over the sequence. So why might this be a good idea? Well, here, the next word we want to predict is Nintendo. And you may notice that Nintendo is in your local knowledge graph. So sometimes, this local knowledge graph can actually serve as a very strong signal for what you want to predict for your next word. Now you may be thinking, well, this wouldn't always be helpful, and that's true as well. So if you look at just the third word in the sequence, and you want to predict that word, so a word like is or game, for instance, well, if this isn't in the local knowledge graph, this wouldn't necessarily be that helpful. You will just do a standard language model prediction. Or if you're at the beginning of a sequence, your local knowledge graph is empty, so of course, you're not going to get any signal from it. So the first question they ask in KGLM is how can a language model know when to use the local knowledge graph and when it might actually be useful for predicting the next word? So we're going to keep the same example as a running example. And we have our local knowledge graph here. We now have an LSTM that looks similar to the representations you've seen throughout this class. And normally, you've seen the LSTM predict the next word. Well, now we're also going to use the LSTM to predict the type of the next word. So is the next word going to be a related entity, meaning it's in the local knowledge graph already?
Is it going to be a new entity, meaning it's not in the local knowledge graph? Or is it going to be not an entity, in which case you just revert to a normal LSTM prediction? And they're going to use the LSTM hidden state to do this prediction of the type of the next word over these three different classes that they might want to consider. So in the case of Super Mario Land is a game developed by Nintendo, we saw that this would be a related entity case, because you saw that Nintendo was in the local knowledge graph. But in the other cases, Super Mario Land would be a new entity case, since the local knowledge graph is empty at that point. And then any of the words between Super Mario Land and Nintendo would be the not-an-entity case, as they're just a standard LSTM language model prediction that doesn't involve any entities. So now we need to talk about what the language model actually does in these three different scenarios to predict the next entity and the next word. So we're going to keep the example up at the top in case you want to go back to the three different cases. And we're going to start with the related entity case. So here we assume that the next word or entity is actually in your local knowledge graph. And remember that we can describe a knowledge graph in terms of triples, so in terms of parent entities, relations, and tail entities. And in the case of predicting the next word as Nintendo, there's only one possible parent entity in the local knowledge graph, which is Super Mario Land. And the goal is you want to figure out what is the most relevant triple that will be useful in helping to predict the next word. So in this case, you could have the triple Super Mario Land, publisher, Nintendo. You might have the triple Super Mario Land, genre, platform game. Which of these is actually helpful in predicting that Nintendo should be the next word? So here what you would want KGLM to do is predict that the top scoring parent entity is Super Mario Land, and the top scoring relation is publisher. And you can see there are actually contextual cues in the sentence that could help you figure out which triple you're talking about. And then, given that your top scoring parent entity is Super Mario Land, and your top scoring relation is publisher, you can figure out, using the knowledge graph triples, that the tail entity has to be Nintendo. And therefore, this gives you a strong signal that the next word will be Nintendo. So the goal is you're going to find the top scoring parent entity and the top scoring relation using the nodes in your local knowledge graph. And you can do this by using the LSTM hidden state combined with pretrained entity and relation embeddings. So I do admit I cheated here a little bit, in that this does use pretrained embeddings. But hopefully, you'll see by the end of this discussion why I think it fits a bit better in this external memory use case as well. So what they do here is take a softmax using the LSTM hidden state and the entity embeddings for each of the potential parent entities, and they will take the top scoring one as the parent entity. And they'll do the same thing for the relation embeddings. The next entity is then just the tail entity from the knowledge graph triple. So it's relatively trivial to figure out what the next entity should be once you've figured out the top scoring parent entity and your top scoring relation.
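Here is a stripped-down sketch of that decision for a single timestep: a three-way type prediction from the hidden state, then scoring parent entities and relations from the local knowledge graph to read off the tail entity. The shapes, the toy local graph, and the argmax-of-dot-product scoring are illustrative assumptions and leave out many details of the real model.

```python
# Sketch of a KGLM-style decision for one timestep: pick the word type from the
# LSTM hidden state, then (for the "related entity" case) score parent entities
# and relations from the local knowledge graph. Everything here is illustrative.
import torch
import torch.nn as nn

d = 128
type_head = nn.Linear(d, 3)          # related entity / new entity / not an entity

def related_entity_step(h_t, parent_embs, relation_embs, local_kg):
    """h_t: (d,) LSTM hidden state.
    parent_embs, relation_embs: rows are pretrained embeddings of the candidates.
    local_kg: dict mapping (parent_idx, relation_idx) -> tail entity string."""
    parent = torch.argmax(parent_embs @ h_t).item()     # top-scoring parent entity
    relation = torch.argmax(relation_embs @ h_t).item() # top-scoring relation
    return local_kg[(parent, relation)]                 # tail entity, e.g. "Nintendo"

h_t = torch.randn(d)
word_type = torch.argmax(type_head(h_t)).item()          # 0, 1, or 2
parents = torch.randn(1, d)                               # only "Super Mario Land" here
relations = torch.randn(2, d)                             # e.g. publisher, genre
local_kg = {(0, 0): "Nintendo", (0, 1): "platform game"}
print(related_entity_step(h_t, parents, relations, local_kg))
```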
And then finally, to predict the next word, they take the vocabulary, and they expand it to include different aliases that could refer to that entity. So what I mean by aliases here are phrases that could refer to the entity in text. So you might not just call it Nintendo. You might also say Nintendo Company or Koppai. And you want any of these to be possible words they could predict as the next word. So the goal of this vocabulary expansion is to increase the probability that the next word you predict will actually be related to this next entity. So the new entity case is a bit simpler. This means that the entity that you're predicting is not in the local knowledge graph. So you're not getting any signal from this local knowledge graph that you've been building up, and all you want to do is find the top scoring entity in the full knowledge graph. And you can do this using the LSTM hidden state and pretrained entity embeddings, similar to how we found the score for the top parent entity. Your next entity will just be the top scoring entity out of the full knowledge graph. And then your next word is once again this vocabulary expanded to include aliases of that entity. The not-an-entity case is as simple as you just revert to a normal LSTM. You don't have a next entity to predict. And your next word is just the most likely next token over your normal vocabulary. So here's a diagram from their paper that hopefully summarizes and makes even clearer what I just went over. So they have a longer example than the one we are looking at, but the same prediction of Nintendo as the next word. And they have their predictions in red. So this is what they want KGLM to predict. The three different cases are in the horizontal rows. And we see that here, you're in the related entity case, since Nintendo is in your local knowledge graph. So they want KGLM to predict that Nintendo should be a related entity type of word, that Super Mario Land should be its parent entity, and that publisher should be the relevant relation. And as a result, the next entity is Nintendo. And then they expand their vocabulary. You see the aliases of Nintendo at the bottom. And then finally, they actually predict Nintendo as the next word. And the other cases just summarize what we also already went over. So you find that KGLM actually outperforms GPT-2 and AWD-LSTM, which is a strong LSTM language model, on a fact completion task similar to the fill-in-the-blank examples that we looked at at the beginning of the talk. They also find qualitatively that compared to GPT-2, KGLM tends to predict more specific tokens, since it can predict these tokens by just copying from the local knowledge graph, whereas GPT-2 will tend to predict more generic tokens. So if you want to predict the birthplace of someone, GPT-2 is more likely to predict New York, for example, and KGLM might predict some obscure place. And then they have this really cool set of experiments where they showed that KGLM actually supports modifying or updating facts. So they made a direct change in the knowledge graph, and then they saw what the change in KGLM's predictions was. So they have this example where the sequence was, Barack Obama was born on blank. They had their knowledge graph triple with Barack Obama's original birth date, and then the most likely next tokens were, as expected, August 4th, 1961. And then they just changed their knowledge graph. So they changed the birthday of Obama. They said, OK, he's now born in 2013.
And they looked to see what the next predictions were for KGLM, and it changed its predictions to match what was in the local knowledge graph. So this is something that's pretty cool, and that really only external memory approaches can do compared to the original pretrained entity embedding approaches we talked about. And I think it's one of the reasons that KGLM, at least my opinion, fits better in these external memory use cases. Right. So the next slide is a different paper so I guess I'll take questions about KGLM, if there are any. It's a pretty complex method. So feel free to have questions. Yeah. Can you, one more time, explain what the definition of the local knowledge graph is in relationship to the global knowledge graph? Yep. So local knowledge graph is supposed to be a subset of the full knowledge graph. And it's only supposed to consist of entities that have actually been seen in the sequence as well as their relevant entities. OK. All right. So here, you see that Super Mario Land is in the local knowledge graph because Super Mario Land is an entity that is seen in the sequence. And then you also want to copy over all the edges from Super Mario Land that would be in the full knowledge graph. So this is just a subset of them for the purpose of the example. But you see that Super Mario Land has an edge to Nintendo to Game Boy to platform game. And so you would copy all edges that Super Mario Land has to another node in the full knowledge graph. And they know in advance like, they have the labels here for what the entities are during training. So that's how they can actually create this ground truth knowledge graph. Then briefly, a student asked why we can't just use the whole knowledge graph? And I gave an answer but maybe you know better. Yeah, I think the idea is the signal be much stronger if you just use local knowledge graph. So in the softmax for the related entity case, you would just be predicting over the potential parent entities in your local knowledge graph, which is a much smaller set than what's in your full knowledge graph. So I guess it's more likely that you're going to predict something that is correct in that case than when you have 5 million or so entities in your full knowledge graph. It's also much cheaper to compute. In this case, there's only a single parent entity, but you could have multiple parent entities that you're trying to compute which ones the most likely over. Is that what you were also thinking, John? Yeah. I mainly just said the efficiency. So the signal thing is cool too. Here's an exciting question. What about queries that require more than one step in the knowledge graph such as the location of the publisher of Super Mario Land? Yeah. That's a good question. So the idea is it can support those types? Like, does it support multi-hop, kind of, building of the knowledge graph? Yeah. How does KGLM perform in those cases? Yeah. I don't know. That's a very good question. They built up the knowledge graph so that is just a single hop as far as I know. But if you saw the other entities, if you were to see the entities along the hops, that it would have them in the local knowledge graph, yeah, that's a good question. I don't if they support that. Great. OK. Let's move along then. So the next piece of we're going to talk about, you guys, have actually briefly seen in the language generation lecture. But I'm going to go over it again quickly here. 
So unlike the other works that we talked about that use knowledge graph triples, this is actually going to take a looser notion of knowledge, in that the knowledge will just be encoded in the text in the training data set. So this is called kNN-LM. And the idea, or its key observation, is that language models not only learn to predict the next word in text, but they also learn these representations of text. And the authors suggest that it might actually be easier to learn similarities between text sequences than it is to predict the next word in the text. So you have this example that Dickens is the author of blank, and Dickens wrote blank. And they argue that it's easier to tell, for a human but also for a model, that these sequences are similar, and they should probably have the same next word, even if you don't know what the next word is. So that's suggesting that it's easier to learn these similarities than it is to actually predict the next word. And they argue that this is even more true for long tail patterns, where it's very challenging for the model to predict that the next word is some rarely seen token or rare entity, than it is to find another similar sequence that it's already seen and just copy the next word from that sequence. So what they propose to do is store all representations of text sequences in a nearest neighbor datastore. And then at inference, what you want to do is you find the k most similar sequences of text, you then retrieve their corresponding values, so you just peek at those sequences and see what their next words were. And then you combine the probability from this nearest neighbor datastore with just a typical language model prediction. And so they call this an interpolation step, in that they're weighting how much to pay attention to the probability from this kNN approach and how much to pay attention to this language model approach. And the lambda here is just a hyperparameter they tune. So I have this diagram from their paper where they want to predict the next word in the sequence, Shakespeare's play blank. So what they do is they have all the training contexts already encoded in their datastore. So they have representations of all of the training contexts. And then they compute the representation of the test context. And they want to figure out which representations in the training contexts are most similar to this test context representation. And so here, in an external memory view of things, the keys would be the representations of the training contexts, and the values would be the next words. So they get the k nearest training representations and then copy over their values. That's what you see with this Macbeth, Hamlet, Macbeth example. They have a normalization step where they convert this to probability space. And then finally, they have an aggregation step. So if a word is seen as the next word in several of these k nearest neighbors, then that should count for more. So that's why they aggregate. So if they see Macbeth twice, it means Macbeth is more likely. And then finally, they have this interpolation step where they try to balance between the classification probabilities from the language model and from the kNN approach. So an immediate observation you might have is that this seems really expensive. They do propose ways to try to minimize the expense of actually having to store all the training contexts in this datastore, because they actually store an entry for every single window and next word in the training data.
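Here is a minimal sketch of that retrieve-and-interpolate step with a made-up datastore. For brevity it turns the retrieved neighbors into a simple count-based distribution; the actual kNN-LM converts the neighbor distances into probabilities with a softmax, so treat this as an illustration of the interpolation rather than the paper's exact recipe.

```python
# Sketch of the kNN-LM combination step: retrieve the k nearest stored contexts,
# turn their recorded next words into a distribution, and interpolate with the
# language model's own distribution. Datastore contents here are made up.
import numpy as np

def knn_lm_probs(query, keys, values, p_lm, vocab, k=3, lam=0.25):
    dists = np.linalg.norm(keys - query, axis=1)   # distance to each stored context
    nearest = np.argsort(dists)[:k]                # indices of the k nearest neighbors
    p_knn = np.zeros(len(vocab))
    for i in nearest:                              # aggregate the retrieved next words
        p_knn[vocab.index(values[i])] += 1.0
    p_knn /= p_knn.sum()                           # simple count-based normalization
    return lam * p_knn + (1 - lam) * p_lm          # interpolation with weight lambda

vocab = ["Macbeth", "Hamlet", "Othello"]
keys = np.random.randn(5, 8)             # representations of 5 training contexts
values = ["Macbeth", "Hamlet", "Macbeth", "Othello", "Macbeth"]
query = keys[0] + 0.01                    # test-context representation
p_lm = np.array([0.3, 0.5, 0.2])         # the base LM's next-word distribution
print(knn_lm_probs(query, keys, values, p_lm, vocab))
```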
And you can do quantization on some nearest neighbor approaches to try to make this less expensive. But I imagine this would still be pretty expensive for really large training data sets. They also have some cool experiments that show that this is very good for domain adaptation. So if you take your language model, and you have a new domain that you want a higher language model too, you could just create a nearest neighbor datastore of your new domain. So you encode all the representations of that new domain. You stick it in a datastore. And then, you can just use your language model with these kNN probabilities as well just immediately on this new domain without actually having to further train your language model. So I thought that was a pretty cool use case of this external memory approach. So while it doesn't leverage knowledge bases directly, it does have this loose knowledge of-- or a loose idea of encoding knowledge that is in a textual representation form into some external memory that the model can then take advantage of. That's all I have for this approach. Are there any questions on this approach? Well, suddenly, one person is asking, how does the kNN make predictions for the next word? The k neighbors are for the context instead of the next word. Oh, OK. That wasn't clear. So the keys are the representations of the context. The values in your external memory are the next words. So when you figure out, you figure out your nearest neighbors using your keys, and then you copy over their values. So it does actually know what the next words are for each of those representations. Yeah OK so finally, we're going to talk about how you can just modify the training data to better encode knowledge in language models. So the approaches we've talked about so far are actually incorporating knowledge explicitly by using the pretrained embeddings or an external memory. We also want to talk about how can you just incorporate knowledge implicitly through the unstructured text? So what we're going to do is either mask or corrupt the data to introduce additional training tasks that require factual knowledge to figure out what data was masked, for instance. So this has some clear advantages. It doesn't have any additional memory or computation requirements, you don't have a datastore to deal with, you don't have extra knowledge encoded layers to train. All you do is modify the training data. And you don't have to modify your architecture either so you can continue using your favorite BERT model and just make these changes to the training data. So the first work we're going to look at is called WKLM, Weakly Supervised Knowledge-Pretraining Language Model, or Pretrained Language Model. And the key idea here is to train the model to distinguish between true and false knowledge. So they're going to corrupt the data by replacing mentions in the text with mentions that refer to different entities of the same type to create what they refer to as negative knowledge statements. And then the model will just predict has the entity been replaced or corrupted. This type constraint is necessary to make sure that-- or to encourage the model to actually use factual knowledge to figure out that this corruption is taking place. So you can imagine if you replace it with something that's not realistic at all, the model could just be basing its prediction based on, is this sentence linguistically correct? So as an example, we have a true knowledge statement as JK Rowling is the author of Harry Potter. 
And then we want to modify this to replace it with another author. So let's say we change this to J.R.R. Tolkien is the author of Harry Potter. So you can see that this requires some amount of knowledge, background knowledge, to actually be able to figure out which statement is true and which statement is false. And the idea is that the model will be able to predict for each of these mentions whether it's a true or false mention. So this diagram here is from the paper. And hopefully, it explains this a bit better. They have the original article on the left. And then they have the replaced article with the corruptions on the right, and the entities are in blue. So what they do is for a given entity, they first look up its type. They find other entities of that type, and then they randomly sample the entity and get an alias of it to replace in the text. So they're going to replace Stan Lee, for instance, with Bryan Johnson, and Marvel Comics with DC Comics. And the replacements are in red on the right. And then the idea is that the model will be able to predict for each of these mentions, was it replaced or not. So in the case of Bryan Johnson, they have the red X for this is a false mention. And in the case of the true mention, they have the checkmark. So it's a pretty simple approach. But they actually show that it can help a model increase the amount of knowledge that's encoded as parameters. So WKLM uses an entity replacement loss to train the model to distinguish between these true and false mentions. And this just looks like a binary classification loss where your true mentions are on the left, and your false mentions are on the right. And you want to increase the probability that this P of e given C, so the probability of entity given the context, you want to increase that for the true mentions and decrease it for the false mentions. The total loss is then just a combination of the masked language model loss and this entity replacement loss. The masked language loss-- the masked language model loss is defined at the token level. And the entity replacement loss is defined at the entity level, meaning it's not just over subwords, it's even potentially over words, if you have multi-word entities, phrases, for instance. And this is an important point, or an important theme, that we really see occurring throughout these works that we look at in that modifying the data at the entity level seems to be an important component of actually increasing the amount of knowledge that a language model can encode. So they find that WKLM improves over BERT and GPT-2, in fact, completion tasks like the fill in the blank statements that we looked at the beginning. They also find that it improves over the ERNIE paper that we talked about on a downstream task. And then they had a set of ablation experiments where they looked at, can you just remove this masked language model off now. And if you just train BERT for longer, do you really need this entity replacement loss? So that's what the table here is looking at. The second row is looking at if we remove the masked language model loss, what happens? We see that it performs much worse without the masked language model loss. So you really need both losses. The intuition there was the masked language model loss helps to encode just general language understanding. And then training BERT for longer performs much worse than using its entity replacement loss. 
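Here is a small sketch of the two pieces just described: corrupting a mention with another entity of the same type, and a binary replacement loss over true and corrupted mentions. The type inventory, the example sentences, and the stand-in logits are all made up; in the real setup the scores would come from the model's representation of each mention in context.

```python
# Sketch of WKLM-style corruption plus the replacement loss: swap a mention for a
# different entity of the same type, then train a binary classifier per mention.
import random
import torch
import torch.nn.functional as F

same_type = {"author": ["J. K. Rowling", "J. R. R. Tolkien", "Stephen King"]}

def corrupt(sentence, mention, entity_type):
    replacement = random.choice([e for e in same_type[entity_type] if e != mention])
    return sentence.replace(mention, replacement)

true_stmt = "J. K. Rowling is the author of Harry Potter."
false_stmt = corrupt(true_stmt, "J. K. Rowling", "author")

# Entity replacement loss: push P(true mention | context) up, P(replaced mention) down.
logit_true, logit_false = torch.tensor(2.0), torch.tensor(-1.0)   # stand-in model scores
loss = F.binary_cross_entropy_with_logits(logit_true, torch.tensor(1.0)) + \
       F.binary_cross_entropy_with_logits(logit_false, torch.tensor(0.0))
```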
So that motivates even further that you really do need it, or that the entity replacement loss is really helping encode more knowledge in these language models. So in addition to corrupting the data, we're also going to look at, can we just mask the data differently? Can we be more clever about how we do the masking? And this is a thread in several recent works. So there's actually another paper called ERNIE. So this is different than the one I talked about before. And this one is Enhanced Representation through Knowledge Integration. And what they do is show improvements on downstream Chinese NLP tasks by doing phrase-level and entity-level masking. So instead of just masking out subwords, they're going to mask out phrases that have multiple words, and entities, meaning the full phrase of an entity in the text, which they might find with something like NER techniques, for example. And then the second work is actually something you heard about in the last lecture, which is the idea of using salient span masking to mask out salient spans. And a salient span is just a named entity or a date. So you can see this is pretty similar to what ERNIE is doing. And they found that using salient span masking actually significantly helped T5's performance on these closed-book question answering tasks. So just to make sure we're all on the same page with the different masking techniques, this diagram from the ERNIE paper is comparing what BERT does versus what ERNIE does. The top shows that BERT masks out subword tokens, whereas ERNIE masks out phrases, like a series of, as well as entities, like J. K. Rowling. There's some interesting results showing that salient span masking is helping encode more knowledge in these representations. So on the left, we're looking at the results of the original paper that proposed salient span masking. So this is the REALM work. And the idea here was that they were training a knowledge retriever. So it's actually more of an external memory class of technique, but they find that by using the salient span masking technique, they could actually train a much better knowledge retriever. So that's a good example of how these techniques are really complementary. So while I presented three classes of techniques, you can definitely get benefits by doing multiple techniques together. And they found that compared to using the masking from BERT, which would be the random uniform masks, or doing random span masking from a paper called SpanBERT, it performs much better to do salient span masking. So you see a 38 exact match score versus a 32 exact match score, for instance. And on the right, we have results from fine-tuning T5 with either salient span masking or the span corruption task that you saw on assignment 5. And you can see that on these different QA data sets, salient span masking does significantly better than just using the span corruption technique. So this really suggests that doing the salient span masking and masking out these salient spans of entities is, in fact, helping to encode more knowledge in these language models. So to recap, we talked about three different classes of techniques to add knowledge to language models. We talked about using pretrained entity embeddings. These weren't too difficult to apply to existing architectures and are a way to leverage this knowledge graph pretraining.
But it's a rather indirect way of incorporating knowledge, and it could be hard to interpret. We also talked about approaches that add an external memory. These could support modifying the knowledge base. They were also easier to interpret. But they tended to be more complex in implementation, like we saw with KGLM. And they also required more memory, like we saw with the kNN-LM approach. And then, finally, we talked about modifying the training data. So this requires no model changes or additional computation. It also might be the easiest to theoretically analyze. So this is actually an active area of research right now. But it's still an open question if modifying the training data is always as effective as model changes, and what the trade-offs are in terms of the amount of data required versus doing one of these other knowledge enhancement approaches. So that leads us to section 3. So I guess I'll pause again for questions. [INAUDIBLE] may be good. Awesome. OK. So section 3 is about how researchers are actually going about evaluating the knowledge in language models and, I guess, how some of the techniques we just talked about stand up in this evaluation. So first, we're going to talk about probes, which don't require any fine-tuning of the language model. And then we're going to talk about downstream tasks, which look at how well these pretrained representations actually transfer their knowledge to other tasks. So one of the initial works in this area was called LAMA. And this really started a series of works to look into how much knowledge is already encoded in these language models. So their question was how much relational, common sense, and factual knowledge is in off-the-shelf language models? So this is just taking pretrained language models and evaluating the knowledge in them. And this is without any additional training or fine-tuning. So they mainly constructed a set of what they refer to as cloze statements. And these are just the fill-in-the-blank statements that we actually drew from at the beginning of the talk. And we have some more examples here. And they manually created these templates of cloze statements using knowledge graph triples and question-answer pairs from existing data sets. They wanted to compare pretrained language models to supervised relation extraction and question answering systems, to see how these language models that were trained in an unsupervised fashion compare to these baseline systems that are not only supervised but really targeted for this task of knowledge extraction. And their goal was to evaluate the knowledge in existing pretrained language models. And a key point about this is they're just using the language models as they are available to researchers. So this means there could be differences in the pretraining corpora, for example. So when you look at the following table, and you're comparing language models, also keep in mind that these don't account for the differences in the pretraining corpora. So a lot of these language models probably look familiar to you, either from previous lectures or maybe your final projects. And what we see is that overall, the BERT-base and BERT-Large pretrained models are performing much better than the other language models here. I guess I forgot to mention what mean precision at one is. This is a pretty simple metric. The idea is if you look at the blank, and you look at the top prediction for the blank, is it correct or not. That's what precision at one means.
Precision at 10 would be, let's look at the top 10 predictions; is the correct prediction in the top 10? So in addition to BERT-Large and BERT-base performing well overall, we do see that on the T-REx data set, their relation extraction baseline is performing a bit better than BERT. One thing they notice here that's pretty interesting is that this data set has a lot of different types of relations. And relations can be classified in terms of, are they a one-to-one relation, an n-to-one relation, or an n-to-n relation? An example of a one-to-one relation would be your student ID relation. So you have a unique student ID. An example of an n-to-n relation would be the enrolled-in relation. So there's lots of students enrolled in lots of classes. So this would be an n-to-n relation. And they find that BERT really struggles on these n-to-n relations. So while it performs better than the relation extraction baseline on some types of relations, overall, it does pretty terribly on these n-to-n relations. So overall, it does a bit worse than the baseline on this T-REx data set. They also compared on SQuAD to DrQA. And they find that it does a fair amount worse. They note that the language model is not fine-tuned here. And also, it has no access to an information retrieval system. And then when they look at the precision at 10, they find that this gap between DrQA's performance and BERT actually closes quite a bit, which suggests that these language models do have some amount of knowledge encoded in them and that they're even competitive with these supervised knowledge extraction baselines. So you can also try out examples on their GitHub repo for the LAMA probe. We have an example that was from their repo, which was, the cat is on the [MASK]. You can see what the top 10 predictions are to fill in the cloze statement. Here, they have the cat is on the phone. So this can be a fun way to figure out what factual and common sense knowledge is in existing language models. And it's pretty easy to use with this interactive prompt. So some limitations of the LAMA probe are that it can be hard to understand why the models perform well when they do. So for instance, BERT might just be predicting the most popular token, and this happens to be right. Maybe it's just memorizing co-occurrence patterns and doesn't really understand the knowledge statement and doesn't understand what the fact is. It might also just be identifying similarities between surface forms of the subject and object. So for instance, in this example, Pope Clement VII has the position of blank. Even if you don't know anything about Pope Clement VII, you might be able to figure out that Pope is the likely next word for this triple, or for this template. So the problem with this is if the model is just making these predictions based on these surface forms or co-occurrence patterns, it's difficult to know if we're actually evaluating the knowledge in the model. Maybe it's just making correct predictions for other reasons. And a more subtle issue that we've brought up is that language models might just be sensitive to the phrasing of the statement. So for each triple in their data set, or for each relation in their data set, they just had one manually defined template. And qualitatively, they found that if they just make small changes to this template, it could actually change whether or not the model could recall the correct prediction. And so this means that the probe results are really a lower bound on the knowledge that's encoded in the language model.
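If you want to poke at this yourself, here is a small sketch of that kind of cloze-style probing using the Hugging Face transformers library, in the spirit of the interactive LAMA demo just mentioned. It assumes you have the transformers package installed and are fine downloading bert-base-uncased; the example prompts are made up.

```python
# Sketch of probing an off-the-shelf masked LM with cloze statements, in the
# spirit of the LAMA demo (requires the Hugging Face transformers package).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prompt in ["The cat is on the [MASK].",
               "The theory of relativity was developed by [MASK]."]:
    preds = unmasker(prompt, top_k=5)          # top-5 fillers for the blank
    print(prompt, [p["token_str"] for p in preds])
# Precision at 1 over a set of such statements is just the fraction where the
# top prediction matches the gold answer; precision at 10 checks the top 10.
```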
So if you change your phrasing, it's possible that the model might show that actually, it does have the knowledge encoded in it. So the next lines of work we'll talk about are really building on these two limitations of this original LAMA probe. So the first one is called LAMA-UHN, or LAMA-Unhelpful Names. And the key idea is to remove these examples from LAMA that can be answered without the relational knowledge. So this is just addressing the first limitation on the left side. So they observed that BERT relies on the surface forms entities, might not be using knowledge to make these predictions. This includes a string match situation that we talked about with the pope. This also is dealing with the revealing person name issue that you saw on assignment 5. So this is where the name could be an incorrect prior for the native language of someone, their place of birth, their nationality. They have this example from the table, or from the paper, where they looked at different people names or person's names, and then they looked at BERT's prediction for their native language. And these are all French-speaking actors. And BERT just predicts very biased and stereotypical languages for these particular names. So this can really work both ways. It can lead BERT to make incorrect predictions sometimes or in some cases. But it could also work to make-- or to let BERT make correct predictions, even if it has no factual knowledge of those people. So that's the issue they're trying to get at here is do we know that BERT actually knows this fact, or is it just using some bias to make its prediction? So what they do is they introduce a couple of heuristics to basically just filter out these examples from the LAMA probe that can either be solved by the string match setting or this revealing person name setting. So they make a harder subset of the data set, essentially. They find that when they test BERT on this harder subset, that its performance drops about 8%. But when they test their knowledge on the enhanced model, which they call E-BERT, the score only drops about 1%. So it's possible that as you make harder knowledge probes, we'll actually see even bigger differences in the performance of knowledge-enhanced models to models without these knowledge enhancements. The next piece of work we'll talk about is actually getting at this issue of the phrasing of the prompt might actually trigger different responses from the language model. So the language model might know the fact, but it might fail on the task due to the phrasing. One reason this might happen is the pretraining is on different contexts and sentence structures in the query. So for example, you might have in your pretraining corpus, the birthplace of Barack Obama is Honolulu, Hawaii. And this might be something you see in Wikipedia, for instance. That's a common training data set. And then as a researcher, you write, Barack Obama was born in blank. And you can see that this sentence structures are pretty different. So the model might have seen the first fact. But the sentence structure difference is actually enough to confuse it so it can't answer this query. So what they do is they generate a lot more of these prompts by mining templates from Wikipedia. One of the techniques actually uses dependency parsing and also generating paraphrase prompts by taking inspiration from the machine translation literature and using back translation. 
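As a small illustration of this phrasing sensitivity, the sketch below probes the same fact with a few hand-written rewordings and prints the top filler for each; an ensemble would average the distributions across templates instead of trusting any single phrasing. The templates here are invented for the example and are not the mined or back-translated prompts from the paper.

```python
# Sketch of probing one fact with several prompt phrasings. The templates are
# made-up illustrations; the point is that rankings can shift with small rewordings.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "Barack Obama was born in [MASK].",
    "The birthplace of Barack Obama is [MASK].",
    "Barack Obama is a native of [MASK].",
]

for t in templates:
    top = unmasker(t, top_k=1)[0]
    print(f"{t:45s} -> {top['token_str']} ({top['score']:.2f})")
# Averaging the full distributions across templates before taking the argmax is
# one simple way to ensemble the prompts.
```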
So you generate a lot more prompts to try to query the language models and figure out do small variations in the prompt trigger the correct prediction from the language model. They also experimented ensembling prompts. So if we give the model multiple prompts and then take some probability averaged over these different prompts, can we improve the performance on the model returning the correct prediction? So we give it a higher chance of seeing a context that it might have actually seen during pretraining. They find that the performance on LAMA increases when they either use a top performing prompt or when they use this ensembling approach. So this suggests that the original LAMA really was a lower bound on the amount of knowledge encoded in these language models. And changing the phrasing can actually help the model recall the correct answer. This table's a bit frightening. But they find that small changes in the query can lead to really large gains on performance. So if you just have a query like x plays in y position, and then you change that to x plays that y position, this can actually lead to a 23% accuracy gain on this particular relation in terms of the model actually being able to call the correct answer. Or even just x was created in y to x is created in y, 10% gain. So I think this motivates a need to not only develop better ways to query these models but probably also build language models that are actually more robust to the query itself. So in addition to probes, another way to evaluate these language models is by looking at how well they transfer from the pretrained representation to downstream tasks. And so the idea here is you're actually going to fine-tune the pretrained representation on different downstream tasks similar to how you would evaluate BERT on GLUE tasks. So common tasks that are used for this are relation extraction, entity typing, and question and answering. Relation extraction is where you want to predict the relation between two entities. So this is getting back at one of the questions earlier in the talk in terms of, well, how do you get the relation that's in the edges in these knowledge bases. So Given two entities, you learn a model to predict what is the relation between then. Entity typing is the task of given an entity, what is the type of the entity. So here, Alice robbed the bank. You want to predict here as a criminal. And then, you guys, are very familiar with question and answering. So the idea of these common-- of these tasks, is that they're knowledge intensive so they're good candidates to see how do all of these pretrained representations actually transfer the knowledge to these downstream tasks. Here we look at the performance on a relation extraction benchmark called TACRED. And all the models that we show here were at one point, state-of-the-art on TACRED. So this C-GCN is a graph convolutional neural network over dependency trees. The BERT-LSTM-base is ONE of the first works that showed that you could actually get state-of-the-art performance with BERT on relation extraction. And this is just putting LSTM layer over BERT's output. ERNIE is the work that we talked about with the pretrained entity embeddings. Matching the Blanks, we didn't get to today. But it's a really interesting work about learning meaningful relational representations. And it falls more into the training data modification approaches and that they are actually masking out entities again. And then KnowBERT is what we talked about. 
The W and W here means they actually encode two knowledge bases in KnowBERT. So they're encoding WordNet, and they're also encoding Wikipedia. And the high-level takeaway from this table is that you can see that the recent knowledge-enhanced models have achieved state-of-the-art over the original models that once performed very well on TACRED. And we have gains of about five F1 points here. Another interesting takeaway from this table is there seems to be a trade-off in the size of the language model that's necessary to get a certain performance. So if you just consider the size of the language model, then KnowBERT performs the best. But if you don't consider that, then it ties with Matching the Blanks. So overall, this is pretty good evidence that these knowledge-enhanced methods are, in fact, transferring to these knowledge-intensive downstream tasks that can really take advantage of these pretrained representations. We also have results on entity typing. So here we're comparing a slightly different set of models. Some of the baselines are LSTM models that were designed for entity typing. And we have ERNIE and KnowBERT leading the, I guess, leaderboard here on the entity typing task of Open Entity. And we see gains of about 15 F1 points with ERNIE and KnowBERT. So once again, we really do see that these knowledge-rich pretrained representations are transferring and helping on these knowledge-intensive downstream tasks. So just to recap, we talked about probes, which evaluate the knowledge already present in models. These don't require any more training. But it can be challenging to construct benchmarks to actually make sure you're testing the knowledge in these language models. It can also be challenging to construct the queries used in the probe. We then talked about downstream tasks. These are a bit of an indirect way to evaluate knowledge, in that they have this extra component of fine-tuning. But they're a good way to evaluate how useful this knowledge-rich pretrained representation is in actual applications. So I just touched on the exciting work in this area, but there's many other directions if you want to dive more into this. So there's retrieval-augmented language models, which learn knowledge retrievers to figure out what documents might be relevant for predicting the next word. There's work on modifying the knowledge in language models. So I talked about how this is one of the obstacles and challenges to using language models as knowledge bases, and there's been recent work in this area. We also saw how important the knowledge pretraining task was, although there are many papers proposing different tasks to do the knowledge pretraining, so it's still an open question what tasks are best to add to encode more knowledge. There's also been work on more efficient knowledge systems, such as the NeurIPS EfficientQA challenge, which aims at building the smallest QA systems. And then finally, there's been work on building better knowledge benchmarks that build on the benchmarks we saw today. So that's all I have for today. And I hope your final projects are going well.
Stanford_CS224N_Natural_Language_Processing_with_Deep_Learning_2023
Stanford_CS224N_NLP_with_Deep_Learning_Winter_2021_Lecture_3_Backprop_and_Neural_Networks.txt
hi everyone i'll get started okay so we're now i'm back for the second week of cs224n um on natural language processing with deep learning okay so um for today's lecture what we're going to be looking at is all the math details of doing neural net learning first of all looking at how we can work out by hand um gradients for training neural networks and then looking at how it's done more algorithmically which is known as the back propagation algorithm and correspondingly for you guys um well i hope you remembered that you know one minute ago was when assignment one was due and everyone has handed that in if by some chance you haven't handed it in um really should hand it in as soon as possible best to preserve those late days for the harder assignments so i mean i actually forgot to mention we actually did make one change um for this year to make it a bit easier when occasionally people join the class a week late if you want to this year in the grading um assignment one can be discounted and we'll just use your other four assignments but if you've been in the class so far for that 98 percent of people well since assignment one is the easiest assignment again it's silly not to do it and have it as part of your grade okay so starting today we've put out assignment two and assignment two is all about making sure you really understand the math of neural networks and then the software that we use to do that math so this is going to be a bit of a tough week for some so for some people who are great on all their math and backgrounds um they'll feel like this is stuff they know well nothing very difficult but i know there are quite a few of you um who this lecture and week is the biggest struggle of the course we really do want people to actually have an understanding of what goes on in your network learning rather than viewing it as some kind of deep magic and i hope that some of the material we give today and that you read up on and use in the assignment will really give you more of a sense of what these neural networks are doing and how it is just math that's applied in the systematic large scale that works out the answers and that this will be valuable and giving you a deeper sense of what's going on but if this material seems very um scary and difficult you can take some refuge in the fact that there's fast light at the end of the tunnel since this is really the only lecture that's heavily going through the math details of neural networks after that we'll be kind of popping back up to a higher level and by and large after this week we'll be making use of software to do a lot of the complicated math for us um but nevertheless i hope this is valuable i'll go through everything quickly today but if this isn't stuff that you know backwards i really do encourage you um to you know work through it and get help as you need it so do come along to our office hours there are also a number of pieces of tutorial material given in the syllabus so there's both the lecture notes there's some materials from cs231 um in the list of readings the very top reading is uh some material put together by kevin clark a couple of years ago and actually that one's my favorite the presentation there fairly closely follows the presentation in this lecture of going through matrix calculus so you know personally i'd recommend starting with that one but there are four different ones you can choose from if one of them seems more helpful to you two other things on what's coming up um actually for thursday's lecture we make a 
big change and thursday's lecture is probably the most linguistic lecture of the whole class where we go through the details of dependency grammar and dependency parsing some people find that tough as well but at least it'll be tough in a different way and then one other really good opportunity is this friday we have our second tutorial at 10 a.m which is an introduction to pie torch which is the deep learning framework that we'll be using for the rest of the class once we've gone through these first two assignments where you um do things by yourself um so this is a great chance to get intro to pytorch they'll be really useful for later in the class okay um today's material is really all about sort of the math of neural networks but just to sort of introduce a setting where we can work through this i'm going to introduce a simple nlp task and a simple form of classifier that we can use for it so the task of named entity recognition is a very common basic nlp task and the goal of this is you're looking through pieces of text and you're wanting to label by labeling the words which words belong to entity categories like persons locations products dates times etc so for this piece of text last night paris hilton wowed in the sequin gown samuel quinn was arrested in the hilton hotel in paris in april 1989 the the some words are being labeled as named entities as shown um these two sentences don't actually belong together in the same article but i chose those two sentences to illustrate the basic point that it's not that you can just do this task by using a dictionary yes a dictionary is helpful to know that paris can possibly be a location but paris can also be a person name so you have to use context to get named entity recognition right okay well how might we do that with the neural network there are much more advanced ways of doing this but a simple yet already pretty good way of doing um named entity recognition within a simple neural net is to say well what we're going to do is use the word vectors that we've learned about and we're going to build up a context window of word vectors and then we're going to put those through a neural network layer and then feed it through a softmax classifier of the kind that we um sorry i said that wrong and then we're going to feed it through a logistic classifier of the kind that we saw when looking at negative sampling which is going to say for a particular entity type such as location is it high probability location or is it not a high probability location so for a sentence like the museums in paris are amazing to see what we're going to do is for each word say we're doing the word paris we're going to form a window around it say a plus or minus two word window and so for those five words we're going to get word vectors for them from the kind of word debacle glove word vectors we've learned and we're going to make a long vector out of the concatenation of those five word vectors so the word of interest is in the middle and then we're going to feed this vector to a classifier which is at the end going to have a probability of the word being a location and then we could have another classifier that says the probability of the word being a person name and so once we've done that we're then going to run it at the next position so we then say well is the word r a location and we'd feed a window of five words as then in paris are amazing too and put it through the same kind of classifier and so this is the classifier that we'll use so it's input will be this 
word window so if we have d dimensional word vectors this will be a 5d vector and then we're going to put it through a layer of a neural network so the layer of the neural network is going to multiply this vector by a matrix add on a bias vector and then put that through a non-linearity such as the soft max transformation that we've seen before and that will give us a hidden vector which might be of a smaller dimensionality such as this one here and so then with that hidden vector we're then going to take the dot product of it with an extra vector here here's u so we take u dot product h and so when we do that we're getting out a single number and that number can be any real number and so then finally we're going to put that number through a logistic transform of the same kind that we saw when doing negative sampling the logistic transform will take any real number and it will transform it into a probability that that word is a location so its output is the predicted probability of the word belonging to a particular class and so this could be our location classifier which could classify each word in a window as to what the probability is that it's a location word and so this little neural network here is the neural network i'm going to use today when going through some of the math but actually i'm going to make it even easier on myself i'm going to throw away the logistic function at the top and i'm really just going to work through the math of the bottom three quarters of this if you look at kevin clark's handout that i just mentioned he includes when he works through it also working through the logistic function and we also saw working through a softnext in the first lecture when i was working through some of the word today model okay um so the overall question we want to be able to answer is so here's our stochastic gradient descent equation that we have existing um parameters of our model and we want to update them based on our current loss which is at the j of theta so for getting our um loss here that the true answer as to whether a word is a location or not will be either you know one if it is a location or zero if it isn't our logistic classifier will return some number like um 0.9 and we'll use the distance away from what it should have been squared as our loss um so we work out a loss and then we're moving a little distance in the negative of the gradient which will be in changing our parameter estimates in such a way that they reduce the loss and so this is already being written in terms of a whole vector of parameters which is being updated as to a new vector of parameters but you can also think about it that for each individual parameter theta j that we're working out the partial derivative of the loss with respect to that parameter and then we're moving a little bit in the negative direction of that um that's going to give us a new value for parameter theta j and we're going to update all of the parameters of our model as we learn i mean in particular in contrast to what commonly happens in statistics we also we update not only the sort of parameters of our model that are sort of weights in the classifier but we also will update our data representation so we'll also be changing our word vectors as we learn okay so to build neural nets i.e to train neural nets based on data what we need is to be able to compute this gradient of the parameters so that we can then iteratively update the weights of the model and efficiently train a model that has good weights i.e that has high 
accuracy and so how can we do that um well what i'm going to talk about today is first of all um how you can do it by hand and so for doing it by hand this is basically a review of matrix calculus and that'll take quite a bit of the lecture and then after um we've talked about that for a while i'll then shift gears and introduce the back propagation algorithm which is the central technology for neural networks and that technology is essentially the efficient application of calculus on a large scale as we'll come to talking about soon so for computing gradients by hand what we're doing is matrix calculus so we're working with vectors and matrices and working out gradients and this can seem like pretty scary stuff and well to the extent that you're kind of scared and don't know what's going on one choice is to work out a non-vectorized gradient by just working out what the partial derivative is for one parameter at a time and i showed a little example of that in the first lecture but it's much much faster and more useful to actually be able to work with vectorized gradients and in some sense if you're not very confident this is kind of almost a leap of faith but it really is the case that multi-variable calculus is just like single variable calculus except you're using vectors and matrices so providing you remember some basics of single variable calculus you really should be able to do this stuff and get it to work out lots of other sources i've mentioned the notes you can also look at the textbook for math 51 or which also has quite a lot of material on this i know some of you have bad memories of math 51. um okay so let's go through this and see how it works from ramping up from the beginning so the beginning of calculus is you know we have a function with one input and one output f of x equals x cubed and so then its gradient is its slope right so that's its derivative so um its derivative is three 3x squared and the way to think about this is how much will the output change if we change the input a little bit right so what we're wanting to do in our neural net models is change what they output so that they do a better job of predicting the correct answers when we're doing supervised learning and so what we want to know is if we fiddle different parameters of the model how much of an effect will that have on the output because then we can choose how to fiddle them in the right way to move things down right so you know when we're saying that the derivative here is 3x squared well what we're saying is that if you're at x equals one if you fiddle the input a little bit the output will change three times as much three times one squared and it does so if i say what's the value at 1.01 it's about 1.03 it's changed three times as much and that's its slope but at x equals four um the derivative is 16 times 348 so if we fiddle the input a little it'll change 48 times as much and that's roughly what happens 4.01 cubed is is 64.48 now of course you know this is just sort of i'm showing it for a small fiddle but you know that's an approximation to the actual truth okay so um then we sort of ramp up to the more complex cases which are more reflective of what we do with neural networks so um if we have a function with one output and n inputs then we have a gradient so a gradient is a vector of partial derivatives with respect to each input so we've got n inputs x1 to xn and we're working out the partial derivative of f with respect to x1 the partial derivative of f with expected respect to x2 etc and 
we then get a vector of partial derivatives where each element of this vector is just like a simple derivative with respect to one variable okay so from that point we just keep on ramping up for what we do with neural networks so commonly when we have something like a layer in a neural network we'll have a function within inputs they'll be like our word vectors then we do something like multiply by a matrix and then we'll have m outputs so we have a function now which is taking n inputs and is producing m outputs so at this point um what we're calculating for the gradient is what's called a jacobian matrix so for m inputs and n outputs the jacobian is an m by n matrix of partial of every combination of partial derivatives so um i function f splits up into these different sub functions f1 through m fm which generate each of the m outputs and so then we're taking the partial derivative of f1 with respect to x1 through the partial derivative of f1 with respect to xn then heading down you know we make it up to the partial derivative of fm with respect to x1 etc so we have every possible partial derivative of an output variable with respect to one of the input variables okay so in simple calculus when you have a composition of one variable functions so that if you have um y equals x squared and then z equals three y um that's then z is a composition of two functions of well you're composing two functions to get z as a function of x then you can work out the derivative of z with respect to x and the way you do that is with the chain rule and so in the chain rule you multiply derivatives so dz dx equals dz dy times dydx so dzy is just 3 and dydx is 2x so we get 3 times 2x so that overall um the derivative here is 6x and since if we multiply this together we're really saying that z equals 3x squared um you should trivially be able to see again aha its derivative is 6x so that works okay um so once we move into vectors and matrices and jacobians um it's actually the same game so when we're working with those we can compose functions and work out their derivatives by simply multiplying jacobians so if we have start with an input x and then put it through the simplest form of neural network layer and say that z equals wx plus b so we multiply that the x vector by matrix w and then add on a bias vector b and then typically we'd put things through a non-linearity f so f could be a sigmoid function we'll then say h equals f of z so this is the composition of two functions in terms of um vectors and matrices so we can use jacobians and we can say the partial of h with respect to x is going to be the product um of the partial of h with respect to z and the partial of zero with respect to x and this all does work out so let's start going through some examples of how these things work slightly more concretely first just particular jacobians and then composing them together so one case we look at is the non-linearities that we put a vector through so this is something like putting a vector through the sigmoid function f um and so if we have an intermediate vector z and we turn into vector h by putting it through us a logistic function we can say what is dhdz um well for this um formally this is a function that has n inputs and n outputs so at the end of the day we're computing an n by n jacobian and so what that's meaning is the elements of this n by n jacobian are going to take the partial derivative of each output with respect to each input and well what is that going to be in this case well in this case 
because we're actually just computing element wise a transformation such as a logistic transform of each element zi like the second equation here if i equals j we've got something to compute whereas if i doesn't equal j um there's just the input has no influence on the output and so the derivative of zero so if i doesn't equal j we're going to get a zero and if i does equal j what then we're going to get the regular one variable derivative of the logistic function which if i remember correctly um you were asked to compute now i can't remember it's assigned one or assignment two but one of the two asks you to compute it um so our jacobian for this case looks like this we have a diagonal matrix with the um the derivatives of each element along the diagonal and everything else is zero okay so let's look at a couple of other jacobians um so if we are asking if we've got this w x plus b basic neural network layer and we're asking um for the gradient with respect to x then what we're going to have coming out is that that's actually going to be the matrix w so this is where what i hope you can do is look at the notes at home and work through [Music] this exactly and see that this is actually the right answer but this is the way in which if you just have faith and think this is a just like single variable calculus except i've now got vectors and matrices the answer you get is actually what you expected to get because this is just like um the derivative of ax plus b with respect to x where it's a so similarly if we take the partial derivative with respect to b of w x plus b we get out the identity matrix okay then one other jacobian that we mentioned um while in the first lecture while working through word to vac is if you have the dot product of two um vectors i that's a number that what you get coming out of that it so that the partial derivative of u t h with respect to u is h transpose and at this point there's some fine print that i'm going to come back to in a minute so this is the correct jacobian right because in this case um we have the dimension of h inputs and we have one output and so we want to have a row vector um but there's a little bit more to say on that that i'll come back to in about 20 slides um but this is the correct jacobian okay so if you are not familiar with these kind of jacobians do please look at some of the notes that are available and try and compute these in more detail element wise and convince yourself that they really are right but i'm going to assume these now and show you what happens when we actually then work out gradients for at least a mini little neural net okay so here is most of this um neural net i mean as i commented um that you know really we'd be working out the partial derivative of the loss j with respect to these variables but for the example i'm doing here i just i've locked that off to keep it a little simpler and more manageable for the lecture and so we're going to just work out the partial derivative of the score s which is a real number with respect to the different parameters of this model where the parameters of this model are going to be the w and the b and the u and also the input because we can update the weight vectors of the the word vectors of different words based on tuning them to better predict um the classification outputs that we desire so let's start off with a fairly easy one where we want to update um the bias vector b to have our system um classify better so to be able to do that what we want to work out is the partial 
derivatives of s with respect to b so we know how to put that into our stochastic gradient update for the b parameters okay so how do we go about doing these things so the first step is we want to sort of break things up into different functions of minimal complexity that compose together so in particular this neural net layer h equals f of w x plus b it's still a little bit complex so let's decompose that one further step so um we have the input x we then calculate the linear transformation z equals wx plus b and then um we put things through the sort of element-wise non-linearity h equals f of z and then we do the dot product with u and you know it's useful for working these things out you know split into pieces like this have straight what your different variables are and to know what the dimensionality of each of these variables is it's well worth just writing out the dimensionality of every variable and making sure that the answers that you're computing are of the right dimensionality so at this point though what we can see is that calculating s is the product of three sorry is the composition of three functions around x so for working out the partials of s with respect to b um it's the composition of the three functions shown on the left and so therefore the gradient of s with respect to b we're going to take um the product of these three partial derivatives okay so how do what do we that so um so you know we've got the s equals u t h so that's the sort of the top um corresponding partial derivative partial derivative of h with respect to z partial derivative of z with respect um to b which is the first one that we're working out okay so we want to work this out and if we're lucky we remember those jacobians i showed previously about the jacobian for a vector dot product the jacobian for the non-linearity and the jacobian for the simple linear transformation and so we can use those so for the partials of s with respect to h well that's going to be ut using the first one the partials of h with respect to z okay so that's the non-linearity and so that's going to be the matrix that's the diagonal matrix with the element wise derivative f prime of z and 0 elsewhere and then for the w x plus b when we're taking the partials with respect to b that's just the identity matrix so we can simplify that down a little the identity matrix disappears and since u t is a vector and this is a diagonal matrix we can rewrite this as ut hadamard product of f prime of z i think this is the first time i've used this little circle for hadamard product but it it's something that you'll see quite a bit in your network work since it's often used so when we have two vectors ut and this vector here sometimes you want to do an element-wise product so the output of this will be a vector where you've taken the first element of each and multiplied then the second element of each and multiplied them etc downwards and so that's called the hadamard product and it's what we're calculating as to calculate a vector which is the gradient of s with respect to b okay so that's good so we now have a gradient of s with respect to b and we could use that in our stochastic gradient but we don't stop there we also want to work out the gradient with respect to others of our parameters so we might want to next go on and work out the gradient of s with respect to w well we can use the chain rule just like we did before right so we've got the same product of functions and everything is going to be the same apart from me now taking um 
the derivatives with respect to w rather than b um so it's now going to be um the partial of s with respect to h h with respect to z and z with respect to w and the important thing to notice here and this leads into what we do with the backpropagation algorithm is wait a minute this is very similar to what we've already done so when we're working out the gradients of s with respect to b the first two terms were exactly the same it's only the last one that differs so to be able to build um or to train neural networks efficiently this is what happens all the time and it's absolutely essential that we use an algorithm that avoids repeated computation and so the idea we're going to develop is when we have this equation stack that there's sort of stuff that's above um where we compute z and we're going to be sort of that'll be the same each time and we want to compute something from that that we can then sort of feed downwards when working out the gradients with respect to w x or b and so we do that by defining delta which is delta is the partials composed that are above the linear transform and that's referred to as the local error signal it's what's being passed in from above to the linear transform and we've already computed the gradient of that in the preceding slides and so the final form of the partial of s with respect to b will be um delta times the remaining part and well we've seen that you know for um partial of s with respect to b um the partial of z with respect to b is just the identity so the end result was delta but in this time we're then going to have to work out the partial of z with respect to w and multiply that by delta so that's the part that we still haven't yet done so um and this is where things get in some sense a little bit hairier and so there's something that's important to explain um so you know what should we have for the jacobian of um dsdw well that's a function that has one output the output is just a score a real number and then it has n by m inputs so the jacobian is um a 1 by n by m matrix i a very long row vector but um that's correct math but it turns out that that's kind of bad for our neural networks because remember what we want to do with our neural networks is do stochastic gradient descent and we want to say theta nu equals theta old minus a small multiplier times the gradient and well actually the w matrix is an n by m matrix and so we couldn't actually do the subtraction if this gradient we calculate is just a huge row vector we'd like to have it as the same shape as the w matrix in neural network land when we do this um we depart from pure math at this point and we use what we call the shape convention so what we're going to say is um and you're meant to use this for answers in the assignment that the shape of the gradient we're always going to make to be the shape of the parameters and so therefore um the sdw we're also going to represent as an n by m matrix just like w and we're going to reshape the jacobian to place it into this matrix shape okay so if we want to place it into this matrix shape what do we what are we going to want to get for the sdw well we know that it's going to involve delta our local error signal and then we have to work out something for dz dw um well since c equals wx plus b you'd kind of expect that the answer should be x um and that's right so the answer um to dsdw is um going to be delta transpose times x transpose and so the form that we're getting for this derivative is going to be the product of the local error 
signal at that's in comes from above versus what we calculate from the local input x so that shouldn't yet be obvious why that is true so let me just go through in a bit more detail why that's true so when we want to work out um d s d w right it's sort of delta times dz dw where um what that's computing for z is wx plus b so let's just consider for a moment what the derivative is with respect to a single weight w i j so w i j might be w 2 3 that's shown in my little neural network here and so the first thing to notice is that w i j only contributes to zi so it's going into z2 which then computes h2 and it has no effect whatsoever on h1 okay so when we're working out um dzi dw i j it's going to be d w i x that sort of row that row of the matrix plus bi which means um that for we've got a kind of a sum of w i k times x k and then for this sum this is like one variable calculus that when we're taking the derivative of this with respect to w i j every term and this sum is going to be zero the derivative is going to be zero except for the one that involves w i j and then the derivative of that is just like a x with respect to a it's going to be x so you get x j out as the answer and so the end result of that is that when we're working out what we want is the answer is that we're going to um get that these columns where x1 is all that's left x2 is all that's left through xm is all that's left and then that's multiplied by the vectors of the local error signal from above and what we want to compute is this outer product matrix where we're getting the different combinations of the delta and the x and so we can get the n by m matrix that we'd like to have by our shape convention by taking delta transpose which is n by 1 times x transpose which is then 1 by m and then we get this outer product matrix um so like that's the kind of a hacky argument that i've made it's certainly a way of doing things that the dimensions work out and it sort of makes sense um there's a more detailed run through this that appears in lecture notes um and i encourage you to sort of also look at the more matty version of that here's a little bit more information about um the shape convention so well first of all one um more example of this so when you're working out the sdb that that comes out as a it's jacobian is a row vector um but similarly you know according to shape convention we want our gradient to be the same shape as b and b is a column vector so that's sort of again they're different shapes and you have to transpose one to get the other and so effectively what we have is a disagreement between the jacobian form so the jacobian form makes sense for you know calculus and math because if you want to have it like i claimed that matrix calculus is just like single variable calculus apart from using vectors and matrices you can just multiply together the partials that only works out if you're using jacobians but on the other hand if you want to do stochastic gradient descent and be able to sort of subtract off a piece of the gradient that only works if you have the same shape matrix for the gradient as you do for the original matrix and so this is a bit confusing but that's just the reality there are both of these um two things so the jacobian form is useful in doing the um calculus but for the answers in the assignment we want the answers um to be presented using the shape convention so that the gradient is shown in the same shape as the parameters and therefore you'll be able to it's the right shape for doing a 
gradient update by just subtracting a small amount of the gradient so for working through things there are then basically two choices one choice is to work through all the math using jacobians and then right at the end um to reshape following the shape convention to give the answer so that's what i did when i worked out dsdb we worked through it using jacobians we got an answer but it turned out to be a row vector and so well then we have to transpose it at the end to get it into the right shape for the shape convention um the alternative is um to always follow the shape convention um and that's kind of what i did when i was then working out dsdw i didn't fully use jacobians i said oh well when we work out whatever was dz dw let's work out what shape we want it to be and what to fill in the cells with and if you're sort of trying to do it um immediately with the shape convention it's a little bit more hacky in a way since you know you have to look at the dimensions for what you want and figure out when to transpose or to reshape the matrix to be at the right shape but the kind of informal reasoning that i gave is what you do and what works and you know one way of and there are sort of hints that you can use right that you know that your gradient should always be the same shape as your parameters and you know that the error message coming in will always have the same dimensionality as that hidden layer and you can sort of work it out always following the shape convention okay um so that is hey doing this is all matrix calculus so after pausing for breath for a second the rest of the lecture is then okay let's look at how our software trains neural networks using what's referred to as the back propagation the back propagation algorithm um so the short answer is you know basically we've already done it the rest of the lecture is easy um so you know essentially i've just shown you what the back propagation algorithm does um so the back propagation algorithm is judiciously taking and propagating derivatives using the matrix chain rule the rest of the back propagation algorithm is to say okay when we have these neural networks we have a lot of shared structure and shared derivatives so what we want to do is maximally efficiently reuse derivatives of higher layers when we're computing derivatives for lower layers so that we minimize computation and i already pointed that out in the first half but we want to systematically exploit that and so the way we do that in our computational systems is they construct computation graphs um so this maybe looks a little bit like what you saw in a compiler's class if you did one right that you're creating i'm i call it here computation graph but it's really a tree right so you're creating here this tree of computations in this case but in more general case it's some kind of directed graph of computations which has source nodes which are inputs either inputs like x or input parameters like w and b and it's interior nodes are operations and so then once we've constructed a graph and so this graph corresponds to exactly the example i did before right this was our little neural net that's in the top right and here's the corresponding computation graph of computing w x plus b put it through the sigmoid non-linearity f multiply the resulting dot product of the resulting vector with u gives us our output score s um okay so what we do to compute this is we pass along the edges the results of operations so this is w x then z then h and then our output is s and so the 
first thing we want to be able to do to compute with neural networks is to be able to compute for different inputs what the output is and so that's referred to as forward propagation and so we simply run this expression much like you'd standardly do in a compiler to compute the value of s and that's the forward propagation phase but the essential additional element of neural networks is that we then also want to be able to send back gradients which will tell us how to update the parameters of the model and so it's this ability um to send back gradients which gives us the ability for these models to learn once we have a loss function at the end we can work out how to change the parameters of the model so that they more accurately produce the desired output i they minimize the loss and so it's doing that part that then is called back propagation so we then once we forward propagated a value with our current parameters we then um head backwards reversing the direction of the arrows and pass along gradients down to the different parameters like b and w and u that we can use to change using stochastic gradient to send what the value of b is of what the value of w is so we start off with dsds which is just one and then we run our back propagation and we're using the sort of same kind of composition of jacobian so we have the sdh here and the sdz and we progressively pass back those gradients so we just need to work out how to efficiently and cleanly do this in a computational system and so let's sort of work through again a few of these cases so the general situation is um we have a particular node so a node is where some kind of operation like multiplication or a non-linearity happens and so the simplest case is that we've got one output and one input so we'll do that first so that's like h equals f of z so what we have is an upstream gradient um dsdh and what we want to do is compute the downstream gradient of dsdz and the way we're going to do that is say well for this function f it's a function it's got a derivative a gradient so what we want to do is work out that local gradient dhdz and then that gives us everything that we need to work out the sdz because that's precisely we're going to use the chain rule we're going to say the dsdz equals the product of the sdh times the hdz where this is again using jacobians okay so the general principle that we're going to use is the downstream gradient equals the upstream gradient times the local gradient okay sometimes it gets a little bit more complicated so we might have multiple inputs to a function so this is the matrix vector multiply so z equals wx okay when there are multiple inputs we still have an upstream gradient dsdz but what we're going to do is work out a local gradient with respect to each input so we have dz dw and dzdx and so then at that point it's exactly the same for each piece of it we're going to work out the downstream gradients the sdw and the sdx by using the chain rule with respect to the particular local gradient so um let's go through an example of this i mean this is kind of a silly example it's not really an example that looks like a typical neural net but it's sort of a simple example where we can show some of the components of what we do so what we're going to do is want to calculate f of x y z which is being calculated as x plus y times the max of y and z um and we've got you know particular values that we're starting off with x equals one y equals two and z equals zero so these are the current values of our 
parameters and so we can say okay well we want to build an expression tree for that here's our expression tree we're taking x plus y we're taking the max of y and z and then we're multiplying them and so our forward propagation phase is just to run this so we take the values of our parameters and we simply start to compute with them right so we have one two two zero um and we add them as three the max is two we multiply them and that gives us six okay so then at that point we then want to go and work out um how to do things um for back propagation and how these back propagation steps work and so the first part of that is sort of working out what our local gradients are going to be um so um so this is a here and this is x and y so d a d x since a equals x plus y is just going to be one and d a d y is also going to be one um then um for b equals the max of y z um so this is this max node so the local gradients for that is um it's going to depend on y b whether y is greater than z so d b d y is going to be one if and only if y is greater than z which it is at our particular point here so that's one and db dz is going to be one only if z is greater than y so for our particular values here that one is going to be zero um and then finally here we're calculating the product f equals a b um so for that um we're going to um wait sorry that slides along perfect okay so for the product um the derivative of f with respect to a is equal to b which is two and the derivative of f with respect to b is a equals three so that gives us all of the local gradients at each node and so then to run backpropagation we start with dfdf which is just one and then we're going to work out the downstream equals the upstream times the local okay so the local so when you have a product like this um note that sort of the gradients flip so we take upstream times the local which is 2 oops so the downstream is 2 on this side dfdb is three so we're taking upstream times local that gives us three um and so that gives us back propagates values to um the plus and max nodes and so then we continue along so for the max node um the local gradient dbdy equals one so we're going to take upstream is three so we tend to take three times one and that gives us three d b d z is zero because of the fact that z's value is not the max um so we're taking three times zero and saying the gradient there is zero so finally doing the plus node um the local gradients for both x and y there are one so we're just getting two times one in both cases and we're saying that the gradients there are two okay and so again at the end of the day um the interpretation here is that this is giving is this information as to if we wiggle the values of x y and z how much of a difference does it make to the output what is the slope the gradient with respect to the variable so what we've seen is that since z isn't the max of y and z if i change the value of z a little like if i make z 0.1 or minus 0.1 it makes no difference at all to what i compute as the output so therefore the gradient there is zero if i change the value of x a little then that is going to have an effect and it's going to affect the output by twice as much as the amount i change it oops right so and that's because um the dfdz equals two um so interestingly um so i mean we can basically work that out so if we imagine um making sort of x 2.1 well then what we'd calculate the max is to [Music] oh sorry sorry if we make x 1.1 we then get the max here is 2 and we get 1.1 plus 2 as 3.1 so we get 3.1 times 
2 so that'd be about 6.2 so changing x by 0.1 has added 0.2 to the value of f um conversely for the value of y we find that the df d y equals 5 so what we do when we've got two things coming out here as i'll go through again in a moment is we're summing the gradients so again three plus two equals five and empirically that's what happens so if we consider fiddling the value of y a little let's say we make it a value of 2.1 then the prediction is they'll have five times as bigger an effect on the output value we compute and well what do we compute so we compute 1 plus 2.1 so that's 3.1 and we compute the max of um 2.1 and 0 as 2.1 so we'll take the product of 2.1 and 3.1 and i calculate that in advance since i can't really do this arithmetic in my head and the product of those two is 6.51 so it has gone up about by 0.5 so we've multiplied my fiddly at by 0.1 by five times to work out the magnitude of the effect of the output okay so for this stuff you know before i did the case of you know when we had one oops one in and one out here and multiple ends and one out here the case that i hadn't actually dealt with is the case of when you have multiple outward branches but that then turned up in the computation of y so once you have multiple outward branches what you're doing is your summing so that when you want to work out the dfdy you've got a local gradient you've got two upstream gradients and you're working it out with respect to each of them as in the chain rule and then you're summing them together to work out the impact at the end right so we also saw some of the other node intuitions which it's useful to have um doing this so when you have an addition um that distributes the upstream gradient to each of the things below it when you have max it's like a routing node so when you have max you have the upstream gradient and it goes to one of the branches below it and the rest of them get no gradient um when you then have a multiplication it has this effect of switching the gradient so if you're taking three by two um the gradient on the two side is three and on the three side is two and if you think about in terms of how much effect you get from when you're doing this sort of wiggling that totally makes sense right because if you're multiplying another number by three then any change here is going to be multiplied by 3 and vice versa okay so that so this is the kind of computation graph that we want to use to work out derivatives in an automated computational fashion um which is the basis of the back propagation algorithm but at that point that you know this is what we're doing but there's still you know one mistake that we can make it would be wrong for us to sort of say okay well first of all we want to work out the sdb so look we can start up here we can propagate our upstream errors work out local gradients upstream error local gradient and keep all the way down and get the dsdb down here okay next we want to do it for dsdw let's just run it all over again because if we did that we'd be doing repeated computation as i showed in the first half that this term is the same both times this term is the same both times this term is the same both times that only the bits at the end differ so what we want to do is avoid duplicated computation and compute all the gradients um that we're going to need um successively so that we only do them once and so that was analogous when i introduced this delta variable when we computed gradients by hand so starting off here from d um we starting off here 
with dsds is one we then want to one time compute gradient in the green here one time compute the gradient in green here that's all common work then we're going to take the local gradient um for dz db and multiply that by the upstream gradient to work out dsdb and then we're going to take the same upstream gradient and then um work out the local gradient here um and then sort of propagate that down to give us the sdw so the end result is we want to sort of systematically work to forward computation forward in the graph and backward computation back propagation backward in the graph in a way that we do things efficiently so this is the general form of the algorithm which works for an arbitrary computation graph so at the end of the day we've got a single scalar output z and then we have inputs and parameters which compute z and so once we have this computation graph and i added in this funky extra arrow here to make it a more general computation graph well we can always say that we can work out a starting point something that doesn't depend on anything so in this case both of these bottom two nodes don't depend on anything else so we can start with them and we can start to compute forward we can compute values for all of these sort of second row from the bottom nodes and then we're able to compute um the third lens up so we can have a topological sort of the nodes based on the dependencies in this directed graph and we can compute the value of each node given some subset of its predecessors which it depends on and so doing that is referred to as the forward propagation phase and gives us a computation of the scalar output z using our current parameters and our current inputs and so then after that we run back propagation so for back propagation we initialize the output gradient dz dz as one and then we visit nodes in the reverse order of the topological sort and we compute the gradients downward and so our recipe is that for each node as we head down we're going to compute the gradient of the node with respect to its successes and the things that it feeds into and how we compute that gradient is using this chain rule that we've looked at so this is sort of the generalized form of the chain rule where we have multiple outputs and so we're summing over the different outputs and then for each output we're computing the product of the upstream gradient and the local gradient with respect to that node and so we head downwards and we continue down in the reverse topological sort order and we work out um the gradient with respect to each variable in this graph and so it hopefully looks um kind of intuitive looking at this picture that if you think of it like this the big oak complexity of forward propagation and backward propagation is the same right in both cases you're doing a linear pass through all of these nodes and calculating values given predecessors and then values given successes i mean you have to do a little bit more work is um for working out the gradients sort of as shown by this chain rule but it's the same big o complexity so if somehow you're implementing stuff for yourself rather than relying on the software and you're calculating the gradiences of a different order of complexity of forward propagation it means that you're doing something wrong you're doing repeated work that you shouldn't have to do okay so this algorithm works for a completely arbitrary computation graph any directed acyclic graph you can apply this algorithm in general what we find is that we build neural 
networks that have a regular layer structure so we have things like a vector of inputs and then that's multiplied by a matrix it's transformed into another vector which might be multiplied by another matrix or summed with another matrix or something right so once we're using that kind of regular layer structure we can then parallelize the computation by working out the gradients in terms of jacobians of vectors and matrices and do things in parallel much more efficiently okay so doing this is then referred to as automatic differentiation and so essentially um if you know the computation graph you should be able to have your compute clever computer system work out um what the derivatives of everything is and then apply back propagation um to work out how to update the parameters and learn and there's actually a sort of an interesting um sort of thing of how history has gone backwards here which i'll just note um so some of you might be um familiar with symbolic um computation packages so those are things like mathematica so mathematica you can give it a symbolic form of a computation and then it can work out derivatives for you so it should be the case that if you give a complete symbolic form of a computation graph um then it should be able to work out all the derivatives for you and you never have to work out a derivative by hand whatsoever and that was actually attempted in a famous um deep learning library called fianno which came out of joshua bendio's group at the university of montreal that it had a compiler that did that kind of symbolic manipulation um but you know somehow that sort of proved um a little bit too too hard a road to follow i imagine it actually might come back again in the future and so for modern deep learning frameworks which includes both tensorflow or pi torch they do 90 percent of um this computation of automatic differentiation for you but they don't actually symbolically compute derivatives so for each particular node or layer of your deep learning system somebody either you or the person who wrote that layer has hand-written the local derivatives but then everything from that point on the sort of the taking doing the chain rule of combining upstream gradients with local gradients to work out downstream gradients that's then all being done automatically for back propagation on the computation graph and so that what that means is for a whole neural network you have a computation graph and it's going to have a forward pass and a backward pass and so for the forward pass you're topologically sorting the nodes based on their dependencies in the computation graph and then for each node you're running forward the forward computation on that node and then for backward propagation you are reversing the topological sort of the graph and then for each node in the graph you're running the backward propagation which is the little bit of backdrop the chain rule at that node and then the result of doing that is you have gradients for your inputs and parameters and so this is the overall software runs this for you and so what you want to do is then actually have stuff for particular nodes or layers in the graph so if i have a multiply gate it's going to have a forward algorithm which just computes that the output is x times y in terms of the two inputs and then i'm going to want to compute to tell it also how to calculate the local derivative so i want to say what is the local derivative so dl dx and the ldy in terms of the upstream gradient dldz and so i will then manually 
work out how to calculate that and normally what i have to do is i assume the forward pass is being run first and i'm going to shove into some local variables for my class the values that were used in the forward computation so as well as computing z equals x times y i'm going to sort of remember what x and y were so that then when i'm asked to compute the backward pass i'm then going to have implemented here um what we saw earlier of um that when it's x y you're going to sort of swap the y and the x um to work out the local gradients and so then i'm going to multiply those by the upstream gradient and i'm going to return i've just written it here as a sort of a little list but really it's going to be a numpy vector of the gradients okay um so that's um 98 of what i wanted to cover um today just um a couple of quick comments um left so um that can and should all be automated sometimes you want to just check if you're computing the right gradients and so the standard way of checking that you're computing the right gradients is to manually work out the gradient by doing a numeric calculation of the gradient and so um you can do that so you can work out what the derivative of x of f with respect to x should be by choosing some sort of small number like 10 to the minus 4 adding it to x subtracting it from x and then so the difference between these numbers is 2h dividing it through by 2h and you're simply working out the rise over the run which is the slope of that point with respect to x and that's an approximation of the gradient of f with respect to x at that value of x so this is so simple you can't make a mistake implementing that and so therefore you can use this to check um where your whether your gradient values are correct or not um this isn't something that you'd want to use much um because not only is it approximate but it's extremely slow because to work this out you have to run the forward computation for every parameter of the model so if you have a model with a million parameters you're now doing a million times as much work to run back prop as as you would do if you're actually using calculus so calculus is a good thing to know but it can be really useful to check that the right values are being calculated in the old days when we hand wrote everything this was kind of the key unit test that people used everywhere these days most of the time you're reusing layers that are built into pie torch or some other deep learning framework so it's much less needed but sometimes you're implementing your own layer and you really do want to check that things are implemented correctly there's a fine point in the way this is written if you saw this in sort of high school calculus class you would have seen rise over run of f of x plus h minus f of x divided by h it turns out that doing this two-sided estimate like this is much much more accurate than doing a one-sided estimate and so you're really much encouraged to use this approximation okay so at that point um we've mastered the core technology of neural nets um backpropagation is recursively and hence efficiently applying the chain rule along the computation graph with this sort of key step that downstream gradient equals upstream abstract the upstream gradient times local gradient and so for calculating with neural nets we do the forward pass to work out values with current parameters then run back propagation and work out the gradient of the loss and currently computed loss with respect to those parameters now to some extent um you know 
with modern deep learning frameworks you don't actually have to know how to do any of this, right? It's the same as not having to know how to implement a C compiler: you can just write C code, run gcc, and it will compile and run the right thing for you. That's the kind of functionality you get from the PyTorch framework. So do come along to the PyTorch tutorial this Friday and get a sense of how easy it is to write new networks using a framework like PyTorch or TensorFlow. It's so easy that high school students across the nation are now doing their science projects by training deep learning systems — you don't have to understand very much to bolt a few neural network layers together and set them computing on some data. But we hope in this class that you're also learning how these things are implemented, so you have a deeper understanding than that. It turns out that sometimes you need that deeper understanding: backpropagation doesn't always work perfectly, and understanding what it's really doing can be crucial to debugging things. We'll actually see an example of that fairly soon when we start looking at recurrent models and some of the problems they have, which will require us to think a bit more deeply about what's happening in our gradient computations. OK, that's it for today.
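Here is a minimal sketch of the multiply gate and the numeric gradient check described above. It is an illustrative example in plain Python/NumPy under assumed names (MultiplyGate, numeric_gradient); it is not the course's released code.

import numpy as np

class MultiplyGate:
    # One node of the computation graph: z = x * y.
    def forward(self, x, y):
        # Cache the inputs; the backward pass needs them for the local gradients.
        self.x, self.y = x, y
        return x * y

    def backward(self, dL_dz):
        # Local gradients "swap" the inputs (dz/dx = y, dz/dy = x),
        # each multiplied by the upstream gradient dL/dz (chain rule).
        return dL_dz * self.y, dL_dz * self.x

def numeric_gradient(f, x, h=1e-4):
    # Two-sided estimate (f(x + h) - f(x - h)) / 2h, elementwise over x.
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in np.ndindex(x.shape):
        orig = x[i]
        x[i] = orig + h
        f_plus = f(x)
        x[i] = orig - h
        f_minus = f(x)
        x[i] = orig  # restore the original value
        grad[i] = (f_plus - f_minus) / (2 * h)
    return grad

# Check the hand-written backward pass against the slow numeric estimate.
gate = MultiplyGate()
x, y = 3.0, -4.0
z = gate.forward(x, y)
dL_dx, dL_dy = gate.backward(dL_dz=1.0)
num_dx = numeric_gradient(lambda v: v[0] * y, np.array([x]))[0]
num_dy = numeric_gradient(lambda v: x * v[0], np.array([y]))[0]
assert abs(dL_dx - num_dx) < 1e-6 and abs(dL_dy - num_dy) < 1e-6

As noted in the lecture, the numeric check is far too slow for training (one extra forward pass per parameter), but it is the standard unit test when you implement a new layer by hand.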
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_18_3D_Vision_Survey_Part_2.txt
it's literally at the top all right uh yeah perfect okay um so let's see with that I'm going to jump into 3D Vision part two uh for which Mel and I have decided we love Nerfs the most so we're just gonna cover nurse today uh if if you want we do we did make a couple of slides on like monocular depth estimation if you want to come up and like check those out afterwards but uh today it's really we're just going to be talking all about Nerfs because they make me happy um so yeah so we recap the nurse a little bit um the basic idea again you have a black box function um that takes in your position in space XYZ and it's going to Output um color your density um and if we want to figure out what an object looks like from a given perspective we're going to query going out from our camera from our current location um and ask our Black Box function what is the color what is the density of all these points and try and from that sort of figure out what what is this image actually going to look like in that process is rendering um that is that is the basic idea of what a Radiance field is um and the recap of volume rendering color is the weighted average of all the samples that we take as we go out along the Rays um your your visibility is proportional to the density um and inversely proportional to the amount of stuff in front of us so if we have lots of stuff in front of us um like for the bulldozer after we've passed through the bulldozer there's lots of stuff that was in front of it particularly right here where the actual object was so even if this point right here returns a really big density even at this point here past the bulldozer returns a really big density because there's a ton of stuff in front of it we're still going to wait it super with a very small value um and you can see sort of the intuition of this on the right hand side this is not particularly color friendly um the top is supposed to be brown the bottom is supposed to be yellow I'm sorry Orange um so you can imagine casting out a ray in a scene and it passes through in the first uh it's just going to pass through a brown object as we sample along this Ray as we sample along this Ray the density that we observe is going to be very little and then it's going to spike when we're passing through the object and then the density is going to be very little again um so we only really pass through one just Brown object um so the color we're going to return is going to be brown and when we actually learn the radiance field we're going to compare it to our ground truth image of this object and say like okay how does how does the color that we came up with using our function compare to the real color and then the second example just to hammer this point home that it's really only the closest thing to you um that contributes to color we have we can imagine two objects in the scene the first one will be orange and the second one will be yellow we have a ray that goes through it both of these objects we're going to sample a whole bunch right the first thing we come into contact with is this Orange Bowl and you can see the density spikes at the orange ball and then goes down again when we're in between when spikes again and we hit the yellow ball the color we're going to return will be orange because all that matters is the color at this location here where the spike first hit um so like sure we do hit a yellow ball eventually but in this case it is behind the orange ball so the color we get back is just Orange um so yeah give people more 
questions on this this Intuition or yes uh yes first sorry his hand is right yeah it's gonna because the density it's the first if the first object you come into contact with is like glass and it's not perfectly dense you're going to get some contributions of color from that maybe you'll get like a weight of like one half and then the object behind it that's super dense um is probably going to contribute to the rest of your color but because even though it's perfectly dense it's 100 dense because it's behind something else it's only going to get a weight of like maybe one half basis yeah say it again oh refraction yeah we'll talk about we'll talk about that a little bit how we handle uh like Reflections and stuff like that um yeah that's a really good question we're going to talk about that in a little bit yeah what's up yes yeah we want to wait up more like how's that how's that so basically what we're going to do is we have the densities say okay say we're sampling the yellow objects here can you all see this okay-ish hopefully okay so say we're sampling the yellow object here first thing we're going to do is we're going to get the density uh like Sigma in this case it's written beta um beta is it's a proxy for just Sigma so I'm just going to write Sigma for now so we get Sigma I right here and we're also going to come up with T so how do we get T what we're going to do is we're basically going to sum all the density along here we're going to sum it all up out of all the density and then take one over the sum of all of our densities accumulated up until here so if there's a lot of stuff right here t t i rather if there's a lot of stuff right here if our density is in front of us are really big that makes this value T super small um so T is is really big when there's not much stuff in front of us T is super small um when there's like other stuff in front of us basically and you're going to do that by just summing up all the density that came before us along our red does that make sense okay party on um any more questions um all right party on um so that is that is the intuition whatever we hit first is going to be weighted heavily whenever we hit later on not as important um so what is the goal of neural Radiance Fields the the basic problem is we have a lot of different images of an object like I just walked around an object took a video and then I look at all the individual images that make up that video and I want to say okay given all of these images how can I come up with a neural Radiance field that's going to be able to generate new views of this object that's basically the the problem statement come up with a neural Radiance field so we can query at points in the scene and come up with new images too how do people feel about that yes friend how hard is it to get the ground troop positions and directions yeah because I see I've seen videos it's pretty non-trivial that we can find all the positions um of all of these you know where the phone travels as it's moving through space um there's some programs you can run that'll basically like like if you have an object with really sharp Corners it'll kind of look at the position of all the corners as we move and sort of triangulate I'm not going to go into it too much um it's the basic problem is like called structure from motion like how do we figure out given like a video of like us moving around an object how do we determine like some key points along that on that object um there's programs that do it like cold map um it's really gross 
to run uh it's questionable whether or not this is cheating using these positions to begin with uh yeah it's it's a it's a very difficult problem um classes like uh 19426 cover that class is like 280 um cs19426 cs280 um it's definitely it's definitely non-trivial but for the sake of this problem we're just going to assume that they're given and they're like pretty good um it's really obnoxious to get it but it's like it's doable and honestly I still feel like it's kind of cheating to have that to begin with but um there's there's some derivations of this that don't necessarily require this if you just have a continuous video um yeah it's it's a real problem but it's doable and and it's for the most part pretty accurate um it's like a classical technique that was long before the era of deep neural networks so it's been around for a while um yeah good question good question um are there more questions before I keep going yeah yeah one frame at a time yeah basically that's what it is we're just gonna yeah exactly yeah when we have our image we're going to say okay like if we want to generate a video of a camera like moving around like this we're just gonna figure out where all the cameras are sort of in like a little circle out here to take a bunch of those views along that Circle generate images together and then yeah just play them really fast and then we get the video yeah what's up uh I mean technically with your first camera you can just assign it to the origin like if we're if we're taking a video and like moving it around like our coordinate frame is whatever we want it to be like this object is still the exact my laptop is still my laptop if I rotate it a little bit or if I if I assign my my axes to be rotated a little bit it's still the same thing so with your first camera you can technically just call it the origin um does that get your question at all okay yeah it sort of it reminds me a little bit of like slam like some people uh simultaneous localization and mapping um it's sort of like self-driving like if you have a continuous video of your car moving along how do you figure out where stuff is and where you are um and in that problem yeah you do definitely take advantage of the fact that your second frame is going to be pretty close to wherever your first frame in your video is and just try and figure out like okay like how much do we move in this little one Gap from our first frame does that kind of get at it a little bit okay are there more questions all right yeah so so whether whether or not it's cheating uh it still gives cool results so we're gonna run with it uh and and the Paradigm is we have lots of images of a scene um and did I I had a yeah we have lots of images of a scene um we we want to just figure out okay from these can we learn a neural Radiance field and then that would allow us to to render new views if our neural Radiance field is really good um so yeah so like why why do we bother with 3D at all um because it seems like I mean this is deep learning after all why don't we just train a network that just takes in all of these different images nope come here that just takes in all of these different images um and just outputs a new image if you just say okay here are all of our cameras uh here are the locations where all these images were taken from here are our images predict an image given um a new camera position that we're going to give you like why do we why do we bother putting it through this bottleneck of like learning an array of Steel at all that's 
sort of a pretty non-trivial um question come here come on dude what's going on oh I'm on the wrong slide deck that's cool uh okay so why do we bother with this with this bottleneck why do we why do we bother learning a neural Radiance field instead of just telling a neural network like hey give me a new a new view of this object from this new position and the basic answer is like shape consistency also sort of known as like inductive bias um if we train a network end to end just hand it a whole bunch of images of a scene and train it to predict new images of said scene it's it's there's no guarantee we can penalize it pretty heavily for for warping the object between different viewpoints um we can penalize it pretty heavily because we know like we have ground truth views of the object from different views we can penalize it pretty heavily if the shape is inconsistent across images generated of it but in test time there's no 100 bona fide way to make sure that the shape is consistent um so on the right hand side this is something done just sort of janking something with stylegan there was no underlying three-dimensional representation at all it's just completely deep learning end to end no inductive bias you can see the hair is changing the the nose is changing the mouth is opening like the shape is not consistent across different views on the bottom we have a neural Radiance field of a plant and you can see there's like almost perfect consistency across the entire thing well there has to be um we've forced by putting it through this bottleneck by learning a neural Radiance field we've guaranteed that when we go to generate a new view it is going to be consistent um with uh the the shape across all these different views we're generating does that make sense to people this sort of motivate like why and it's it's not a it doesn't just apply to rating Fields like why would we learn a Radiance field why would we learn a mesh of an object why would we learn anything why why bother with that if our goal is just to get new images of said object and it boils down to wanting to make sure that this is consistent across all of our new views that this is feasibly an object yeah that's that's sort of the bottom line with this and we can use anything we can use meshes whatever but just ratings Fields give us some nice properties and um allow us to represent super fine details like this um so as someone brought up earlier uh Reflections are difficult um and when I said that we take in position x y z and spit out a color and a density I was kind of lying a little bit um so if you look at a mirror from two different angles the color on the surface of the mirror is going to be completely different um because in Real Life Light reflects and and in our little toy uh you know demo where we're pretending that light just moves in a pure straight line through objects uh you know we get completely different physics um so when I said we're taking an XYZ and spitting out color and density I was lying a little bit what we're actually going to do is take in x y z um and a viewing Direction uh to our function to make things a little bit more complicated um so if we now take in both position this is a vector I should say um of angles so now we're taking in a position in space where we're you know taking our our image from and also what the orientation of the camera is um which will help us when you have things like glare as you can see in the bottom of this image if we train a Nerf without we basically get rid of that 
term The View dependence when we get rid of view dependence you get basically very matte looking objects whereas on the left-hand side you can see there's glare from like the lighting um on the left hand side we learn a Radiance field that feasibly it looks like it has glare in the scene um which is much more impressive and obviously things like mirrors would just be absolutely like impossible to learn um if you don't have this view dependence term so we're really going to take in XYZ and the angle at which we're taking this photo from do people have questions on this because this is definitely a weird concept um yeah okay if you have more questions later if you've thought about it a little more yeah what's up I mean we're gonna assume like we have it but you can yeah yeah structured from motion is harder I'm gonna I'm gonna gloss over that for right now um but yeah that's a really good question it's pretty non-trivial to both get the position of cameras in space when we're pre-processing all our data our video of a camera moving throughout space it's pretty non-trivial that we can get both the position and the orientation of the camera when we're processing our data yeah um are there more questions okay party on so so let's take a look inside this black box F now this is a deep learning at the core it's basically just an MLP um it's a pretty simple MLP um there's really only two parts to it um we'll talk about positional encoding a little bit more but the thing I want you to take away right now is the first thing we're going to do is we're going to pass it through one multi-layer perceptron one just basic dents neural network and it's going to Output density and then we're going to take some intermediate activation from that multi-layer perceptron and as well as our view dependence which angle it's being taken from um and we're going to use that to get color um and sort of the reason we would we would bother passing in some of our activations from our density perceptron network uh into our our color MLP is simply just because there's probably some similarities between them like if you have to if you have to answer the question like how much stuff is at this point in space if you know if you can learn how much stuff is at the point at this point in space you can probably also learn color from very similar um features do people have questions on on the basic structure of just throwing two MLPs to multi-layer perceptrons um at this to just use as our F like a neural network yes yeah exactly yeah the point yeah the density here should be the same regardless of whether my camera was here before here when it was taken so so that's why we don't we don't want this network that's predicting density we don't want it to have access to view Direction at all to guarantee um that there's no way that the shape can be inconsistent from different viewpoints yeah yeah I mean about if we're if we have two cameras of this Dozer uh one Ray will probably go through this point another array might not hit that exact point exactly but it'll definitely get pretty close no no no yeah unless you have some weird kind of camera that can Point Raise yeah your camera would be wild I'm sure I'm sure you could create one using a bunch of like lenses and stuff but but yeah it's not likely um okay so again these are just MLPs they're just dense neural networks um very basically the first one takes in XYZ the second one takes in view direction as well as some of the intermediate activations from our first neural network just 
to sort of reuse some of the compute um yeah it's going to allow us then to to account for a little bit of you dependence but the density is not going to be changed by your view dependence your your viewing Direction um so this is basically what it looks like we're going to feed in ignore the gamma for now we're going to talk about that in just a second but you're going to take in your position in space pass it through a dense neural network um at some point you're going to spit out at the very like second to last layer you're going to Output your density and then it's also going to take in uh view dependence immediately after that and then at the very end of the network you set out RGB so we output density near the end of the network but before critically before we ever taken view dependence so it just means that we can reuse some of the computations um between you know having to predict density and having to predict color um so it's one it's one MLP but halfway through we output density and then immediately after take in our viewing directions um but it's it's just a very simple hidden uh hidden layer Style neural network nothing nothing insane going on um yeah so what was what was the gamma what is our what is what is this thing uh the Spy here um in between our XYZ and the input to our Network um I'm only going to go over this very quickly so so we can make sure to have lots of time to look at pretty pretty photos and and live demos and stuff um but positional encoding uh what is it um it's basically a little trick um rather than just have XYZ uh as an input to our network if we learn some kind of binary representation of x a pseudo-binary I should say representation of X and Y and Z our neural network is going to be able to create very very sharp decision boundaries um and I'm going to skip the next slide real fast oh where was it no here it is um if we don't use positional encoding if we pass in X Y and Z raw instead of first positionally encoding them with some kind of binary-ish representation we get very blurry lines we don't get sharp crisp edges as soon as we bring back in in the second to Right image as soon as we bring back in positional encoding we get really crisp boundaries and sort of the reason in my mind for that is that neural networks are really good at like learning like logic gates they can at least uh you can hard code all the weights and biases of a neural network um with maybe like three little hidden units uh to make like an and gate or an or gate um if neural network can learn a logic gate um it would it would be best if we can pass into them something that already looks like binary so that we can create these very sharp um we can we can make use of the fact that our data is in a format that is conducive to and Gates or Gates whatever that's a very high level representation I don't want to spend too much time on this uh just so that we can get to the pretty photos at the end but feel free to come up and ask more questions about this because I have a longer explanation I can give um yeah it's basically blurry or not blurry like all of these tiny little itty bitty details like sharp changes in our image um correspond to high frequency details and just blurry things like a green blob for this Dragon would correspond to like a low frequency detail it's oh it's a weird thing and again I don't want to spend too much time on it just for sake of time um so so feel free to come after and and I'm happy to we can break it down a little bit more on the on the Blackboard um 
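Concretely, the positional encoding gamma described here maps each raw coordinate to sines and cosines at exponentially increasing frequencies — the "pseudo-binary" representation. A minimal PyTorch sketch follows; the choice of 10 frequency bands matches the original NeRF paper's setting for positions, and the function name is just for illustration.

import torch

def positional_encoding(x, num_bands=10):
    # x: (..., D) raw coordinates, e.g. D = 3 for xyz.
    # Returns (..., D * 2 * num_bands): [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0..num_bands-1.
    feats = []
    for k in range(num_bands):
        freq = (2.0 ** k) * torch.pi
        feats.append(torch.sin(freq * x))
        feats.append(torch.cos(freq * x))
    return torch.cat(feats, dim=-1)

xyz = torch.tensor([[0.25, -0.10, 0.70]])
print(positional_encoding(xyz).shape)  # torch.Size([1, 60])

The low-frequency bands vary slowly across space (coarse, blurry structure), while the high-frequency bands flip sign rapidly, which is what lets the MLP cut the sharp boundaries discussed above.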
so again the different things we talked about uh on the right hand side we want to use positional encoding because it's going to allow us to learn very sharp boundaries between um areas of this object that are extremely dense and then not dense at all um otherwise we get this like smearing effect um and we want view dependence because it's going to give us little effects like glare like reflection and stuff um and that's what allows us to sort of mimic on the very left hand side the ground Truth where you can see the tread of this bulldozer uh has a lot of like glare coming off of it um which would very be very difficult to learn otherwise so that's sort of basically what a Nerf is we've gotten through most of it so enjoy some pretty photos uh on the left hand side is a Radiance field where the camera is going through a pawn and then back out again in a nice pretty Valley on the right hand side this was at one of my research meetings where we do Nerfs stuff so we all sat still while someone walked around at the camera uh and that's all of us and you can see on the left bottom left um why Radiance fields are important things like fire you can't really do it with a mesh something that's like semi-see-through um it's very difficult Reagan's feel it's really good though so I don't know pretty photos this is why it makes me happy um so yeah um so just some minor little details it's like sampling um if we sample sort of uniformly at uniform distances all throughout our object um that's okay but it's kind of inefficient most of that's just error uh we can do a little bit better so instead of just doing one round of sampling where we're just going to sample all the points uniformly let's do two rounds in the first round we're going to sample uniformly and sort of figure out okay I think there's a bulldozer in this little middle region of the image and out here on the edges there's like nothing um sort of get a sense of like okay a longer array where is their stuff and where is there not stuff like preliminary sampling and then after the fact after we've maybe done like 64 I think was the standard for the original Nerf paper after doing like 64 uniform samples along this right let's do 256 a lot more samples where we actually received some density where there actually was stuff to be sampled um so it's basically just let's sample slightly more intelligently let's do two rounds of it instead um and in practice as just a little detail um sometimes we'll just represent the course round of sampling the uh uniform round of sampling with a separate MLP so we'll learn two separate networks with the idea um that are our second Network that samples only around the edges um can spend most of its most of its network capacity most of its knowledge just learning These Fine little details and not because it takes time and and effort to learn that 95 of the scene is air and we don't want the network to waste its time doing that we want all the crispy little details um so so for our loss function we're basically we have ground truth images taken from different points in our in our scene um and we know what color things should be if we look at an image taken from a specific location we're basically just going to compare that we're just going to do an L2 loss um just how close was our RGB values how close did we get this is the loss function we're going to use uh it's pretty straightforward um the only note is maybe we can't we don't have gpus big enough to sample every single Pixel for every single image so we're 
only going to render maybe like 4 000 Rays or so and just compare those 4 000 pixels and see how well they do um and as a little note there is no direct loss on density we train purely on how good our color is um and this is pretty non-trivial to think about the fact that we never constrain what the density is at any point in time we don't ever supervise the density directly and the reason this sort of is at a very high level again I'm sort of rushing so we can get to the the fun photos at the end and the demos um but the reason this sort of is because if you want your color to be correct along this Ray the density better be correct too um if our color out here is just noise right because there's uh not really an object here if our color out here is just noise and we predict really high density out at the very beginning of this Ray we're going to get Just Pure Noise our predictions are going to be garbage um however the tiny little contributions that got mostly shut out that the tiny little contributions from out here that have uh you know correct density um will will end up contributing back um positively to our loss and our network will say Okay um at this point in space you know where there actually was a bulldozer if we had a higher density because we have correct color out here and we had lower density out here where there was just garbage um we would have gotten a better prediction um I mean at a very high level gradient descent goes for um volumetric rendering is very good um yeah I'm gonna Rush this a little bit more but this is super non-trivial that this works yeah right so it's just like another parameter yeah I mean our volumetric rendering function is a bunch of additions and multiplications if we from this position have array intersecting durable those are and from this camera we have array that intersects the exact same point on our bulldozer um it's we're not going to get good results if we predict you know that there's high density out here before this point of intersection um you know this point of intersection right here will have consistent color between these two or consistent-ish color between these two views um so it'll learn like hey like we would have had we would have had significantly lower loss if we just listened to this color out here um it basically just boils down to if you have enough cameras you can sort of imagine triangulating all of these different points along our object yeah no we don't yeah we yeah we don't have any yeah it's not it wasn't a clever choice we don't have the actual we don't have depth information so that's basically it we're going to sample uniformly we're going to get a rough estimate of like our density and we can train our course Network by saying How well was its RGB how long were the colors compared to the ground truth images we're going to use those densities we got to sample more effectively just around the middle of our bulldozer and pass that through a separate Network that's really just good at learning these fine details just at the edge of our object and that's what we're going to Output so that's basically it at this point um so why does this work at all again we're sort of assuming we know camera positions it boils down to again just if you have enough images and enough different rays that correspond to the same point on the object the easiest way for the network to get low loss is to just output um that your your point in space where all these rays are intersecting on the surface of your object just output that this has 
high density and output the correct color that's like the easiest way for it to get low loss um I want to get to some nice demos here um so we have I have uh let's see if I can pull this up this is a project I was working on um it's basically it's not running right now so that's why it looks like this um this was like a video my sister sent me out in like Bodega Bay um basically just of come here I gotta find the command um let's see if I can get it to run here of just a bench out by the beach um and I just ran it through the pre-processing script that like this library that I was working on has um it's going to load up the data here in a second uh come here there we go we can see the images loading into the scene now um it's kind of hard for me to control this sorry um it's a little bit on the slow side right now but I'm just connecting to a server that's doing this for me um and in a second when all of these have loaded in we will see the scene start to render here and train in real time um should be about done aren't that many more camera poses it kind of ends about there buddy yeah you'll notice when we ran the pre-processing script it got it kind of wrong so these were taken vertically but you'll notice when the data was pre-processed it decided that the up Direction was kind of to the side didn't quite get it right pre-processing is hard so you can see it's actually starting to train here uh oh I'm gonna set it to have very high detail and be very slow so it's not going to be training much so it's supposed to be stairs out here sort of on the beach in a minute here we'll start to see some stairs forming in the horizontal there we go um the original Nerf took like three days to train very slow what was that uh this is using a derivative that our library came up with called nerfacto uh it's basically just an amalgam of a bunch of different Nerf derivatives called like instant NGP MIP Nerf uh I'll come back so you'll notice this was taken I didn't really give my sister any instructions for this she just took one a capture this it's all pointed in the same direction there's no information we don't have any images pointing behind us so there's kind of a lot of fuzz right behind the images there's a lot of noise that gets learned um in all the directions all the areas where our model you know never actually saw what what the scene looks like yeah it's a good question um in this case we've constrained the space um just because it's a little bit hard for a network to learn good values all the way up to infinity and back uh so in this case there is a bounding box put around it and anything outside the bounding box we're not going to query it um though if you want to do infinite scenes some people do do that um yeah so what we're using here to represent so actually let me see if I can turn because it actually you can see out onto the beach a little bit and sort of supposedly infinitely away you can kind of see the beach a little bit there um so that's not just learning stuff at the edge of the bounding box in this case we actually have something called scene contraction so instead of taking in our positionally encoded XYZ the first thing we're going to do uh is we're going to warp X Y and Z so instead of being out at like a hundred thousand we're gonna apply some like continuous warping of the space so that everything is sort of kind of close all the yeah so the actual XYZ and values that our Network gets have been transformed a little bit and squished we we can track space down so it's close 
to the scene a little bit it just means that we're not passing in values that are like a million because if a network gets a really huge value it's probably not going to do super well with that well um I mean you can just put a bounding box and just say we're never going to let the network query we're never actually gonna train on any points that were queried outside of this little Cube around our object like you can do that um but sometimes you want to represent things that are really far away um this scene isn't particularly good so you can keep asking me questions here and I'm going to cancel and start uh one that of the campanilla that's a little bit better um let me see or another company that's out in front of doe actually um where is it okay let me reload this um sorry uh where are we at no actually yeah if the network gets a value that's like like 100 000 times larger than a value it's ever seen before in training it's gonna just freak out and not it's not gonna do very well uh so yeah the reason we would want to take all of Infinite Space and sort of contract it down so that all of space sort of fits in this little box is just to make sure that our Network only gets values it only ever gets values that are you know between like negative one and one or so just so it's only ever getting values on the same order of magnitude um so yeah we just sort of we apply this little math trick um so every single point in our contracted space corresponds to a point out in like our real space um there is oh if this if this isn't loading I'm actually going to go show you uh hope it is loading uh oh but this one takes a minute there's a lot of images of this um where is it this is the library we're using for this um there's a really nice graphic of what it looks like so say we have this we have this scene um and maybe these maybe these little cone objects continue all the way out to Infinity right if we're sampling all the way out to Infinity this y value has to you know our network has to be able to spit out good values good colors good densities all the way out to Infinity that's just too much to learn so instead we're going to kind of contract space a little bit so everything fits within like this little unit circle you can it's just a formula that takes in any given point in space and Maps it into our little unit ball or normal and it's invertible so if we have our point in our little Norm ball we can figure out where it came from in real space um you can do with a cube too does that make kind of sense set scan there's something dough uh oh I'm gonna make sure I'm on the right Branch too uh but I think this is working um yeah so let me let me up the quality here yeah so this is out on dough so you can see the campanilla kind of up there it's quite far away we can actually move around so let's kind of we can we can exceed the boundaries because we're using space or we're Contracting space we can we can keep going the only problem is because uh we never had any training images that were out this close to the campanilla our network is going to get progressively worse the closer we get to it and if we get to a point if we kind of move a little bit to the left you'll see it's not really learning the campanilia as much as it is learning a bunch of God rays that kind of look like the campanelia uh so there's all of these these sort of god-related cake things and if we sort of back into our scene a little bit we'll we'll see that we'll see it come back into view but yeah so we can there's lots of 
different fancier versions of NeRFs that'll do, you know, scene contraction. So many cool advancements have been made that allow us to train this in seconds instead of days — a lot of really cool engineering has happened. Are there more questions, comments, or concerns? Because we're definitely at the end of lecture. If people just want to come and vibe and look at pretty pictures with me, I'm happy to do that. There were more questions I didn't have enough time for, since I wanted to go over the recap of NeRFs again — I didn't really talk about why it's non-trivial that we can learn density at all, why this works — but we can come and chat about that too. I'm gonna stop the recording here; only people that come to lecture get the answers, haha.
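As a companion to the recap above, the volume-rendering weights and the scene contraction discussed in the demo can be sketched in a few lines of PyTorch. This is a minimal, illustrative version of the standard NeRF compositing equations plus a Mip-NeRF-360-style contraction; it is not the code of the library shown in the demo, and the function names are made up for the example.

import torch

def render_ray(sigmas, colors, deltas):
    # sigmas: (N,) densities at N samples ordered near -> far
    # colors: (N, 3) RGB at each sample
    # deltas: (N,) spacing between consecutive samples
    alphas = 1.0 - torch.exp(-sigmas * deltas)           # chance the ray "stops" in each sample
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=0)   # chance of getting past everything so far
    trans = torch.cat([torch.ones(1), trans[:-1]])       # nothing blocks the first sample
    weights = alphas * trans                             # high density behind other stuff still gets a tiny weight
    rgb = (weights[:, None] * colors).sum(dim=0)         # weighted-average color along the ray
    return rgb, weights

def contract(x):
    # Warp unbounded xyz into a ball of radius 2 so the MLP never sees huge coordinates.
    norm = x.norm(dim=-1, keepdim=True)
    return torch.where(norm <= 1.0, x, (2.0 - 1.0 / norm) * x / norm)

Training then supervises only rgb: a few thousand rays are sampled per step and compared to the ground-truth pixels with an L2 loss, with no direct loss on density, exactly as described in the lecture.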
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_16_Advanced_Object_Detection_and_Semantic_Segmentation.txt
three you know all right uh we can probably get started um those of you guys that were here for the last two lectures uh Vision Transformers um and the one before that was on like attention uh how that's used in Transformers kind of uh the first part might be like a little bit of an overview uh so yeah we can kind of get into it uh the timeline for today is kind of to review CNN's their motivation what makes them good what makes them bad then the Transformer um things that have been kind of uh pushing forward the state of the art of computer vision uh we're gonna first start off with like the NLP context like Bert um how those embeddings are kind of generated what the purpose of that is then move on to Vision Transformers then compare Vision Transformers versus cnns and then the swing Transformer which is an improvement upon Vision Transformers or kind of computer vision applications so we're going to kind of talk about window detention patch merging then shifted window attention which is the the big thing in swing Transformers uh and with positional embeddings and then kind of talk about overall findings if we have time we can talk about some Advanced segmentation methods uh but yeah that's going to be kind of a cherry-on talk uh if we get to that or not yeah so starting off uh the tool suits the task we want to choose the correct tool the correct model the correct network to use uh to shoot whatever task we have at hand um so we talked about CNN's earlier um the game popularity about a decade ago um the kernel goes over the image uh certain stride length um creates a convolution with uh reduced dimensionality uh the way the CNN works is it detects edges and general image Contours so General features in earlier layers and as it continues um it kind of gets intuition for more specific features this is finally passed into an FC Network and you have a final output prediction we spent a decent amount of time the last couple classes on this um kind of earlier on so I'm just going to skim through this um something we didn't talk about too much is that cnns have translational equivalence what this means is that they kind of adheres to what is called a 2d neighborhood structure so because we have a kernel which is sliding along this image a certain stride distance depending on where our kind of region of interest is in the image the CNN will capture this at different points uh along its uh stride movements right so they respect locality they won't start by comparing the two opposite edge of edges of an image they'll start on one side and as the kernel progresses uh as the stride continues um it'll learn more and more information so cnns are translation equivariant like our eyes this means that if something if the region of Interest changes position from one image to another or undergoes some type of augmentation either like a rotation uh an illumination a size or a translation to different parts if I'm looking at somebody on the left side of the room versus the same person if they move to the right side of the room uh my eyes act as a stride right so I'll see them if I'm scanning from left to right this applies to many augmentations and we'll see how this uh kind of varies as we go into the Transformer architecture as well um so CNN's yeah the biggest takeaway is that they're transitional equivalent which means that as this image moves our output is also translated right um which which makes sense similar to how our eyes work so something that's very interesting is that cnns have shown a 
propensity for textural bias over shape this is extremely interesting because in a search that cnns can still classify perfectly given a shape augmentation while texture is constant right what this means is that if we augment the shape of a certain thing uh while we remain uh texture as as constant by applying textural filters to an image by stylizing images as it's called um their performance can decrease or increase depending on our shape and texture bias that we are applying to this so this this bottom example shows this well right we have a texture image of an Indian Elephant this is the kind of like skin of an Indian Elephant um then we have a Content image the tabby cat and our our Network classifies this correctly um and as you can see all of the uh kind of classes that it thinks are similar to this are all in that kind of like cat fox animal category right however if we apply this texture to this image we classify this as an Indian Elephant and our two other uh classes that we've determined this could be are are also corresponded to the texture it doesn't see this shape at all um this textural bias is uh kind of good in some ways but it can be bad in others there's been a couple studies shown on this um but uh yeah I thought this was extremely interesting we take a stylized image and a Content image by applying this style to this content this would misclassify different cities you can misclassify different kind of uh images as well so this is an interesting thing that we want to keep in mind as we go into Transformers um so going into the context of Transformers uh for NLP tasks right um we talked about this extensively when we were talking about attention um last class I believe was Vision Transformers uh but for NLP Transformers specialize in long-range dependencies right we talked about self-attention a way to tie important things in a sentence together by performing attention within a sentence uh specifically focusing on the uh priors that we have based on prior words that we encountered in a sentence um attention lets us focus on certain parts of the input and self-attention like we talked about lets us do this within a sentence so for example full versus MD just changing one word means that the reference for it changed so these two sentences are identical uh except for the last word so I poured the water from the bottle into the cup until it was full in this case it refers to the cup I poured water from the bottle into the cup until it was empty right so in this case it is referring to the bottle so in order to capture the uh kind of Trends over time self-attention lets us do this in a very methodical way and we kind of talked about this uh in the last few classes we also talked about this calculating attention right we have a query key value system where we have a query we want to determine its similarity to different Keys we want to update our value priors based on that dot product right so we're multiplying the softmax Q times K dividing by the distance to kind of correspond to the size of these Matrix multiplications we want to account for this size so this is standardized passes through a soft Max scale our values by this and our output is based on the magnitude of our values as corresponded to uh the other values um so essentially you can think of it as cuning the value weights um by determining whether your your query and your key are similar or different so in a perfect world you could have if you really know what you're looking for if you really know what you're 
doing your key and your value could be the same value um yeah we talked about this extensively uh we're going to be introducing a slight shift to this so we're going to be introducing a bias term in the denominator for this specifically for shifted window Transformers uh but now right we want to talk about something something new before we talk about how the tool suits the task right if you want to do some NLP stuff where we have a lot of text we use a uh like a Transformer we can use like bird to generate embeddings um use those to capture long range dependencies for like very simple 2D image classification tasks we want a CNN with a kernel right now what if we wanted to do something to handle both NLP tasks and vision tasks or just like a general architecture that can handle both of these that's where the Transformer comes in right um we can use Transformers for various tasks specifically uh Vision Transformers for computer vision tasks um so yeah this paper is extremely good an image is worth 16 by 16 words this is talking about how authors wanted to deviate very little from the classical NLP kind of bird Transformer architecture so we still have 1D token inputs we're flattening basically an image into patches of 16 by 16 pixels and we're passing these as our embeddings through a linear layer uh which will generate our embeddings and then passing those embeddings into our transform model um the positional information now is learned by the model we don't have a positional embedding that we're passing in we talked about this during the uh attention lecture how we can use sine cosine different metrics to figure out positionality of different words in our sentence um but that is something that a Transformer is able to automatically figure out um this is going to relate to the bias term that going to talk about when we go into the shifted window Transformer essentially this output once we kind of tokenize this picture into uh a an embedding is a classification head multi-layer perceptron to finally predict class attention also matters in this uh kind of vision Transformer architecture as shown on the right um this is straight from the paper where you can see we have an input here we have a tension focusing on the subject again we have an input attention focusing on the subject here we have a lot of things going on this is a misclassification so as you can see we don't really have attention on a one specific kind of Target [Music] it's like 60 more seconds exactly exactly you can think of it exactly like that um so we're saying that an image is worth a 16 by 16 uh kind of subset that we're flattening making a 1D embedding out of and and sliding that through our our Transformer model so we're basically just kind of like shifting uh the information that we have but we're keeping the Transformer architecture the same so we're able to apply this to uh kind of a computer vision task so yeah now let's talk about Vision Transformers versus cnns so Vision Transformers self-attention layers are Global right so this is global to the entire image only the MLP will compare neighboring pixels so the last layer the multi-layer perceptron also to a Transformer and embedding isn't embedding it doesn't matter if it's an image or a word it's just quite literally a vector and whose dimensionality kind of captures different information about its input um so it doesn't care whether this is an image or a word the Transformer architecture will stay the same this kind of ties back into the last thing that we were 
talking about in terms of one tool that suits multiple tasks, right? Can we come up with something that is generalizable — something that, instead of being super specialized, can handle a whole bunch of inputs? This also reduces inductive bias in a model compared to CNNs. Inductive bias is the set of assumptions that our model is making, and we want to reduce it in order for a model to be generalizable. This ties into generalizing assumptions from training data into a model of a specific domain: for example, for the NLP model we talked about a couple of lectures ago, if we assume certain priors there, those are not going to carry over when we try to apply the same Transformer architecture to a computer vision task. For those of you who have heard of Occam's razor: if two models explain the training data equally well, the simpler model is preferred as a generalization. We want a model that doesn't assume priors about our knowledge or our data and can instead be generalized to multiple tasks. This also ties into the grand scope of artificial intelligence, of which this DeCal covers a very small subset — people at Facebook AI Research, Yann LeCun and company, are trying to push toward artificial general intelligence, some kind of system that captures general intelligence as we know it, and in that vein of thought having generalizable models is a very good thing. On smaller datasets, however, the data shows that CNNs outperform Transformers; Transformers really shine when there is a lot of data. If we have a very small task, there's no reason we need a complicated, processing-heavy Transformer architecture — for a small, say 128 by 128 pixel image, a CNN should perform just fine, especially if you're just trying to get the gist of an image. However, as we discussed during our attention lecture, if we have a whole book to sort through, a Transformer using attention lets us really speed up the processing: we don't need to constantly go through the entire universe of text, we can focus on specific things. So on smaller datasets CNNs outperform Transformers, which makes sense, but the assertion is that ViTs model human eyesight better than CNNs: they prioritize shape by default. We talked earlier about how CNNs prioritize texture — going back to the tabby cat example, that image is very clearly not an elephant, it's a tabby cat with an elephant's texture laid on top of it, and the CNN misclassifies it. ViTs model human eyesight better because of this reduced inductive bias; they prioritize shape over texture. This is a strong benefit of Vision Transformer models, and as computer vision applications become more and more complicated, this is where Transformers shine. Yeah — [audience question] — right, so because our stride stays the same as we pass along an image and our kernel stays the same size, we're learning things about the image very generally as we go across it, so we're not gaining new
information about how like one patch will relate to another patch right because we're we're just moving a single stride in a single fixed kernel size across our image so that's where like the bias comes into play if we're able to generalize multiple parts of the image like having like Global self-attention uh like Vision Transformers do this will reduce our our bias yeah is exactly yeah that's why like on a first pass from CNN like you're gonna get really General features only once you like make multiple passes you start um like combining or you run through like multiple DPN layers are you gonna like actually start learning uh higher order features yeah all right so uh a very interesting field is uh adversarial attacks so adversarial robustness um and this is kind of a third metric that I don't believe we've talked about before but we're going to be introducing in this lecture uh is uh being able to distort an image uh kind of uh by introducing noise into it so that certain classifiers are no longer able to classify these images because of the noise we've introduced but visually these are unimpaired or unobstructed um this is actually uh relating to a project that I did as part of mlab my first semester this was uh kind of looking at the Carlini Wagner attack who was a Berkeley graduate who's currently working at Google brain who released kind of a his own method for adversarial attacks the Carlini Wagner attack uh so yeah there's a lot of interesting literature in this field but because they prioritize shape over textual bias Vision Transformers seem naturally robust to visual distortions so for example we have like a little Panda here we introduce Some Noise um and now our classifier is really confident that this is a given even though it's visually like we can very clearly see it's a panda but by introducing targeted noise into an image we can uh we can kind of change the entire classification uh this ties into this as well right if we have like a slight augmentation right the left side pretty clearly looks like a cat like a cat with like a little pointy ears if we make the nose a little more defined now it's looking a lot like a dog right so just by changing a couple pixels our entire classifier even our human eyes can't really tell what's up right um so there's a lot of factors that factor into decision making in natural scenarios so self-driving cars in a storm aren't super reliable right their sensor Suites have noise introduced to them um and you don't want to miss classify stuff right you don't want to misclassify a uh like a like stop sign is like a person or something like that could have really bad implications but even the human eye isn't infallible like personally like the left one really looks like a cat to me and the right one definitely looks like a dog so uh these are very very small changes in lighting pixels things like that uh Vision Transformer limitations however shows in other fields of uh CB uh so in this case like image restoration semantic segmentation where patches are passed in and processed one at a time so border information can be lost right we're kind of focusing on the main we're putting our attention on the main source of information the main subject um so fine Grant pixel analysis within a patch also is weak so essentially if we take foreign like image and we break this into like certain patches we only have information now we're treating each one of these patches as its own data point so anything within this patch any information within this patch we're 
treating it like the same so like these are all the same this is all the same this is all the same it's all the same so if we wanted to do something where we need like really fine-grained pixel analysis within a pack we're not able to do that right um so that's that's kind of where uh like both CNN's and vision Transformers are a little weaker um but yeah uh in terms of adversarial robustness um because like vits are learning as humans learn by prioritizing shape over texture um they seem to be naturally robust to Vision distortions but there has to be a lot more uh kind of experimentation done on this uh yeah because you can think of this noise this noise filter that's being applied as essentially a uh a textural bias that you're applying to this this image um but uh yeah uh that's why it's it's misclassifying however ideally Vision Transformers won't have this problem all right so uh bit isn't always lit there are problems with vision Transformers uh the problem is uh let's let's look at the original kind of vision Transformer architecture right we're breaking an image into 16 by 16 pixel patches to create patch vectors with a linear projection uh we're passing these through combining them with positional embeddings and passing this into a Transformer right this is like a very very simple this is how our Transformer architecture works right we have a regular Transformer architecture afterwards we can black box that for now lastly we pass it through a classification head to give us kind of our final prediction is this a bird a balloon a blue sky in this case you know could be a little bit of everything um but we're again we're breaking the image into these 16 by 16 patches uh this generates right 16 image tokens as we we break it down given the size of you know our image depending on how we break this down um where each image uh each 16 by 16 patch contains 256 pixels um but the extraction of patches is a problem specific to image not text right in text we're not really like breaking up our uh Thing by by patches we're just kind of passing in these words or sentences as embeddings for bigger images the number of image tokens will grow extremely quickly which is okay for bigger gpus or if our task only requires the Precision of 16 by 16 uh patches right so for image classification we want to predict one label for the entire image so you want to take in this image on the left and say oh this is a balloon right so we're trying to like yeah if this is our image we're trying to say okay the subject of this image is a balloon however for tasks like semantic segmentation we want to classify each pixel right we had a lecture about this earlier we want to say okay these pixels are people this outline is a balloon and the rest is a sky right so we actually want to classify each each kind of sub pixel but if we're doing this thing where we're taking 16 by 16 patches uh we can't really do that right because we're treating this entire kind of uh input as its own self-contained uh like data point so we don't actually have information about what's going on in this we're just kind of like combining it into one oops we're just kind of combining this into one uh data board and this will grow exponentially right if a 16 by 16 patch is too big for certain tasks obviously we just instead of 16 by 16 patches just use every pixel as its own patch right and pass that into a Transformer however this will grow extremely quickly right on like an N squared scale right because as we go from like a regular 128 by 128 
Going from a regular 128-by-128 image to much bigger images, say 1920 by 1080, we'd be looking at around 2 million tokens, and for a 4K image, 3840 by 2160, over 8 million tokens. These are extremely long sequences to pass into a Transformer; they require a lot of processing power, and it's just not very fun for us as consumers of Transformers. So those are the problems with the typical Vision Transformer; how can we fix them? Shifted windows begin to Swin (yeah, that was a pretty bad pun). Essentially, the shifted-window (Swin) Transformer was introduced in mid-2021, so not too long ago, and it proposes a method to alleviate these problems: use non-overlapping windows to perform self-attention within groups, then smartly merge those windows together and perform attention within the merged regions. You start with really small groups, not quite pixel by pixel but small, called patches; you combine those patches into windows; you perform attention within each window; and then you combine windows and perform attention across them, so attention now connects regions that previously never attended to each other. Using this shifted windowing scheme allows cross-window attention connections. (I broke the animation for this, so I won't be using it.) Take this image, which we'll come back to later, at 224 by 224 pixels, and let each patch be 4 pixels by 4 pixels; that's the classic setting for the Swin Transformer, so we get much more granularity than the earlier Vision Transformer model. With 4-by-4-pixel patches, we define a window as 7 by 7 of them, i.e. 49 patches, and we have 8 by 8 of these windows covering the 224-by-224 image. So a 7-by-7 group of patches forms one window, and there are 8-by-8 such windows; that's the basic setup. We have our image, we make patches out of pixels, we group sections of patches into what we call windows, and then we shift these windows on top of each other. That's the overarching idea, and we'll dive into the details in a bit. Because we're building a hierarchical feature map, we get an improved global representation: with cross-window attention the model can prioritize certain features over others, which is an inherent benefit of this design and gives a better understanding of the global content of the image. The second benefit is that instead of the computational cost growing quadratically with image size, we get linear computational complexity, because the window size stays fixed and the only thing that grows is the number of windows. So instead of N-squared complexity, where N is the total number of patch tokens, the cost is roughly the number of windows times the fixed cost of attention within one window, which scales linearly with the number of patches.
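Here is a minimal sketch of that window partitioning (again my own illustration, not the lecture's code; the shapes follow the 224 / patch-4 / window-7 numbers above):

    import torch

    B, H, W, C = 1, 56, 56, 96    # 224 / 4 = 56 patches per side after 4x4 patch embedding
    M = 7                         # window size: 7x7 patches per window

    x = torch.randn(B, H, W, C)                                   # grid of patch embeddings
    x = x.view(B, H // M, M, W // M, M, C)                        # carve into 8x8 windows of 7x7
    windows = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, M * M, C)   # (B*64, 49, C)
    print(windows.shape)   # torch.Size([64, 49, 96]): attention runs independently per window

Self-attention is then computed on each (49, C) window separately, so the cost per window never changes, no matter how large the image gets.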
This is going to be explained further as well. Here's an example of image segmentation; a lot of self-driving companies and technologies use it, and I think it's a very cool field. Ryan did a presentation on this earlier. This flexibility lets the Swin Transformer really excel at image classification, object detection, and semantic segmentation. We're going to walk through a full example of a Swin Transformer: swin_large_patch4_window7_224, pretrained on 22K and fine-tuned to 1K. That is a very big name, but we're going to break it down. Part of the motivation behind the Swin Transformer is that we start with smaller patches and then merge them into bigger patches in later stages, by combining our windows. There are different model sizes: Swin-Tiny has C = 96, a capacity of 96, and Swin-Large has a capacity of 192. C is the capacity of the model, the size of the embedding each image patch is initially converted into as a 1-D token. You can also think of it as the width of the hidden layers in the feed-forward network, essentially the amount of information we're capturing. You might remember from when we talked about BERT that BERT has a capacity of 768; that's the length, the dimensionality, of its embeddings. The 224 here is the image size, and we have three channels because this is an RGB image, so the input is 224 by 224 by 3. "patch4" means the image is broken into 4-by-4-pixel patches, and each 4-by-4 patch is its own little image (we're literally just cutting the image up), so it keeps its three channels and has a 48-dimensional feature vector: 4 x 4 x 3, including the RGB colors. These then undergo a linear transformation to a C-dimensional vector; in this case, since this is a Swin-Large model, a 192-dimensional vector. We know from previous classes that a linear transformation can take something like this flattened patch and map it to a C-by-1 (or 1-by-C) vector. "window7" means we partition the image into non-overlapping windows with M = 7, so we have 8-by-8 = 64 non-overlapping windows total, each containing 7 x 7 = 49 patches, and each patch is a 4-by-4-by-3 chunk of the image. The math works out: 64 times 49 equals 3136 patches, the same as breaking the 224-by-224 image directly into 4-by-4 patches, i.e. (224 / 4) squared. The last piece, 22K to 1K, just means the model was pretrained on ImageNet-22K (roughly the number of class labels it has) and then fine-tuned on ImageNet-1K. Hopefully this gives you an idea not only of what that super long name means but of the significance of each number; putting these into perspective is honestly one of the most important parts of understanding the Swin Transformer architecture.
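Just to sanity-check the arithmetic in that name (my own quick back-of-the-envelope calculation, not something from the slides):

    # swin_large_patch4_window7_224: checking the counts quoted above
    img, patch, window, C = 224, 4, 7, 192

    patches_per_side = img // patch                  # 56
    total_patches    = patches_per_side ** 2         # 3136
    windows_per_side = patches_per_side // window    # 8
    total_windows    = windows_per_side ** 2         # 64
    patches_per_win  = window ** 2                   # 49

    assert total_windows * patches_per_win == total_patches    # 64 * 49 == 3136
    print(total_patches, patch * patch * 3, C)   # 3136 patches, 48-dim raw patch, 192-dim embedding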
Okay, the architecture overview. You should be familiar with the Transformer architecture by now; it's been the focus of our last three classes. Essentially, there are two parts to the Swin block: on the left we have windowed multi-head self-attention, and on the right we have shifted-window attention. We'll start by going over regular windowed attention, W-MSA, and then move on to the shifted-window version. At the end of each block there's a multi-layer perceptron, an input and output layer with some hidden layers in between, and it uses a GELU activation function. We know ReLU is the rectified linear unit, which looks roughly like a hinge; GELU is the Gaussian error linear unit, a smoothed version of that. An interesting fact: GELU was developed by Dan Hendrycks, then a PhD student at Berkeley (so, very cool things coming out of Berkeley), and GELU is pretty much the status quo activation for Transformer architectures these days; it's been cited in a bunch of papers. Before the MLP, we pass the output of the windowed MSA through a LayerNorm, which normalizes the distributions of intermediate activations; this gives smoother gradients, faster training, and better generalization accuracy. To put that into perspective (I'm really bad at drawing this): imagine a really bumpy three-dimensional loss surface you're trying to minimize, with bumps here and there. What LayerNorm effectively does is apply a smoothing operation to that surface, so that when you compute your gradient the optimization process is a lot smoother. We compared batch norm to layer norm last time: layer norm normalizes intermediate activations per example, between these two steps, whereas batch norm normalizes over an entire batch and would sit at a different point in the process. We've talked about MLPs before, so on to what W-MSA actually is. It stands for windowed multi-head self-attention, and it's performed within each window: you have all these windows, and within each one we run W-MSA. Instead of calculating attention across all of the tokens (one token, another token, another token, a whole bunch of token embeddings), we only calculate it within a window of size M; in the little toy picture, M is 2. N is the number of patch vectors, and the process still produces N output vectors. In our running example, we compute 49-by-49 attention within each window of 49 patches, so the total number of dot products per window is 49 times 49.
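For reference, the Swin paper writes this cost comparison out explicitly; as I remember the formulas there (treat the exact constants as a citation from memory), they are roughly:

    \Omega(\mathrm{MSA})   = 4\,h w\,C^{2} + 2\,(h w)^{2}\,C
    \Omega(\mathrm{W\text{-}MSA}) = 4\,h w\,C^{2} + 2\,M^{2}\,h w\,C

where h x w is the number of patch tokens and M is the window size. The global version is quadratic in hw, while the windowed version is linear in hw once M is fixed (M = 7 here).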
And since the number of patches in each window is fixed (it's guaranteed to be 49 and isn't changing), the complexity becomes linear in image size rather than quadratic, as it is for ViT. For ViT, as we increase image size (imagine adding a border of new pixels around the image), we have to recompute dot products between every part of the image and every other part. Here, as the image grows, the per-window cost stays the same because the number of patch vectors in each window stays the same. At the end of this, stage one is done, and we can move on to stage two: shifted window multi-head self-attention. This is the cleverer part and really the point of the paper; if you want to check out the paper, we can post a link afterwards. Now, this is the authors' example of how the merging process works. We know how windowed attention works (you're literally just taking dot products between patch vectors inside a window), so we'll use the authors' simpler example to understand the merging. Suppose you have M = 4 and a small image that can be broken into 8-by-8 patches, 64 total; we break that into four windows with 16 patches each, so each window is a 4-by-4 grid of patches. Once attention is finished, stage two begins, and the first thing we do is a merging step: we concatenate the features of each group of 2-by-2 neighboring patches, so each little 2-by-2 block is concatenated into one. This creates new borders and remakes the windows. To put this into perspective: this is our original image, 8 by 8 patches, and this is one window. The first step is advancing each window border by two patches (the top-left border is shown here), and we do this for each of the four windows; for this window over here we advance the border by two on each side, and the same for the others. Having advanced the boundaries, we take the new intersections with the image and use them as our new window borders, which now gives us nine windows, three along each side. But Swin has a solution to address this increase from four windows to nine, and its solution is to cleverly combine these windows together; the way it does that is shown on the next slide. Before going into that, I do want to show that this was our initial breakdown of windows, and after we do this pseudo-padding and remake the windows, we now have nine windows. Keep in mind that a window is the unit within which we perform self-attention, so self-attention will be performed inside each of these nine windows; each of those smaller regions is its own window.
Keeping the edges of the new windows where they intersect the image, and concatenating each 2-by-2 group of features, the total feature dimension grows to 4C. Something to keep in mind, and this is where we left off on the last slide: the way we combine these into a smaller number of patches is by merging each 2-by-2 group of patches into a single patch. So in this phase we're resizing our patches by combining their features, which reduces the number of patches from the 8 by 8 we had before down to 4 by 4, from 64 total elements to 16 total elements; we're reducing the patch count by a factor of four. However, because we're shrinking each spatial dimension, height and width, by a factor of two, we also scale up the channel dimension C: we reduce the number of patches to one-fourth of the original but double the embedding size during this patch-merging stage. So while we're coarsening the breakdown of patches and shifting the boundaries we use for them, we double the embedding size so the total amount of information stays roughly constant; we're reducing spatial dimensionality without simply throwing information away. I'll also pull up the corresponding part of the paper so you can see an example later. This is one of the most important parts of the Swin Transformer architecture: you perform a 2-by-2 merging of patches, each patch being a 4-by-4-pixel piece of the image, and by doing that you reduce the spatial resolution while increasing the channel dimension C by running the concatenated features through a linear layer. We know linear layers let us change dimensions depending on their structure, and Swin uses that property in a creative way. All right, the next thing to talk about is an optimization applied after patch merging. Rather than padding, this optimization rearranges blocks so we still only have four windows. Think of it this way: we shifted our window boundaries by two on each side, essentially sliding all of our windows down and across. If this was our entire image and these were our four windows, after the shift we're left with a border strip of width two around the image. The naive solution would be to zero-pad this before performing attention, but the authors came up with a very clever alternative: rearrange the blocks to save compute. Once we do the shift, we have regions A, B, and C around the border; with something called a cyclic shift we move C to here, A to here, and B to here, rolling them across the image, so we can calculate attention over one full set of windows without wasting compute on the empty border.
Instead of doing an entirely new operation to get values for that region, we shift, calculate attention like normal (keeping our time complexity constant), then move everything back, marking that those values have already been calculated. This is a very smart optimization that the authors propose. The last thing to talk about, well, not quite the last, is positional embeddings: here we learn patch-position information instead of providing it by hand. In Vision Transformers we supplied sine/cosine positional information and included it in the embedding. In Swin, we instead add a bias term inside the attention calculation: we take the query, take dot products with the keys, divide by the square root of the dimension, add the bias, softmax the whole thing, and multiply by the values to perform an informed lookup. What does adding that bias buy us? Within an M-by-M window, here a 7-by-7 window of patches, a patch can be at most six patches away from any other patch along each axis, plus or minus six. If I start at this patch, along its row I can look at offsets +1, +2, up to +6; from the other end it's -1 through -6; and if I don't move, the offset is zero. For the attention calculation, do I really need to encode every possible position in the image? No, because I'm only performing attention within this window. So it's enough to keep a bias table with 13-by-13 entries, plus or minus 6 (and zero) in each direction, which in general is (2M - 1) by (2M - 1). Including this bias term in the attention calculation, with that small 13-by-13 table, encapsulates every relative position at which self-attention can be computed inside a window. Essentially we're limiting the scope of the positional information to exactly what the window needs, which is not just fine but preferable, since attention is only computed within the window anyway. Hopefully that makes sense; it's a smart way of handling positional embeddings. These are a lot of optimizations the authors thought through, and they're quite clever. Are there any questions about this or the last couple of slides? Sorry, it's a lot of information. (Question about the channels.) Yes, the channel dimension is still there when we calculate attention: that 4-by-4-by-3 patch is really a 48-dimensional feature vector because of the channels. What changes as we merge patches is the information contained in it: the channel size gets doubled (or otherwise rescaled) depending on the linear transformation we apply, very similar to what we talked about with CNNs and advanced architectures.
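Before the stage-by-stage math, here's a tiny illustration of the cyclic shift described above (my own sketch; torch.roll is a standard way to do it, and the window size is just the 7 from our example):

    import torch

    M = 7                                   # window size in patches
    shift = M // 2                          # shift by half a window (3 patches)
    x = torch.randn(1, 56, 56, 96)          # (B, H, W, C) grid of patch features

    # Shifted-window step: roll the feature map so the leftover border regions
    # wrap around, letting us keep the same number of full-sized windows.
    shifted = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))

    # ... window partition + attention would happen here (with a mask so wrapped
    #     regions that aren't actually adjacent don't attend to each other) ...

    # Roll back so features line up with their original positions again.
    restored = torch.roll(shifted, shifts=(shift, shift), dims=(1, 2))
    assert torch.equal(restored, x)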
Okay, this is where the math comes into play. It's a little confusing, but it's just the combination of patches finally landing in a prediction. If we have 7-by-7 patches per window and 8-by-8 windows, currently encoded with C = 192 because this is a Swin-Large model, then we have 3136 patches. We do the merging process, which reduces the number of patches to one-fourth; we talked about this earlier when we went from 64 to 16 patches by reducing an 8-by-8 grid to 4-by-4. Drawing this out quickly: we have 8 by 8, we combine neighboring groups, and we end up with 4 by 4, halving each spatial dimension, which here takes us to 784 patches. We doubled the encoding size, though, through the linear embedding, so 2 times 192 gives 384-dimensional encodings even though we now have only 784 patches; our dimensions are height/8 by width/8 by 2C. That is the end of stage two. In stages three and four we do the same thing again, halving the spatial dimensions each time, so we get height/16 by width/16, then height/32 by width/32, and at every stage we double the channel size to keep the information roughly constant, going from 2C to 4C to 8C. After stage four, starting from 224 (which was originally divided by 4), we're at 224/32, which gives 7-by-7 tokens, and an embedding size of 192 times 2 times 2 times 2, which is 1536; that accounts for stages two, three, and four. So we end with 7-by-7 tokens and 1536 dimensions. The token grid now equals the window size: when the number of tokens matches the number of patches in a single window, we know we've finished the process, because we've boiled the image down to a representation the size of our initial window breakdown. The last components are an average-pooling layer, which pools those tokens and averages them, and a norm, to get a single representation with 1536 dimensions; lastly there's a classification head to convert that embedding into the right class. The MLP classification head at the end is what gives our final prediction.
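Here is a minimal sketch of the patch-merging step that connects those stages (my own illustration, loosely following the shapes above; it's not the authors' code):

    import torch
    import torch.nn as nn

    class PatchMerging(nn.Module):
        """Merge each 2x2 group of neighboring patches: 4x fewer patches, C -> 2C channels."""
        def __init__(self, C):
            super().__init__()
            self.norm = nn.LayerNorm(4 * C)
            self.reduce = nn.Linear(4 * C, 2 * C, bias=False)   # concatenated 4C -> 2C

        def forward(self, x):                    # x: (B, H, W, C)
            x = torch.cat([x[:, 0::2, 0::2], x[:, 1::2, 0::2],
                           x[:, 0::2, 1::2], x[:, 1::2, 1::2]], dim=-1)   # (B, H/2, W/2, 4C)
            return self.reduce(self.norm(x))     # (B, H/2, W/2, 2C)

    x = torch.randn(1, 56, 56, 192)              # stage-1 output for Swin-Large
    for stage in range(3):                       # stages 2-4 halve H, W and double C
        x = PatchMerging(x.shape[-1])(x)
        print(tuple(x.shape))                    # (1,28,28,384) -> (1,14,14,768) -> (1,7,7,1536)

That final (7, 7, 1536) grid is exactly the 7-by-7 tokens with 1536-dimensional embeddings that get average-pooled and fed to the classification head.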
Taking a step back: Swin Transformers manipulate windows so that attention is performed on certain subsets, but cross-window attention happens between regions as windows are combined, moved, and shifted. This begs the question: are we hurting some of the things that make Transformers what they are? Are we reintroducing inductive bias by making these models more similar to CNNs, since we now have not quite a sliding window but shifted windows, which is kind of the same idea? There have been studies on this, and they show that Swin models perform pretty well with respect to corruption robustness, but that they have a shape bias far lower than ViT. ViTs have quite a high shape bias, which we're glad about (we want shape prioritized over texture), whereas Swin models have a lower shape bias than Vision Transformers, so that may be something we've introduced by shifting windows, which isn't ideal. However, they don't fall prey to the issues that CNNs do, like being weak against adversarial attacks (that should read "against adversarial attacks" on the slide). So are we getting closer and closer to the Goldilocks point of the shape-texture bias? We want something that can identify changes in shape and changes in texture, much like human eyes, but a model in the middle: not a CNN that sits so far toward the texture side that laying another texture over an image makes it misclassify, and not something with the problems of Vision Transformers either, namely high computation cost and the inability to classify fine-grained subsets of an image. The beauty of Transformers is in their genericness: they can capture patterns accurately no matter the data type, domain, or use case. It's the same Transformer architecture whether it's used for vision applications or NLP applications; because of the self-attention calculations, they're basically looking for patterns no matter what the data is. More data yields more performance (we've talked about this: Transformers shine when you have a ton of data), and ViT shows clearly visible improvements as you scale up to 64 layers, which is what we expect from Transformers. Lastly, Swin is scalable: it has linear rather than quadratic time complexity, roughly M by N, as you scale the size of your image. Re-attention and other techniques can help Transformers get even better: we calculate attention once, but we can recalculate attention over the same parts. And we talked about this example last week, doing a French-to-English translation with a Transformer model. Andrej Karpathy, the former head of AI at Tesla, also cites this paper as very cool, so a lot of really cool things are happening in this field. Food for thought, a quote from an author whose name I forget: what are the benefits and drawbacks of having a model that can take in any kind of data, process it, and pull out patterns? Is generalizability the future of AI, a kind of general AI, a black-box model I can pass anything into and ask anything of, and it will synthesize the patterns accordingly? I can ask humans a whole bunch of questions (is this chair the same color as that chair; if you're bilingual, what does this mean in that language), so can we create a model that mirrors the human brain by having one model do a whole bunch of things depending on the input and output you want? And is Goldilocks always what we want? If you want task precision for self-driving cars, do you want a broadly robust Goldilocks model; is a self-driving car really going to need to do translations? There are a whole bunch of considerations to weigh. Overall, the field of adversarial robustness is still developing; shape and texture alone aren't enough features to prove invariance to every attack, so a natural future step would be testing Swin against Carlini-Wagner or other more advanced adversarial attacks. That's basically it, that's all of it, but thank you guys for coming; I hope you learned something new.
These slides and this recording will be posted. Thank you guys for coming.
[CS 198-126: Modern Computer Vision, Fall 2022, UC Berkeley. Lecture 12: Diffusion Models.]
Recording on. Okay, it's 7:10, so it's probably a good time to start. Today we'll be talking about something called diffusion models, and this is going to be the last lecture in the series of image-generation lectures we've seen over the last few weeks; hopefully it caps off everything you've learned so far in a nice way. Here's a quick agenda. One disclaimer: this lecture is more mathematically involved than the previous lectures. You don't need to know all of these details; I'm going to mention them anyway, because I feel that if I don't, I'd be doing diffusion models a big disservice, but if you get lost in the weeds, that's fine and understandable. This math is not easy to understand; I'll say that myself. In that case, let's get started. There are many different kinds of generative models out there, and we've seen two of these classes in this course: variational autoencoders, or VAEs, and, over the last two lectures, GANs. There are other families too, like autoregressive models and energy-based models; we won't talk about those in this course, but if you want to learn more about them, check out CS 294-158, taught by Pieter Abbeel, which goes over those kinds of models. What we will do instead is talk about this new class of models called denoising diffusion models. I'll start off with some of the theory behind diffusion and some of the math that goes into it, but before I delve into that, let's talk about how you synthesize data in the first place. In machine learning we make the assumption that your data comes from a distribution. It can be any kind of data: image data, text data, audio, anything else. We assume it has some underlying distribution, and your dataset is basically a set of samples from that distribution. The goal of generative models is to learn the distribution so you can sample from it and get a new data point that looks like it came from the training dataset. Does that intuition make sense? If you look at the image on the top right, we have this training dataset, and we can see it comes from a distribution that, if you follow my cursor, looks something like this. What the model does is learn to sample from that distribution, generating this test set, and you can see it looks very similar to the training dataset, which means the model must have learned what the distribution actually is. Now, this distribution is not known to us, which is why we have to learn it from the actual data samples themselves, and that is what all of the models you've seen so far have been doing, in a sense. Now, how do you learn distributions? That's kind of a complicated question. There are two main methods. Your distribution is probabilistic in nature, so you can define something called a likelihood: if the likelihood of a sample is high, there's a good chance that sample came from the distribution, and if it's low, the sample probably did not come from the distribution. One thing we can do is try to increase, or
maximize, the likelihood of the samples the model generates; that's one way to do it. The other way is to minimize some divergence metric between your data distribution and the distribution the model learns. There are many divergence metrics; I'll get to some in a few minutes, but that's roughly how generative modeling works. In a sense, we've already learned about two kinds of models, VAEs and GANs, and even though it might not be obvious, those classes of models are doing exactly what I just described. The VAE loss looks like the complicated expression up there, but what's really happening is that the loss is a bound on the likelihood, so if you minimize the loss you are, in a sense, increasing the likelihood, and that's why VAEs work. Similarly, for GANs we have this adversarial loss function, which looks like a complicated mess, but when you optimize that objective you are essentially minimizing something called D_JS between the training-data distribution and the distribution the generator learns. D_JS is the Jensen-Shannon divergence, basically a divergence metric that is low when the two distributions are fairly similar and high when they're very dissimilar. Any questions about this so far? Okay. Once you have learned a distribution that mimics the data distribution, how do you generate a new sample? The way pretty much all models do it is: take a sample of random noise and convert that sample into something that looks like it might have come from the data distribution. If you remember GANs, you took some latent noise vector, passed it through the generator, and out popped an image; that's how GANs work, and it's essentially how VAEs work as well. In both cases you pass noise into a model and get an image out, so you're taking one single step from the noise vector to the actual image. That single step is not easy to analyze: there's a whole neural network in between, and you don't really know what's happening inside it. What diffusion models do instead is take multiple tiny steps rather than one big step, because tiny steps are much easier to analyze; these tiny steps form a Markov chain. (Have you heard the term Markov chain before? Yeah.) That's basically the motivation for why diffusion models work. One more note on this slide: these models were inspired by non-equilibrium statistical mechanics. I just wanted to put the inspiration out there; don't worry if you don't know what that is. Now, a diffusion model has two parts. The first is the forward process: you take an image and add noise to it, then take that noised image and add some more noise, and keep repeating. Your image is some data point x_0; adding noise gives x_1, adding noise again gives x_2, all the way up to some x_T. You repeat this process for T time steps, and the noise you add can be parameterized by this
Gaussian distribution, given over here. You define values called beta for each time step; this set of values is called a noise schedule, because it controls how much noise you add at each time step according to a particular schedule. And if you keep adding noise to the image, you can actually show mathematically that what you end up with at the very end is basically pure noise. There's also a mathematical trick, proposed in the original diffusion paper. We know that the image at time step t, conditioned on the image at time step t-1, is a noisy version of x_{t-1}, and you can express that in a particular form; again, don't worry if you don't know what that means. What the trick shows is that you can express x_t in terms of x_{t-1}, recursively repeat that, and actually define x_t directly in terms of x_0. In other words, you can go directly from the initial image to its noisy version at any given time step t, and this very cool trick simplifies some of the mathematical results you'll see later. I said there are two parts to a diffusion model; the second part is called the reverse process. We just took an image and turned it into pure noise, which isn't really helpful by itself; we want to generate a new image, and in the VAE and GAN lectures we saw that we could generate an image out of noise, which is what the reverse process is trying to do. The idea is: if I look at my sequence of noising steps in the forward process, what if I can find a way to reverse it? I'd start with noise, move to a slightly less noisy version of the image, and keep repeating until I get back to the original image. The goal of a diffusion model is to learn how to do this process in reverse. Any questions about the two main steps of a diffusion model? Okay. So, looking at the whole process that went from x_0 to x_1 to x_2, all the way to x_{T-1} and x_T: what we were saying is that we know the conditional distribution of x_t given x_{t-1} is a Gaussian, but we don't know what the reverse distribution is, the distribution of x_{t-1} conditioned on x_t. That is going to be very hard to find, because it turns out that if you apply Bayes' rule to these distributions, you need terms that are very hard to compute: you'd need to know what the exact distribution looks like, which is intractable (you'd have to evaluate a very nasty integral, which isn't possible in any reasonable amount of time). So what you do instead is try to approximate what that distribution looks like. Any questions so far? (Yes, the q function is the distribution of x_t conditioned on x_{t-1}; it's the distribution that defines what x_t looks like. And q of x_{t-1} given x_t is the reverse distribution; we don't know it, so we try to approximate it using this p function, which is what the neural network does, since it's parameterized by theta.)
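For reference, the standard DDPM form of the forward process the lecture is describing can be written as follows (this is the usual notation from the papers, not something copied off the slides):

    q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\right)

    \alpha_t = 1-\beta_t, \qquad \bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s

    q(x_t \mid x_0) = \mathcal{N}\!\left(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t)\mathbf{I}\right)
    \iff
    x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon, \quad \epsilon \sim \mathcal{N}(0, \mathbf{I})

The last line is the "jump straight from x_0 to x_t" trick mentioned above; as t grows, alpha-bar goes to zero and x_t becomes pure noise.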
Before we define our neural network, we need to decide what form this distribution should even take, because if we don't know what the distribution looks like, we can't fit it with a neural network in a proper manner. It turns out that if you look at the forward process at, say, step x_5, and you look at the distribution of x_4 given x_5, or x_3 given x_5, or x_2 given x_5, this is what those distributions look like, and they turn out to be very hard to compute because they look very weird. I don't know what that distribution in red is; it's some arbitrary distribution I can't really parameterize, and it's the same if you look at the distribution of x_1 conditioned on x_5, and so on. But wait: if I look at the distribution of x_4, the step right before x_5, and squint a bit, it kind of looks like a Gaussian, right? Does everyone agree that the one down below looks like a Gaussian? So we can make a guess and say that the reverse distribution over a single step should also be a Gaussian. It turns out this guess is indeed correct, and the reason goes into some very deep theory behind stochastic differential equations. I don't want to get into that, since I haven't taken such a class myself and don't completely understand it either, but there is a justification for why we can assume the reverse distribution is also Gaussian. Now, a Gaussian is parameterized by two things: you need a mean and a variance. What diffusion does is try to learn this mean and variance with a neural network. Any questions so far? Cool. Now, how do we train this? We need a loss function, something we can minimize or maximize. It turns out we can use the same trick you might have seen in the VAE lecture. What we're trying to do is maximize the likelihood that an image we generate looks like it comes from the original data distribution, but that likelihood cannot be calculated directly. In the VAE lecture we saw that we can bound the likelihood using something called the variational lower bound, and we apply the same trick to diffusion. Applying this bound to the likelihood, you end up with a loss function that looks like a sum of multiple smaller loss terms, from L_T down to L_0, given by the expressions below. Typically we ignore the term L_T because it's almost always zero and doesn't really contribute to the loss; the terms that do contribute are L_0 up through L_{T-1}. In this lecture we'll ignore what L_0 does, because it's a bit complicated and I don't want to get into the weeds; instead we'll analyze the middle terms, L_t for t from 1 to T-1. Any questions? (Again, lowercase t is a time step, and uppercase T is how many steps are in the diffusion process; it's the horizon, if you've heard that term before.) So this term over here is what we're trying to analyze: we are
looking at the KL divergence. If you don't know what that is, it's basically another divergence metric: when this term is low, the distribution of x_{t-1} conditioned on x_t and x_0 is very similar to the distribution our neural network learns, and when it's high they're very different. Our goal is to drive this term as low as possible. So we're looking at the KL divergence between this q distribution over here and this p_theta over here. It turns out that the distribution q(x_{t-1} | x_t, x_0) has a very explicit form, which I've written at the top; I'm not going to go into the derivation of why it has this form, but I've linked some papers at the very end you can check out if you're curious. What ends up happening is that if you take the KL divergence between that distribution and the distribution your neural network predicts, you get the term down below: you're trying to minimize the L2 loss between the mean of your learned distribution and the mean of the conditional distribution, this tilde-mu_t given by the expression up here. It turns out you can do some more algebra and re-express this tilde-mu_t, the whole term my cursor is pointing at, as the expression down below in the red box; those two expressions are exactly the same, and again the derivation is in the papers linked at the very end. So the objective is saying that we want our learned mean to predict what's in the red box, since we're trying to minimize this L2 loss. What the authors of the diffusion papers say next is: instead of trying to predict everything inside the red box, which looks very complicated, what if we predict a single term? We know tilde-mu has that form, so let's take inspiration from it and parameterize mu_theta, the learned mean, in a very similar form, but make the noise term at the end the learned part. So the equation at the very top says I can write my learned mean in this form, where the learned part comes from this epsilon. If you make those substitutions into the loss function and do some more algebra, what you end up doing is predicting the noise that was added to get from time step 0 to time step t. And what that means is: if you know what the noise is, you can take a noisy image, subtract that noise, and get the original image back; but if you start from pure noise and subtract some other predicted noise, it's possible you get a completely new image instead. This is the main intuition behind diffusion models. Any questions before I move on? I just want to make sure this part is clear, because it's important. Again, it doesn't really matter if you understand the math or not; it's very complex, so don't get lost in the weeds, just make sure you get the high-level picture. So this is the objective we're trying to minimize, shown on the very last line.
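In the standard DDPM notation (again, the usual form from the papers rather than a transcription of the slide), the reparameterization of the mean and the resulting per-step loss look roughly like:

    \mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, t)\right)

    L_{t-1} \propto \mathbb{E}_{x_0,\,\epsilon}\!\left[\left\lVert \epsilon - \epsilon_\theta\!\left(\sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\ t\right)\right\rVert^2\right]

so the network epsilon_theta only ever has to guess the noise that was mixed into x_t.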
That objective has a complicated mess of weighting terms at the very beginning, and it turns out the authors say you can just drop all of that; it's extra baggage we don't need. What you minimize instead is a very simplified expression: rather than keeping all of those weights, you just focus on minimizing the difference between the noise that was added to an image during the forward process and the noise you predict, so that you can denoise the image. This does mean you no longer have exactly the same lower bound you started with, since it becomes a reweighted version of that bound, but it turns out to be fine, and it actually improves the quality of the images you get later on. So this is what the training process looks like: you take an image from your dataset, you add noise to it (and using the trick we defined earlier, instead of iterating through the forward process from x_0 to some x_t, you can jump to x_t in one step), you pass that noised image into your model, you predict the noise that was added, and you minimize the distance between the predicted noise and the actual noise. You just repeat this until the model has more or less converged; it's a pretty standard training loop of the kind you've seen before. Once the model is trained, the way you synthesize new images is: start with random noise, pass that completely noisy image into your model, predict some noise from it, and subtract the predicted noise from the noisy image in the hope of getting a denoised image back. If you recall from earlier, we wanted to do this process in reverse: start with pure noise, find a slightly less noisy version of it, and keep repeating until we get something that looks like an actual image. That is now possible because our model can predict the noise, which is exactly what this sampling algorithm is doing. Any questions about that? Okay, so that is it for the theory of diffusion. There are some derivations I did not go over, but those are included in some blog posts and papers linked at the very end, and I'd encourage you to check them out. Just as a final call, any questions about how diffusion works before I move on to how it's used in practice? Boom. What we've discussed so far is how diffusion works in general, and it turns out this x_t we were treating as an image doesn't have to be an image; it can be anything else as well. What we'll look at now is the case where x_t is indeed an image, and this is what the diffusion process looks like: in the top half you see an image being noised up in the forward process (you keep adding noise until you get pure stochastic noise at the very end), and in the bottom half you see the reverse process: you start with pure noise, and as time goes on you get an actual image out. Hopefully these two videos give you some more intuition about what's happening in diffusion. Any questions about this?
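A rough sketch of that training loop and sampling loop (my own minimal PyTorch version under the usual DDPM assumptions; `model` stands for any network that maps a noisy image and a timestep to a predicted-noise tensor, and none of these names come from the lecture):

    import torch
    import torch.nn.functional as F

    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)            # noise schedule
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)         # \bar{alpha}_t

    def train_step(model, optimizer, x0):
        """One DDPM training step on a batch of clean images x0."""
        t = torch.randint(0, T, (x0.shape[0],))                    # random timesteps
        eps = torch.randn_like(x0)                                 # target noise
        ab = alpha_bar[t].view(-1, 1, 1, 1)
        x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps               # jump x0 -> x_t directly
        loss = F.mse_loss(model(x_t, t), eps)                      # predict the noise
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        return loss.item()

    @torch.no_grad()
    def sample(model, shape):
        """Start from pure noise and denoise step by step."""
        x = torch.randn(shape)
        for t in reversed(range(T)):
            z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
            eps = model(x, torch.full((shape[0],), t))
            x = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
            x = x + betas[t].sqrt() * z                            # re-inject a bit of noise
        return x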
(Question: what's happening with each of those lines?) Right, so the image itself is a very high-dimensional object, say 256 by 256 by 3, and the plot on the right is showing what happens if you look at a single dimension instead. On the left-hand side of the top plot we have the actual data distribution; we take some sample from it, which is a starting point on the very left, and we keep adding noise until we get something that looks like a pure Gaussian, which is what that gradient of colors shows at the very end. The picture at the bottom is doing the opposite: it says that if I start with a Gaussian and I've learned a reverse diffusion process, I can map back to something that looks like my original data distribution. You can see that the left endpoint of the top image looks very similar to the right endpoint of the bottom image, which shows that the diffusion process was successfully able to learn the original data distribution. Okay. So we know we are predicting noise, and this noise has to be added to the image, which means the noise must have the same dimensions as the image. What's one model whose output has the same dimensions as its input? You saw this during the segmentation lecture: a U-Net. You know that if you pass an image into a U-Net you get a segmentation mask out, which you can use to label different pixels into different classes; but the authors say we can also use this model to predict noise instead, because it's just a model, and the output purely depends on the objective we minimize it with respect to. So with the noise-prediction objective we showed earlier, you can use this U-Net model to output noise. It's not quite the same U-Net you saw earlier, though; they added a whole bunch of stuff to it. They replaced the normal U-Net conv blocks with ResNet-style blocks, they added attention modules (don't worry if you don't know what attention is; I think that's covered next week), they replaced batch norm with something called group norm, and they replaced ReLUs with other activations called Swish and GELU, which look like ReLU but have nicer properties. They basically took all the modern CV tricks you see across different papers and threw them all into this one model, and it works very well, so I guess they were right. (Good question. The original paper used a thousand time steps; it turns out that using more time steps is better, so there was an improved paper that ran the process for four thousand time steps instead and got better quality. There have been improvements since then to reduce the number of steps you need; I think modern diffusion models take maybe 250,
or something around that. There are other tricks to make it even faster: there's a method called DDIM, which I won't cover in this lecture, that makes it so you only need to run this step something like 25 times instead.) Let me escape the presentation for a second; I actually have what this model looks like pulled up here. This is the normal U-Net, but they throw in a whole bunch of extra stuff: they add these residual blocks, which you might have seen on the ResNet homework, and they add these attention blocks (again, don't worry if you don't know what attention is; that's covered next week). I've linked this slide at the very end if you want to take a deeper look at the architecture and how it's implemented in PyTorch. Okay, so that is how image generation using diffusion models works, and it gives pretty good results; there have since been some tricks to improve the results even further. Remember those beta parameters we defined at the very beginning? They control how much noise you add in the forward process, and in a sense they also affect the reverse process, since the reverse process is trying to undo the forward one. Researchers found that if you use a linear schedule, meaning you increase the beta terms linearly, you get a forward process that looks like the top row here, which converts the image to noise really quickly, making it harder for the reverse process to go the other way. So instead they hypothesized: what if we use a different function, maybe something like a cosine, which changes very gradually at the start? The idea is that you convert images to noise slowly, destroying information at a slower rate at the beginning and ramping up later, and that should help the model learn the reverse process better. And it actually does improve the results. I just want to emphasize that the choice of cosine was fairly arbitrary; you could use some other function, and this is just something that happened to work well. (To be precise about the details: they do some slightly odd bookkeeping to make this work. The linear schedule is applied to the beta terms directly, but the cosine schedule is applied to the alpha-bar function I glossed over at the beginning; they define alpha as one minus beta, which is why it's set up that way.) Okay, and I just realized that toolbar was floating over this entire drawing. So: when we defined our distribution for the reverse process, we said it's going to be some Gaussian with some mean and covariance, and we've neglected the covariance part until now. The original diffusion paper used a fixed covariance: they just set the covariance to sigma-squared times the identity matrix, where sigma-squared is either the beta term or a slightly modified version of it. The idea behind that was that the covariance matrix is going to be very small and won't have a big impact on the model anyway; the mean is going to contribute much more significantly than the covariance.
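For concreteness, here's a small sketch of the cosine schedule idea from the improved-DDPM paper (my own code, using the published formula as I remember it, with the usual small offset s; treat the exact constants as an assumption):

    import math
    import torch

    def cosine_alpha_bar(T=1000, s=0.008):
        """Improved-DDPM cosine schedule: define alpha_bar directly, then back out betas."""
        t = torch.linspace(0, T, T + 1)
        f = torch.cos(((t / T + s) / (1 + s)) * math.pi / 2) ** 2
        alpha_bar = f / f[0]                                 # alpha_bar at t=0 is 1
        betas = 1 - alpha_bar[1:] / alpha_bar[:-1]           # beta_t from consecutive ratios
        return alpha_bar[1:], betas.clamp(max=0.999)         # clamp large betas, as in the paper

    alpha_bar, betas = cosine_alpha_bar()
    print(alpha_bar[0].item(), alpha_bar[-1].item())         # close to 1 at t=1, close to 0 at t=T

The point of the shape is exactly what the lecture says: alpha-bar (and hence the amount of surviving signal) decays slowly at the start instead of collapsing to noise right away.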
So it should be fine to go ahead with something like that, and it actually works reasonably well. But there was a second paper, after the original DDPM paper, that said: that's a fair assumption to make, but if you try to learn the covariance, it's possible you end up improving your results even more. What they found is that it doesn't really affect image quality, but it can help you improve your log-likelihood. My understanding is that people in this field use log-likelihood as a measure of how much of the underlying distribution you were able to learn: a higher log-likelihood means you're capturing more of the initial data distribution, which means you can sample a wider variety of images from it, so in a sense this log-likelihood term can be viewed as a measure of diversity. The authors of that second paper said that if you parameterize the learned covariance matrix in this interpolated way, where v is a term predicted by the neural network, you can get better likelihoods; that's one improvement they made. Any questions about the noise schedule or the covariance matrix? (Right, the covariance matrix of a random vector is in principle a big matrix, but it turns out to be fine, because there are tricks you can do with covariance matrices in numerical linear algebra; it's non-deep-learning-related, basically weird magic, so it's not that big a deal.) They also proposed some architectural improvements; I've put those up there, but I don't think they're that important, because the architecture of this U-Net changes in pretty much every paper: different papers all use some U-Net, just different variations of it. These are changes they found helpful in this one particular paper: make the model bigger, which is kind of obvious; since they're using these attention modules I referenced earlier, use more attention heads (again, you'll learn about attention next week); and since the U-Net uses these residual blocks, and we know there's a GAN model called BigGAN that works really well, steal some parts of the architecture from there, which also improves quality even more. They also use something called adaptive group normalization: my understanding is that you covered adaptive instance normalization last week, and this is basically a variation of that idea applied to group normalization instead. Again, don't focus too much on these specific improvements, because the U-Net changes from paper to paper; I don't even think these exact changes are used in something like Imagen or DALL-E 2. Oh, and this is a big one. Recall from when you covered GANs that you can condition on a class label and generate an image that corresponds to that label; earlier GANs were just spitting out whatever they wanted, with no control over what you generated, but if you condition on some label, you can guide the GAN to generate a very particular kind of image. You can apply that same idea to diffusion; they call it classifier guidance, and
They also proposed some architectural improvements. I put those up there, but I don't think they're that important, because the architecture of this U-Net model changes practically every week — different papers all use some U-Net, just different variations of it — so these are just changes that one particular paper found helpful. Make the model bigger, which is kind of obvious. Use more heads in the attention modules I referenced earlier; if you don't know what that means, you'll learn about attention next week. They also took the residual blocks and other pieces of the architecture from BigGAN, a GAN model that's known to work really well, and stealing those parts improved quality even more. And they use something called adaptive group normalization — you covered adaptive instance normalization last week, and this is basically a variation of that idea, applied with group normalization instead. Again, don't focus too much on these tweaks; like I said, the U-Net changes from paper to paper, and I don't even think these exact changes are used in something like Imagen or DALL-E 2.

Oh, and this is a big one. Recall from the GAN lecture that you can condition on a class label and generate an image that corresponds to that label — early GANs just spat out whatever, with no control over what got generated, but if you condition on a label you can guide the model toward a very particular kind of image. You can apply that same idea to diffusion; they call it classifier guidance, and the way it works is honestly pretty janky. They train a classifier to predict the class label from the images in your dataset, but instead of training it only on the original images, they also feed in the noisy versions of the images and make it predict the label from those. Once this classifier has been trained, they take its gradient and add it into the diffusion sampling step — it's highlighted in the step down below: you take the gradient of the classifier, multiply it by some other terms, and add it to the mean that you would get from the neural network. The reason this works is covered in a blog post I've linked at the very end; the justification is mathematically heavy and I don't want to get into it in this lecture, but it turns out this improved results by a lot.
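Here is a rough sketch of that guidance step — shifting the reverse-process mean by the scaled gradient of a classifier that was trained on noisy images. This follows the general recipe described above; the argument names and exact scaling are placeholders rather than the precise code from the paper.

import torch

def classifier_guided_mean(mean, variance, x_t, y, classifier, scale=1.0):
    # gradient of log p(y | x_t) with respect to the noisy image x_t
    x_in = x_t.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x_in), dim=-1)
    selected = log_probs[torch.arange(len(y)), y].sum()
    grad = torch.autograd.grad(selected, x_in)[0]
    # shift the reverse-process mean toward images the classifier labels as y
    return mean + scale * variance * grad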
So, when you have generative models, you need a way to measure how good one model is relative to another, and there are some standard metrics. One metric that is very popular in this field is the FID, the Fréchet Inception Distance, and there's also a variant called sFID, spatial FID. These metrics measure image quality, and lower is better. FID was designed to roughly imitate how humans judge images, and it's widely accepted by the community, so people trust it a lot. There are two other metrics, precision and recall: precision measures image fidelity — how close your generated images are to something from the data distribution — and recall measures diversity — are you generating a varied set of images, or the same thing every time?

With the improvements I mentioned earlier, plus classifier guidance, diffusion was able to beat GANs pretty much across the board: diffusion gets better FID than GAN models or Transformer-based models for image generation. These two figures drive the point home; they're from the paper "Diffusion Models Beat GANs on Image Synthesis". On the very left you have images from BigGAN, which was the state of the art until this point; in the middle you have images from the diffusion model; and on the right you have the training set. Both sets of samples are pretty good — there's no glaring difference in quality — but notice that the images from the GAN look very similar to each other. All the flamingos look like slight variations of the same image, whereas you see much more diversity in the diffusion samples. So diffusion models capture both better fidelity and better diversity of the data distribution. In the second set of images — people holding fish — it's the same fidelity-and-diversity story: the BigGAN model is clearly struggling, especially with faces — you get these almost demonic faces in pretty much all of its samples — whereas diffusion captures that finer detail much better. This was basically the point when diffusion models took over from GANs; I think this paper came out in 2021, last year, and diffusion has been dominant in this area ever since.

Earlier I highlighted classifier guidance, which steers a diffusion model using the gradients of a classifier — not a pretrained one, but one you trained yourself, because it also has to work on noisy images. That's a pretty janky process: it's hard to interpret what's going on under the hood or why the technique works, and having to train your own classifier is annoying. To avoid those issues, researchers came up with a better way to guide the model, called classifier-free guidance. Here you train a conditional diffusion model: like with conditional GANs, you feed the class directly into the diffusion model itself instead of using an external classifier. While they were using this setup to understand how guidance behaves, they found that for certain settings of the guidance hyperparameters the results were even better than previous diffusion models and GAN models, and this is how current models work — people use classifier-free guidance all the time, so I just want you to be aware that it exists.

One thing about guidance, though, is that it trades diversity for increased quality. When you condition the model on a class label, you're inherently telling it to produce a particular kind of image, so you're automatically shrinking the range of images it can produce — you're decreasing diversity. You can see that in this picture: the left column is a diffusion model generating without any guidance, and the right column is the guided process. The guided images are definitely better quality, but there's also less diversity among them. Whether diversity or quality matters more is kind of an open question — it's up to the practitioner to decide what they consider more important.
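As a rough sketch of what classifier-free guidance looks like at sampling time — this is one common way to write it, with a guidance scale w, and the model and argument names are placeholders (in particular, passing None for the "null" conditioning is just an assumption about the interface):

def classifier_free_eps(eps_model, x_t, t, label, w=3.0):
    eps_cond = eps_model(x_t, t, label)    # prediction conditioned on the class/prompt
    eps_uncond = eps_model(x_t, t, None)   # unconditional ("null" label) prediction
    # w = 0 recovers the unconditional model; larger w pushes harder toward
    # the condition, which is exactly the quality-vs-diversity knob above
    return eps_uncond + w * (eps_cond - eps_uncond)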
Okay, now there's another class of diffusion models called latent diffusion models. So far we've been training the diffusion model directly on images, and images are very high dimensional, which makes the whole process very computationally expensive — training a big diffusion model can easily take days or even months on GPUs, which just isn't acceptable in a lot of settings. Another observation is about how images are stored: if you convert an image into, say, binary, researchers noticed that most of the bits go toward capturing pixel-level details and relatively few bits go toward the semantic details of the image. If you have a picture of a dog, only a small fraction of the bits describe what the picture contains, and most of them describe low-level stuff like the paws or the fur. But when you're running a generative model — and this isn't specific to diffusion, it applies to VAEs, GANs, really any generative model — what you actually want to generate is the semantic part of the image.

So the researchers came up with a way to make that happen: instead of running the model on an image directly, compress the image into a latent space, run diffusion on the latents, and then decode the latent back up into an image. The way this works is that they learn the latent space first, using that fairly complex loss up there. What it's really doing looks a lot like a GAN, because it uses a combination of adversarial losses that promote reconstruction quality along with some other terms, and that learns a meaningful latent space. I'd recommend reading the LDM paper to get an intuition for why the loss is defined this way; it just turned out to be the best thing they could use. Once the latent space has been learned, you can convert from an image to a latent and back using an encoder and decoder, which is what the E and D in the figure represent. Then they run diffusion on the latents themselves: you take an image, run it through the encoder to get a latent, run the forward (noising) process on the latent, run the reverse process on the latent, and finally pass the result through the decoder to get an image back out.

I wanted to include this model because this is how Stable Diffusion works — have you heard of Stable Diffusion? This is exactly that setup. You can also add other things to it: you can condition on semantic labels, text, and so on, and this model can handle all of that.
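A minimal sketch of that pipeline, just to make the ordering explicit — the encoder, decoder, and the two diffusion processes here are stand-in callables, not any specific library's API:

def latent_diffusion_roundtrip(encoder, decoder, add_noise, denoise, image):
    z = encoder(image)             # compress the image into a smaller latent
    z_noisy = add_noise(z)         # forward process runs on the latent, not pixels
    z_denoised = denoise(z_noisy)  # learned reverse process, also on the latent
    return decoder(z_denoised)     # decode back up to pixel space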
Okay, that's what I had for the core diffusion material, so we can spend the rest of the time looking at pretty images. There has been a lot of progress in diffusion over the past two years, and I want to highlight some of the models that different labs have come up with. This one is DALL-E 2, which was one of the first text-to-image models that really highlighted how powerful diffusion can be at this kind of modeling: you feed in some arbitrary text and get very high quality images back. I've linked the websites for all of these models in the slides, so you can check them out on your own time as well.

In response to DALL-E 2, which was developed by OpenAI, Google came up with their own model called Imagen — I think it's a pretty clever name: image gen, Imagen. They were able to get better results than DALL-E 2; DALL-E 2 wasn't a pure diffusion model, it has some other components thrown in, whereas Imagen is diffusion all the way through.

After that, people asked: why stop at images? What if you do video generation with diffusion instead? That also works — Google proposed a paper called Video Diffusion, where they feed in the prompt "firecrackers" and get the result you see here. In response, Facebook made their own model called Make-A-Video. On this slide there are some examples that are fairly realistic — a horse running out of a pond, an artist's paintbrush on a piece of canvas — and it can also handle weirder prompts, like a grizzly bear in a calculus class or the panda example, and the model can generate something for all of these kinds of prompts. Then, in response to Facebook, Google extended the Imagen model I showed earlier to video as well. They didn't list the prompts for these examples on their website, which is a shame — it would be curious to see what prompts produced these results.

And it doesn't stop there: you can also generate 3D models from text — I think that paper came out just last week, so it's very new; again, the website is linked in the slides so you can look at other examples and see how that model works. Finally, you can apply diffusion to RL. If there are any robotics enthusiasts in the crowd, that's completely fair game too: you can apply diffusion to trajectory planning and even to offline RL — there was a paper earlier this year called Diffusion-QL that achieves state-of-the-art results in offline RL.

To summarize, we went over a lot of stuff today: how generative modeling works, how you can view image synthesis as sampling from a data distribution, and a fair amount of the theory behind diffusion. I know it was math heavy — if you're confused, that's completely fine. I've linked all the papers I used on this slide, along with other helpful resources like blog posts and videos that others have made. I'd recommend checking those out on your own time if you want to learn more, because, like I mentioned earlier, diffusion is the state of the art in generative modeling right now. That's all I have for today.
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_10_GANs.txt
Okay, so I think let's get started here. Today we're going to talk about GANs, but before we do, I was wondering if people had questions, or if we should run a bit of review from the lecture on Tuesday — questions, comments, or concerns about this idea of a latent space and a codebook-style representation of inputs? If not, I'll do a quick review anyway; if there are pressing questions, those are more important than the back half of this lecture, which is going to be far less crucial.

All right, so before GANs we'll go over a little of what we talked about on Tuesday — please feel free to stop me if anything is unclear. On Tuesday we talked about latent variables and this idea of codes: we have a whole bunch of inputs in an image dataset, and we want to compress them — map each image into a much more compact representation, say a 32-dimensional vector. So we have a 128 by 128 image, and the challenge, in the case of an autoencoder, is to figure out how to represent that 128 by 128 image with just a 32-dimensional vector: how can we encode all of the important information in the image into a vector that we then pass to a decoder, whose job is to look at that vector and say, okay, based on this vector I know what was in the input image, I know what the most important parts were, and I'm going to try my best to reconstruct it. And once we're done training an autoencoder on this setup — a giant dataset of unlabeled images — we can generate new images if we want, by sampling new latent vectors that maybe never showed up during training, and if our decoder is working well, hopefully it will decode them into something that looks good and realistic for our dataset. That was the key idea we wanted you to take away from Tuesday. Are there questions or comments on that?
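Just to pin down that review, here's a tiny, fully linear autoencoder sketch — 128x128x3 images squeezed to a 32-dimensional code and back. This is purely illustrative (a real one for images would use convolutional layers, and the sketch returns a flattened reconstruction), and the sizes are just the example numbers from the review above.

import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, image_dim=128 * 128 * 3, code_dim=32):
        super().__init__()
        # encoder squeezes the image down to a 32-dimensional code
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(image_dim, code_dim))
        # decoder tries to reconstruct the original image from that code
        self.decoder = nn.Sequential(nn.Linear(code_dim, image_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))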
All right. So the thing we really want from our generators is outputs that look in-distribution for our data: given a dataset of cats and dogs, we're hoping the generator produces something that looks like a real cat or dog — not a stick figure or something lame. But how do we judge that? It's a really subjective thing: what does "good" even mean? If our data is something really high dimensional, like an image that's 128 pixels by 128 pixels by three color channels, that's a lot of choices at every single pixel, and it can be genuinely difficult to specify what "realistic" means for an input that complex. And you often don't have enough data to form a real probability distribution, so it's hard to check directly whether something is outputting according to our data distribution — it's this weird, nebulous concept we're trying to pin down. Rather than solving it directly by hand-specifying what a good-looking image is, this is deep learning, so we're just going to have another network judge how realistic our outputs are for us. And that brings us to what GANs are and the GAN game. Other questions so far?

So the core idea is that we have two networks competing against each other: one tries to generate data, and the other tries to guess whether that data looks real or fake. It's a game-theory-flavored setup. The discriminator wins the game when it has low loss — when it classifies images correctly as real or fake — and the generator wins when it fools the discriminator into classifying its outputs as real. So the discriminator network takes in a batch of fake images and a batch of real images and tries to figure out which are which, and the generator is responsible for producing the fake images and getting better and better results according to the discriminator. And because everything here is still fully differentiable — all our linear layers, convolutions, all of it — we can back-propagate all the way from the discriminator's output, the classification it makes, back to the generator's weights, so we can still run gradient descent on everything.

That's the high level for GANs: it's effectively a game that these two networks play against one another. Notice it's a slight shift from autoencoders. Autoencoders take inputs, represent them in a compressed way, and decompress them, with the bottleneck in the middle; with a GAN, the thing in the middle between the generator and the discriminator is much higher dimensional — it's as if we swapped the order of the encoder and decoder from the autoencoder setup. It's a slight paradigm shift: in autoencoders the encoder and decoder are trained jointly, working together to reconstruct images, whereas in GANs the discriminator and generator compete against each other. But other than that, the same techniques and architectures you'd use for the encoder and decoder work exactly the same for the discriminator and the generator, respectively — it's not a huge conceptual jump, or a big jump in what you actually have to implement.

Oh, and I should note: the input to the generator is just some random noise vector, sampled from a multivariate Gaussian or something like that. The generator then has to figure out how to assign some meaning to that random noise, so that it isn't outputting the same thing every time.
Because if the generator outputs the exact same image every single time, the discriminator is always going to win — it'll say, ah, I've seen this image before — and it will always be able to tell for certain that the image is fake. So we need to give the generator a source of randomness, something to assign meaning to. The generator sees this latent noise variable and effectively decides things like: if this value is big, my cat is going to be standing up, or running, or something like that — it assigns meaning to the latent noise so it can generate a whole range of real-looking images.

So, the things we have agency to choose: how big the latent vector is and what distribution we sample it from; the generator architecture, which goes from a single latent vector all the way up to something the same size and shape as our real data; the discriminator architecture (there's a typo on the slide, I apologize), which goes from either real or fake data down to a value between zero and one saying how real it thinks the input looks; and then the loss functions we use, hyperparameter optimization, all of that. Those are the knobs when training a GAN.

In layman's terms — don't worry too much about the math on the right, you can go back to it later — the training loop is: on each batch, for the number of times you want to update the discriminator's weights, take a batch of real data, generate a batch of fake data, pass both to the discriminator, and update only the discriminator's weights. Then alternate back and forth between training the generator and the discriminator, giving each a turn to improve. You can choose whether to update the generator multiple times before updating the discriminator, or vice versa — that's another bit of agency in the training procedure. But the core thing I want you to take away is that training a GAN is just alternating between the generator and the discriminator, doing gradient descent on one while freezing the other's weights.

Are there questions so far on GANs, or on this idea and training procedure? Yes, friend? Yeah — so, going back to the very start, it's really hard to specify what makes an image look realistic, so we just have a network that's given a whole bunch of real data and fake data. If the discriminator does its job and starts figuring out patterns — how can I tell that this image is clearly fake rather than real? — then the generator gets feedback from the decisions the discriminator makes about why an image looked real or fake.
We can propagate gradients all the way back into the generator, and the generator can see: okay, the discriminator knows that this color scheme I've been outputting is clearly fake, so I need to change it a little. Does that make sense? Yeah — it wants to fool it, exactly. And the discriminator starts out horrible: we randomly initialize it, it's garbage, it's no good. But as soon as it gets a little better, it starts noticing patterns: okay, the generator is currently just spitting out random noise, that's clearly fake, so I'll look for things that aren't random noise. Then the generator goes: okay, I'd better start generating something with edges, or textures that look like the real images. Then the discriminator goes: shoot, now it's generating textures that look about right, so I guess I should look for higher-level features — there's texture that looks like cat fur, but it's not arranged in a way that looks like a cat, so this one is fake — and now I'll start looking at shape. So as they train they keep one-upping each other, and the hope is that the generator slowly fixes each of these problems in turn: don't output random noise, output something with shape, output objects that make sense in the context of the other objects in the scene.

What about the loss function? So the only loss we really have to worry about is at the discriminator: the discriminator is classifying whether it thinks an input belongs to the class of real images or the class of fake images, and that classification loss judges both how well the discriminator is doing and how badly the generator is doing. If the discriminator gets a really low classification loss, it's doing really well and the generator is doing poorly — so the generator's score is basically the opposite, the negative, of that. An easy way to start thinking about it: the only loss is just a classification loss at the end of the discriminator, except that when we go to update the generator's weights, we take the negative of that loss and use it as the generator's loss function. It's a little nebulous, and we'll get to it in more detail later, but the loss really does come from the end — from the discriminator and how well it classifies the generator's outputs.

Some little notes here: why might we need to update the discriminator multiple times, as opposed to training the generator multiple times? The generator only gets better if the discriminator is good.
Remember what I said: if the discriminator is getting random noise as its fake images and obvious cats and dogs as its real images, and it still never figures out that the random noise is clearly fake, then the generator has no clear way — no understanding at all — of how to improve. The generator only improves when the discriminator starts getting a little better and effectively says: if I'm going to classify something as real, it needs to have some realistic-looking texture, and so on. So it's important that the discriminator can actually do its job; if it can't, the generator never gets any feedback on how to improve. To make sure the discriminator is doing its job and giving the generator meaningful information about why its images are bad, the discriminator may need to be updated a few more times than the generator, so that it can actually discern a meaningful difference. And this is a hyperparameter to tune: how many times to update the discriminator for every time you step the generator's weights.

Okay, so, loss functions. The expression on the slide looks really scary, so let's break it down — ignore the expectation, the big E, for a moment and go over the basic terms. V is our value function, which corresponds to how much value the discriminator is getting out of this game. x is your input data, just a tensor. z is a random vector of noise that the generator uses as its source of randomness, so it isn't generating the same thing every time. D is our discriminator, G is our generator. What the whole thing ends up being is basically cross-entropy loss, which we briefly mentioned — a classification loss. If you're given a set of data labeled 1 and a set of data labeled 0, this is simply the classification loss for something — in this case the discriminator — that has to separate the class labeled 0 from the class labeled 1. The discriminator's output for a single image is just a scalar, which is why this makes sense if you're interested in going back later and confirming the math.

So remember, this is a game theory scenario: the discriminator and the generator have opposite goals — one wants to make this value function really big and one wants to make it extremely small. Here, the higher the value function, the better the classification, so a high value function means the discriminator is doing its job very well and classifying images correctly. The discriminator wants to maximize this value function, whereas the generator has the interest of fooling the discriminator and making it as small as possible. And since we're maximizing a value function, the discriminator does gradient ascent — but that just comes down to a single keyword in your PyTorch optimizer, nothing to worry too much about; it means stepping in the opposite direction from your normal gradient step. That is the min-max part of this.
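For reference, the value function being described here — an expectation over real data plus an expectation over noise, which is just the negative of the binary cross-entropy — is the minimax objective from the original GAN paper:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]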
The high-level takeaway — probably the most important idea — is that we have a value function that corresponds to how well the discriminator is classifying. The discriminator wants to maximize its value, and the generator wants the discriminator to do badly: it wants to generate images the discriminator thinks are real, pulling that value function down, stealing value from the discriminator. They're competing against one another.

So, the big takeaways: we have two models, a generator and a discriminator. We have a value function corresponding to the negative of the cross-entropy loss — which is to say, it corresponds to classifying very well. The generator wants to minimize it, the discriminator wants to maximize it, and they compete, with us alternating between updating the discriminator and updating the generator, back and forth. The generator wins when its data looks real; the discriminator wins when it correctly decides what is real and what is fake. This is the meat of the lecture, so if we want to sit on this for a while and you want to pepper me with questions, that's totally fine — these are the high-level takeaways and probably the most important things from this lecture. Are there other questions or comments on this game that the two networks are playing, and how they inform each other and get better over time, hopefully generating real-looking images in the end?

Yes, friend? Right — so we had a loss function to represent how poorly we're doing; the value function just represents how well we're doing. It's sort of an arbitrary choice, but the point is that if something like the discriminator is trying to maximize a value function, it needs to step up the hill and get to the top of it. So it's basically the opposite of a loss: if you're trying to minimize a loss, the corresponding value function is something you'd maximize. And yes — it's classification: we had a classification loss, the cross-entropy loss, and to get the value function we literally just took the negative of it. The paper could simply have formatted it as a loss that the discriminator minimizes and the generator maximizes, but they chose this form — maybe they'd been doing a lot of reinforcement learning, where you usually maximize a value function. Again, core idea: the value function is the opposite of a loss, a quantity you would usually want to maximize; the discriminator wants to maximize its ability to classify correctly, and the generator is doing its job when its data looks so real the discriminator can't tell. Does that make more sense? If I've confused you with words, please tell me.

Wait, say that again? Okay — yeah, so you're not going to update the generator's weights. Let me draw this out (this is only sort of visible on video, but that's fine): we have our generator, which outputs an image; we have our discriminator; and we also have a dataset of real data over here. The first thing we're going to do is freeze the generator.
Freezing means we're not going to update its weights with gradient descent. We take a batch of real data and a batch of fake data — the fake data is just the generator run on some random noise vectors, its source of randomness — and we just train the discriminator, exactly like a normal classification task: do an iteration of gradient descent, sample some new fake images and some new real images, pass them both in to give it fresh data to work with, and repeat that a number of times, say ten. Then we do the opposite: we freeze the discriminator — we can still calculate gradients through it; "frozen" just means we no longer update its weights — and we sample some fake images (we only need fake images when we're updating the generator), see how well they do against the discriminator, and ask the generator: in which direction could you step your weights so that, given these same fake images — let's say the same noise again — you would have generated them slightly better, so the discriminator thinks they look a little more real? You do that, say, once, and that's your second step; then you loop back to the top and start training the discriminator again. Does that make sense? How can I clarify — what did I say that might have been confusing or conflicting?

Well, the discriminator — right: we've generated a bunch of random noise, turned it into fake images with the generator, and fed those to the discriminator, and the discriminator gives us a scalar — I should rewrite this, it's not a loss anymore, it's a value — describing how well the discriminator did. We can then take the partial derivative of that value function with respect to the generator's weights, because, again, it's all differentiable: it's all just matrix multiplications, convolutions, things we can take derivatives of, so we can do calculus on it. So, using the partial derivative of the value function with respect to the weights in the generator, we can ask: given the same random noise again, how could we have generated better images? That's basically what this gradient is. And as we calculate gradients and do backpropagation, we simply don't update anything in the discriminator. Does that make a little more sense?
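Putting that walkthrough into code, here's a minimal sketch of one round of the alternating loop — step the discriminator several times while the generator is held fixed, then step the generator once while the discriminator's weights are left alone. It assumes the discriminator ends in a sigmoid so its output is a probability, and it uses the lecture's framing where the generator's loss is just the negative of the discriminator's classification loss on the fake batch; all names are placeholders.

import torch
import torch.nn.functional as F

def gan_training_round(G, D, opt_G, opt_D, real_images, latent_dim, d_steps=10):
    batch = real_images.shape[0]
    # --- discriminator turns: generator is "frozen" (we just never step it) ---
    for _ in range(d_steps):
        z = torch.randn(batch, latent_dim)
        fake_images = G(z).detach()          # detach so no gradients flow into G
        d_real = D(real_images)              # want these classified as 1 (real)
        d_fake = D(fake_images)              # want these classified as 0 (fake)
        d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
                 F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
        opt_D.zero_grad()
        d_loss.backward()
        opt_D.step()
    # --- generator turn: gradients flow through D, but only G's optimizer steps ---
    z = torch.randn(batch, latent_dim)
    d_on_fake = D(G(z))
    # lecture's framing: generator loss = negative of the discriminator's
    # classification loss on the fakes (the original minimax version)
    g_loss = -F.binary_cross_entropy(d_on_fake, torch.zeros_like(d_on_fake))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()

In practice most implementations swap that last generator loss for the non-saturating variant that comes up at the end of this lecture.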
And the idea is that it really is a game. Say — and this is a homework/quiz question, so you're in on it — that in the bottom right-hand corner of all our real images there's a pixel that's colored red, and our fake images don't have that. The discriminator is going to learn: oh, I see this pixel colored entirely red, this must be a real image; or, this one doesn't have the red bottom-right pixel that all the real images have, so it must be fake. The generator will then clearly learn: if my weights were such that this pixel came out just a little bit redder, the discriminator would have a little more trouble. It's this kind of game: any strategy the discriminator comes up with — real images tend to have this defining feature, fakes don't — as soon as the discriminator identifies that feature as separating real from fake, it becomes the generator's job to figure out how to fix it, how to generate a feature that looks like that and fool the discriminator. That back and forth is the main idea: a zero-sum game between the two of them. Does that make sense? How are we feeling about that?

All right, conditional GANs. So far we've talked about a dataset of just digits — one image is a seven, another is a six, and so on — and how we'd go about generating realistic-looking digits. But what if we want some control over which digit gets generated? With the current setup, if we want an image of a seven, we'd have to keep feeding the generator random vectors until it happens to produce something that looks like a seven, and that's not ideal, especially as you have more and more classes — ImageNet is a dataset with a thousand classes, and you really don't want to sit there feeding in random noise until it spits out the right kind of object. Conditional GANs are basically the solution for that. So: we have a dataset of digits between zero and nine inclusive, ten classes in total; we don't have agency over the class, and we'd like to have it. The basic idea is that, appended to our vector of random noise, we also pass a one-hot vector corresponding to that class: if we want the generator to produce a nine, we append all zeros except for a one in the very last position. The question then is how we make sure the generator is actually incentivized to use this feature, because right now it has absolutely no incentive to use this information — our telling it "we want you to generate a nine" means nothing to it. The basic idea with a conditional GAN is that we're not just going to give this label to the generator, we're also going to give it to the discriminator — and this is where the game theory comes back in a really big way. Oh, and I should add: we're going to attach the correct one-hot label to our real images as well.
If the generator completely ignores the label, the discriminator now has a strategy: since every real image comes with its correct label, the discriminator can simply look at the label we pass in and ask whether it corresponds to the image in question. All real images will have correct labels, but if a fake image is a very convincing-looking six that's clearly labeled as a three, the discriminator can look at the label, see that it doesn't match what's in the image, and immediately identify the fake as fake (and the real images as real). Because the discriminator has this very real strategy — which it didn't have until we handed it the label too — the generator, if it doesn't want to get cornered, is essentially forced to pay attention to the label and realize: the discriminator is going to know I was supposed to output a nine, so I guess I should output a nine. The only way for the generator to avoid detection is to make its output actually match the label, since all real images have correct labels. That's the idea of a conditional GAN. Are there questions on that? Probably. Yes, friend?

No, it's good, I appreciate it. Are you asking what the architecture would look like? Yeah — sorry, this diagram got a little convoluted; let me clean it up. The generator takes in the label we asked it to generate an image of, as well as a source of random noise, and it outputs a pair: an image — the fake image, call it I with subscript F — along with that same label vector, L subscript F for fake. The discriminator takes in the generator's output — the fake image together with the label the generator was supposed to draw — and it also takes in pairs from the real dataset: a real image, I subscript R, with its correct label, L subscript R. So the discriminator always takes in an image plus a label, and its job is to figure out whether that pair of image and label looks real or not. And again, there's a very real strategy here: if the discriminator sees that for fake images the image doesn't match the label, it can clearly tell real from fake, and the generator will have to fix that very quickly.

And no — this label is something we gave it; we chose the labels, we picked them ahead of time. The generator cannot influence that value; all it can do is try its best to make the image match the label. Are there more questions?
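As a small sketch of that wiring — appending a one-hot label to the generator's noise and handing the same label to the discriminator alongside the image. This assumes an MLP-style discriminator that works on flattened images; a convolutional discriminator would usually inject the label differently (for example by broadcasting it as extra channels), and all names here are placeholders.

import torch
import torch.nn.functional as F

def conditional_inputs(z, images, labels, num_classes=10):
    one_hot = F.one_hot(labels, num_classes).float()
    gen_input = torch.cat([z, one_hot], dim=1)                   # noise ++ label
    disc_input = torch.cat([images.flatten(1), one_hot], dim=1)  # image ++ label
    return gen_input, disc_input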
All right, we'll truck on here. This is the last section that's really important; if we have time we'll cover the stuff at the end, and otherwise you can feel free to look at it on your own, but this is the meat I really want to get to. Let me grab my water.

So there are some problems with this setup — not all of them glaring, some are subtle — but they're very real issues, and they're part of why GANs aren't as popular anymore compared to much more stable methods. First: the generator can stop learning. We need the generator to keep getting progressively better, but there's a problem if the discriminator gets too good — if on almost any image it can classify real versus fake with nearly 100% accuracy. In that case there may be no direction in which the generator can step its weights to become even slightly better; the generator's gradients go to zero when the discriminator is way too good. In more theoretical terms, the discriminator learns super sharp decision boundaries, which means lots of flat spots in the generator's loss landscape. That's one very real problem.

The generator can also slip into what's called mode collapse. If it finds a set of maybe ten outputs that do pretty well — the discriminator is fairly convinced they're real — the generator is incentivized to output those fakes a lot, mapping every source of random noise to the same handful of images; and by the time the discriminator catches on, the generator can be stuck in its ways, perpetually outputting those ten images. For any setting of the discriminator's parameters, the generator's payoff in the GAN game is maximized when it outputs only its single best-performing fake. To put that slightly more mathematically: imagine a number line of all possible images the generator could output, with the generator's objective plotted above it. If there's one image that fools the discriminator best, then the generator's expected value in the game is optimized by outputting only that one image — samples that land elsewhere and fool the discriminator less just drag the expectation down. So the generator is encouraged to output that one example every single time. That's mode collapse, and it's definitely an issue.

There are also training-procedure issues: the vanilla GAN setup doesn't converge. With the other models we train, the loss goes down and eventually stops improving; unless you've done something horribly wrong, your training loss just goes flat — it converges, and there's no longer a way for the model to improve on the training data. GANs don't have that.
It's a perpetual game, and there will always be slight little edges to find: the discriminator can keep trying to overfit to specific features or specific patches of pixels in the real images — "I've never seen this very specific configuration of pixels before" — so there's a perpetual trade-off between the generator and the discriminator. You never get to look at a loss curve and say, okay, it's no longer improving, we're done; there's no metric that tells you when a GAN is finished training. In the default setup it simply never converges. So these are a bunch of issues with GANs that are really common, and they're why GANs are really annoying to train: they sometimes give really good results, but they can be incredibly fickle. Are there questions on this? You'll probably encounter it on the homework at some point — if we've done our job right, you'll observe it in at least one context — because this instability in the default setting is one of the defining features of GANs.

Okay, if there are no questions on that, we'll keep rolling. That was the main thing: everything before this point is what we really want you to take away from this lecture, and everything from here on out, if it goes over your head, is totally fine — we won't have enough time to cover all of it anyway. So if people have questions, comments, or points of discussion on the earlier material, I'm happy to take those first; it's far more important than these little improvements people have found to make GANs a bit more stable.

All right, I'll keep trucking then. The first improvement concerns the actual math of the loss function we defined; it's known as the non-saturating loss. It was proposed in the original GAN paper, but they didn't actually use it there — I'm not sure why they didn't test that one. It's a simple reformulation of the generator's objective: beforehand, there was a single value function that the generator tried to minimize and the discriminator tried to maximize, and now the discriminator and the generator get slightly different objectives. Here's the problem it fixes. On the plot — the x-axis is a little small, I apologize — the x-axis is the discriminator's output, whether it thinks an image is real or fake. If you look at the partial derivative of the generator's objective with respect to the discriminator's output, in the regime where the discriminator is not being fooled at all — it confidently says "this image is fake, output zero" — the magnitude of that slope is incredibly small.
That regime — the discriminator just shutting out the generator — is exactly when this matters: the generator only learns through that first partial derivative, the derivative of the value function with respect to the discriminator's output on generated data, and that derivative is problematically small precisely when the discriminator clearly recognizes fakes. So rather than having the generator minimize that term — the only term in the value function the generator has any agency to change — we instead have it maximize something extremely similar. It basically boils down to a slight mathematical trick: rather than blindly minimizing its term of the cross-entropy, the generator's objective becomes maximizing the probability that its generated images get classified as a one, as real data, using a similar binary cross-entropy-style loss. In the case where the discriminator is absolutely shutting your generator down — clearly recognizing everything it makes as fake — the generator now gets much, much larger gradients and learns a lot quicker, which helps with some of these training instabilities.

It's the most minor of tweaks, and I don't expect you to get it right away, but it makes a large difference. The takeaway: the non-saturating loss gives us larger generator gradients and lets us train much quicker when the discriminator is just shutting down the generator. There's literally no way I'll have time to get into Wasserstein GANs, so we'll cut it off here — you can peruse that at your leisure, and there won't be quiz or homework questions on it; it's just more advanced stuff if you want to dig into it. I'll end the recording there, but if you want to come up and talk more about these advanced GAN training techniques, I'm happy to chat.
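To make that swap concrete, here's a minimal sketch of the two generator objectives side by side, assuming the discriminator outputs a probability; the non-saturating version just relabels the fakes as "real" inside the generator's own loss.

import torch
import torch.nn.functional as F

def generator_loss(d_on_fake, non_saturating=True):
    # d_on_fake = D(G(z)), assumed to be a probability in (0, 1)
    if non_saturating:
        # maximize log D(G(z)): treat the fakes as if their target label were "real";
        # gradients stay large even when D confidently rejects the fakes
        return F.binary_cross_entropy(d_on_fake, torch.ones_like(d_on_fake))
    # original minimax form: minimize log(1 - D(G(z))); this saturates
    # (tiny gradients) exactly when the discriminator is winning
    return -F.binary_cross_entropy(d_on_fake, torch.zeros_like(d_on_fake))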
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_6_Advanced_Computer_Vision_Architectures.txt
Hello, hello. All right, I think we can get started here. Before we get into talking about more advanced CNN architectures, I want to start by recapping what we talked about last Tuesday. I apologize — that was kind of a rough introduction; that was me making a couple of last-minute edits that probably hurt more than they helped — so I want to review convolutions and the architecture of a CNN, to make it clearer and to put into perspective how it relates to standard dense neural networks.

I think most people felt okay about the actual mechanics of doing a convolution, and I just wanted to clarify that when we do a convolution operation, we treat it like a layer, the same way we treated matrix multiplication as a layer in standard dense neural networks, where the learned parameters were all the values in the weight matrix and the bias vector. It's a very similar story with a convolution: a convolutional layer has a bunch of filters, all the values inside each filter are learned, and there's also a bias term that gets added to the output of placing the filter at each location of the input. We refer to the input as a volume simply because it sort of looks like a cube. And I wanted to reiterate this idea: if you have one filter, you get one output map — one value for every location where the filter overlapped the input — and if you have a whole bunch of filters, you stack up the outputs from running each of them over the input, and you end up with an output that has that many channels. You can think of the whole thing as a layer of a neural network: after the convolution we still apply our activation function, like ReLU or something else, and we still very much treat it as a layer. And because our loss is differentiable with respect to all of these parameters, we can still take partial derivatives of the loss with respect to them and do gradient descent.
gradient descent um so it's it's simply as opposed to doing matrix multiplication on a vector um when you have images and a CNN you can simply just do convolutions instead of your normal matrix multiplication um which is demonstrated up here um you do convolutions followed by your activation functions you have pooling layers if you want to decrease the size of this volume because it can get quite unwieldy um if you want to decrease the size of it so it's much quicker uh to take this convolution you can do pooling simply just looking at each if you have a two by two pooling you're just going to look at every channel look at a little two by two square take the maximum look at the next two by two square over the next two by two square over to simply just chop your the size of your output in half um and were there were there questions on on this on the sort of mechanics of like what does CNN is and what it what it looks like mechanically um yes friend um yeah yeah every filters has its own bias so basically every everything yeah you can yeah you can think of it like that I I don't really think of it as like after we're done doing all the element wise multiplies um from overlapping our filter with our input uh we then just add the bias corresponding to that filter onto it but yeah that's like in that's probably an equally as intuitive way to do it um is just add the corresponding bias to the corresponding Channel the corresponding Channel and the output yeah um but again the big the big thing I really want to get across is that you treat it just like another layer um you're just gonna stack a bunch of them um and then eventually when you want to get to the end you take this volume and you just unravel it in a very specific you just flatten it all out um and then just do like a dense layer a regular matrix multiplication um at the end yeah yeah no not really um it's just there because like if we if we have like a huge input that's like 256 by 256 I think it's like huge uh the output assuming you don't do a strided convolution which is just taking your filter overlapping in this case a three by three area and instead of moving over one to overlap in the next era you move it over two um if you don't want to do a striving convolution a very simple way to do it is to just look at individual little squares and just take the max in this whole area in each one of these areas and just spit that out um that just immediately cuts the whole thing in half on uh your height and width axis so you have a quarter as many values now but it's still ideally still sort of captures all the main information that was in that uh that feature volume um yeah it's it's more just used so we can get our feature volume down to a reasonable size so it doesn't take forever to run convolutions on it just makes it a lot quicker yeah no well so we're going to talk about um segmentation which is where you have like an image like a person in it and you need to Output another image except each pixel has basically been like labeled with like there's a person in all of these pixels inside of like here and then everything else is background so in that case your output is the same size as your input which gets a little bit weird and we'll talk about that more later how you because at face value that would be like super inefficient all of your convolutions are on these just huge inputs we'll talk about that more later but generally for things like classification you do want to start bringing it down a little bit um there's no reason to 
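To make the recap concrete, here is a minimal sketch of one convolution-plus-activation-plus-pooling "layer" of the kind described above. It uses PyTorch purely as an example framework; the channel counts and image size (3-channel input, 16 filters, 32-by-32 image) are made-up illustration values, not numbers from the lecture.

```python
import torch
import torch.nn as nn

# One "conv layer" as described: filters with learned weights plus a bias per filter,
# followed by a ReLU activation and 2x2 max pooling to halve height and width.
block = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),  # 16 filters, each 3x3x3
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),  # 2x2 max pool: 32x32 -> 16x16
)

x = torch.randn(1, 3, 32, 32)   # a fake RGB image (batch, channels, height, width)
out = block(x)
print(out.shape)                # torch.Size([1, 16, 16, 16]): one output channel per filter
```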
Hopefully that makes a little more sense. Are there more questions? Last lecture was confusing and I apologize; that was a pretty rough introduction to CNNs.

There was a question about what the channels mean after the first layer. For your input image, the depth is RGB, one channel per color for each pixel. But say a layer has 10 filters: the output channels no longer correspond to color. It simply means there are 10 different activation maps, one from each filter, so the channel dimension is 10 deep. Each channel corresponds to what its filter in the previous layer picked up on; if one filter responds to horizontal edges and another to diagonal edges, you can look along the channel dimension and see whether there was an edge in this direction or that direction at each location. There isn't an intuitive notion of color after the first layer; the channels just hold features. Excellent question.

If there are no more questions, I'll hand it over to Rohan, who is going to talk about more advanced CNN architectures. To contextualize this: there is really only one architecture you need to take away from here, which is ResNet, and we'll get to that. If the rest goes over your head, don't worry, it's fine; it's more for people who want to know. Feel free to ask tons of questions about ResNets, because that's where we want to spend the most time. All right, go ahead.

Hello. This is going to be a survey of the history of neural network architectures for computer vision. There's a little timeline here starting from AlexNet and moving forward. LeNet was created quite a while ago, and AlexNet was the first really visible improvement in this field; it kicks off the motivation for building deep networks that can be trained without gradient problems. We're going to talk about AlexNet, VGG, the motivation behind Inception nets, briefly about MobileNets, and land on ResNets and a couple of other miscellaneous networks. None of these are state of the art on ImageNet now, I believe; things have transitioned to transformers and other more advanced architectures that we'll talk about in the next couple of weeks. But understanding the motivation behind these really sets the stage for future advances in the field.

Convolutional nets, like the single convolutional layer Jake was describing, were proposed around 1990 by Yann LeCun, who pioneered that pattern. He's currently the head of AI at Facebook AI Research, furthering the metaverse, for those of you who are into that kind of thing. AlexNet in 2012 was a big, groundbreaking feat: it proposed stacking layers of convolutions and max poolings, following them up with fully connected layers, and ended up with one of the deeper networks introduced at the time. It turned a lot of heads when it achieved around a 17 percent error rate on ImageNet. There are yearly competitions that evaluate state-of-the-art models against ImageNet, and AlexNet won in 2012. In fact, most of these architectures won in their respective years, VGG in 2014 and Inception Net in 2015, I think. A lot of really cool advancements.

Going over the architecture of AlexNet: the original diagram makes it look extremely complicated, but in reality all it is doing is stacking convolutional layers and pooling layers intelligently, synthesizing information from low-level features and working its way up to higher-level features as data passes through the network. That really is the motivation behind most of these architectures: start with a full-dimension image, detect low-level trends, create feature maps, pass them through pooling layers to condense the information, and finally produce a prediction, because by then you have feature maps that correspond to the different classes.

Some observations: there are only five convolutional layers. The network is a convolutional layer followed by max pooling, another convolution followed by max pooling, then three convolutional layers stacked, followed by max pooling, and it ends with three hefty fully connected layers. Dense layers are fully connected, so you're doing matrix multiplication across the entire layer, which means the number of parameters stacks up quite a bit, and so does the number of computations.

But we want to go deeper. Having only five convolutional layers and three dense layers really limits the amount of information the feature maps can synthesize. You might have heard the term deep neural networks, and this course is called Deep Learning for Computer Vision, but AlexNet can barely be considered deep because of how shallow it is. We want to abstract these concepts more; we want the model to learn higher-order feature maps: low-level features like edges early on, and then, as we go deeper, things like correlations between color spaces and how edges combine into larger forms.

Here is a drawing that is hopefully a lot easier to understand than the previous one and says the same thing. You have an input image with three channels, like RGB, similar to what Jake drew earlier. It is passed through a convolutional layer, then max pooled so its spatial dimension is reduced, passed through another convolution and max pooled again, then through three convolutions that don't change the spatial dimension, and max pooled at the end of that. Then you get to the concept Jake talked about earlier, flattening: you unravel the final volume into a one-dimensional vector. That vector is passed through three dense layers, and lastly into a softmax function. Up to that point all of the activation functions are ReLUs, rectified linear units, at the end of each convolutional layer; the softmax at the end turns the final scores into values between zero and one, so it gives you the predicted class probabilities.

This is how the sizes change as an image goes through AlexNet. Say you start with a 227 by 227 by 3 image. The first convolution has an 11 by 11 kernel with a stride of 4, so the spatial size is roughly quartered. You pool with a stride of 2, halving the spatial dimension. You go through another convolution, which adds depth, meaning more channels, and you pool again, halving the spatial dimension once more. Each of these steps corresponds to a stage in the diagram. You end with three fully connected layers and a softmax, which gives the final probability for each class.

Are there any questions on AlexNet before we move on? There was a question about padding: if you want the output to keep the same spatial dimension as you slide the kernel across the image, you apply "same" padding, which basically just puts a bunch of zeros around the outside of the image, on the left and right and also on the top and bottom. With a stride of one and that padding you don't lose information or reduce the size; you are quite literally going over every single pixel of the image. And as you can see, we use a ReLU activation all the way until the end.

VGG you can think of as a deeper AlexNet; there's not too much to go over. The motivation is simply a deeper network: instead of five convolutional layers you have thirteen in VGG-16, and you still have the three dense layers at the end, which became feasible as hardware improved and could handle these large matrix computations. You do end up with a lot of parameters as a side effect of all the computation. One interesting observation from this line of work is using a one-by-one convolution to transform the input channels: one-by-one convolutions can be used to increase or reduce the number of channels. We want higher accuracy, we want a deeper network, and we want something that can compute these matrix products efficiently in both space and time.
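Before going further, here is a rough sketch of the conv/pool-then-dense pattern that AlexNet and VGG share, again in PyTorch. The layer sizes below are simplified stand-ins inspired by AlexNet rather than the exact published configuration, and in practice you would usually train on the raw logits with a cross-entropy loss instead of baking the softmax into the model.

```python
import torch
import torch.nn as nn

# A simplified AlexNet-style network: conv/pool stacks, then flatten, then dense layers.
# Channel counts and layer sizes are illustrative, not the exact AlexNet configuration.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4), nn.ReLU(),   # 227 -> 55
    nn.MaxPool2d(kernel_size=3, stride=2),                    # 55 -> 27
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),  # 27 -> 27
    nn.MaxPool2d(kernel_size=3, stride=2),                    # 27 -> 13
    nn.Conv2d(192, 256, kernel_size=3, padding=1), nn.ReLU(), # 13 -> 13
    nn.MaxPool2d(kernel_size=3, stride=2),                    # 13 -> 6
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),        # one score per ImageNet class
    nn.Softmax(dim=1),            # final class probabilities
)

x = torch.randn(1, 3, 227, 227)
print(model(x).shape)             # torch.Size([1, 1000])
```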
There's a picture of this on the next slide. The motivation behind a one-by-one convolution is that you're applying a linear function across the channels at each position, so you don't lose spatial information as you go across; the spatial size doesn't change, but the number of channels does. Depth is how I like to think about it, but really it's the channels that change; the actual image dimensions stay the same. That was a good question.

Here is the example with VGG-16, which has 16 weight layers, with the three dense layers at the end. As you go through the convolutions, the spatial dimension of the feature volume shrinks, roughly halving at each 2-by-2 pooling stage, while the number of channels increases as you accumulate information. You have multiple convolutional layers stacked, pooled together, and lastly the volume is flattened and passed into three dense layers for the big matrix multiplications at the end.

Next, Inception nets. This drawing is also quite complicated, but the takeaway is that at each stage you run several operations in parallel, regular convolutions and max pooling at different kernel sizes alongside one-by-one convolutions, and you combine the results, so information from the previous layer is also used in the current one. There are a lot of one-by-one convolutions; they're used to change only the number of channels without modifying the spatial size. These networks are deep and wide, to capture features at different scales, and the same idea will come back when we cover depthwise and pointwise convolutions, which combine into a more computationally efficient method while still retaining the information you get from a traditional convolutional network.

Inception nets also attach extra classifier heads partway through the network. The stated motivations are to encourage discrimination in the lower stages, to increase the gradient signal that gets propagated back, and to provide additional regularization: we want to learn low-level features in the earlier stages of the classifier, and as we go deeper we want earlier layers to keep receiving a useful signal during backpropagation.

There are a couple of common issues that result from just blindly stacking layers. You might ask: we saw AlexNet, and we saw VGG, which was basically the same thing with more layers, so why can't we just infinitely stack
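As a hedged illustration of the two ideas above, the snippet below shows a one-by-one convolution changing only the channel count, and a toy Inception-style block that runs parallel branches at different kernel sizes and concatenates them. The channel sizes (256 in; 64, 96, and 48 out per branch) are arbitrary example numbers, not the real GoogLeNet configuration.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 256, 28, 28)           # example feature volume: 256 channels, 28x28 spatial

# A 1x1 convolution mixes channels at each location without touching height or width.
reduce = nn.Conv2d(256, 64, kernel_size=1)
print(reduce(x).shape)                     # torch.Size([1, 64, 28, 28]): channels change, size doesn't

# A toy Inception-style block: parallel branches at different scales, concatenated on channels.
branch1 = nn.Conv2d(256, 64, kernel_size=1)
branch3 = nn.Sequential(nn.Conv2d(256, 64, kernel_size=1),
                        nn.Conv2d(64, 96, kernel_size=3, padding=1))
branch5 = nn.Sequential(nn.Conv2d(256, 32, kernel_size=1),
                        nn.Conv2d(32, 48, kernel_size=5, padding=2))
out = torch.cat([branch1(x), branch3(x), branch5(x)], dim=1)
print(out.shape)                           # torch.Size([1, 208, 28, 28])
```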
these layers? There are a lot of problems that come with that, which Inception and ResNets try to fix, and the fix is adding residuals.

There was a question about what "branched" means here. The main classifier is at the end, but the network is multi-headed: you take the value at one stage and also feed it forward to be combined with the output of a later stage. It's a way of maintaining a residual of an earlier value, which is why these are called residuals. We want higher accuracy and a simpler architecture; Inception nets give us some of that, and ResNets take it to another level.

Here is the intuition. Adding more layers shouldn't hurt, because an extra layer could always just learn the identity transform, passing its input through unchanged, so accuracy should not decrease as you add layers. However, that is not what happens: in practice, adding more and more layers hurts, because the network never actually learns those identities.

Vanishing gradients are a common problem when you stack many layers: the learning signal, the gradient, becomes extremely weak, the model struggles to learn, and the weight updates shrink toward nothing. The other side of this is exploding gradients, which is less applicable here. A third problem is shattered gradients: as you go deeper into an extremely deep convolutional network, the gradients start to resemble white noise. There's no pattern, and the model isn't really learning anything, because the chain rule only flows backwards through the whole stack and the network never learns the identity transform along the way.

This is where residuals come into play, and it's why the architecture is called ResNet. The solution is to make it easy to learn at least the identity, by keeping information from previous stages in future computation. You have x, the input whose identity you want to keep. You have F(x), the function applied by the weight layers and the ReLU activations: x goes through a weight layer, an activation, another weight layer, and F(x) encompasses that whole transformation. The idea behind residuals is that after F(x) has been computed, you add x back in, so the output of the block is F(x) + x, and that sum becomes the input to the next layer. You've maintained a semblance of what you had before the transformation, and this helps especially when the computation is repeated hundreds of times as the depth of the network increases.

In the 34-layer ResNet, each of these jumps is a residual connection, in this case shown skipping over every two layers. There is another knob here, which is how many layers each skip spans. If the jump comes after three or four layers instead of after every one or two, you have fewer skip connections and fewer of these extra backward computations to account for, so convergence can be quicker, but your results may not be as good, because you're not counteracting the problem as directly. How you tune this depends on the system you're training on and where the model eventually runs.

That was an example of a very long residual network. Adding skip connections makes the identity easier to learn, because you're quite literally adding the previous identity onto the result of a transformation. There is a visualization of the loss surface of a ResNet with and without skip connections: with skip connections the loss surface is much smoother, because the identity is preserved instead of having to be relearned after every transformation. To a question about how that surface is drawn: the loss after every update step is just a number, so what's shown is a low-dimensional projection of the loss surface.

This is probably the most important idea for today, this question of why it matters to be able to learn the identity, because it's sort of a weird thing. Are there any questions or comments about that?

Here's one way to see it. Take a dense neural network, ignoring convolutions for a moment. There is trivially a matrix, the identity matrix with ones along the diagonal, that spits out exactly the vector it took in: multiply a vector x by the identity matrix and you get x back. So if I have a dense network with a whole bunch of layers and I just make each layer's weights the identity matrix, there's no reason I shouldn't be able to make a million-layer network, which sounds absurd, but it should work. In practice, though, we've observed that if you put a million layers on a dense network it just learns garbage; it doesn't work at all. That's strange, because we should trivially be able to add layers if the weights could be chosen so each layer passes its input straight through.

Residual connections make this super easy: if the weights of a block are literally all zeros and the biases are all zeros, the block spits out exactly what it took in. So it becomes really easy for the network to say: we've got enough information at this point, we don't need to learn more complicated features, I can send these weights to zero and ship exactly what I have halfway through the network all the way to the end.
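Here is a minimal sketch of a basic two-layer residual block in PyTorch, matching the F(x) + x picture above. Real ResNet blocks also include batch normalization and a projection when the shapes change; both are omitted here for brevity, and the channel count of 64 is just an example value.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic two-layer residual block: output = F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)      # the skip connection: add the identity back in

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)      # torch.Size([1, 64, 32, 32])
```

If the block's weights and biases go to zero, `out` is zero and the block simply passes `x` through, which is exactly the "easy identity" being described.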
Does that kind of make sense? The point is that it's much easier for a block to just spit out exactly the features it was given. If the network decides about halfway through that it has enough information to make a good classification, it's easy for it to learn that the remaining layers aren't needed, rather than having them confuse the signal.

There was a question about how you know how many layers to put in each block. ResNet just used two; two is a fine choice, and it's something you can tune.

On why gradients vanish: applying the chain rule through a deep network results in a lot of multiplications. We talked in the third lecture about applying the chain rule to deep networks: if you know the partial derivative of every individual step, you multiply them all together to get the derivative of the loss with respect to a given parameter. But if all of those factors are even a little bit smaller than one, then after enough multiplications the product gets sent essentially straight to zero. On the other hand, if the individual factors are a little bigger than one, the product explodes, and that's not helpful either. Both are problematic; we would like to update our weights in a way that is more regular and consistent, so the gradient steps are neither huge nor so small that the weights literally never change.

Yes, exactly: if you're stacking a bunch of sub-one multiplications, you're left with a very small number as you go back through the network. And a tiny gradient doesn't necessarily mean you're close to a minimum, because the loss surface can have plateaus. For parameters early in the network, getting gradients that are either huge or vanishingly small just means those parameters aren't moving, and not moving is not the same thing as being at a minimum.

There are other things that help with vanishing gradients, like batch norm, which is probably a bigger contributor to fixing that problem than residuals are. Residuals are more of a fix for shattered gradients, a related but different problem where the gradients start resembling meaningless noise rather than something informative as you go through backprop. That again comes from the depth of the stacked matrix multiplications. Without some way of preserving the identity, stacking a bunch of layers does not give you better performance, or even equivalent performance; you would think that going from, say, 3 layers to 20 should never reduce accuracy, a small example, but in practice, at scale, it does.
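A tiny numeric illustration of that chain-rule argument, with made-up per-layer factors: if each of 100 layers contributes a derivative factor slightly below or slightly above one, the overall product either vanishes or explodes.

```python
# Toy illustration: multiplying many per-layer derivative factors together,
# as the chain rule does during backprop through a very deep network.
depth = 100

shrink = 0.9   # each layer contributes a factor slightly below 1
grow = 1.1     # each layer contributes a factor slightly above 1

print(shrink ** depth)   # ~2.7e-5  -> the gradient effectively vanishes
print(grow ** depth)     # ~1.4e+4  -> the gradient explodes
```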
There was a really good catch here about dimensions: if the layer is a convolution, the shape of F(x) can differ from the shape of x, so the block is often written as F(x) + W(x), where W is a transformation applied to x, for example a one-by-one convolution, purely to make the dimensions match so the addition works.

To another question about whether this means learned weights get zeroed out as unnecessary: it's more that you maintain the information you had previously, after a feature map has been applied; you're not necessarily zeroing out learned weights. And as Jake said, other ways of normalizing your data as you go through, like batch norm, affect the vanishing-gradient problem more than residuals do. The main point of residuals is that you maintain a semblance of the identity as you go through the network. Any other questions about ResNet? All right, dope.

The next thing to talk about is global average pooling, which is designed to replace the fully connected layers in CNNs; it's also used in ResNet in place of fully connected layers. You generate one feature map for each category of the classification task, average each map down to a single number, and feed those numbers into the softmax layer. The typical dense layers that used to sit at the end don't enforce any correspondence between feature maps and categories, so you're kind of throwing that structure down the drain as you go through the three dense layers. Global average pooling also has no parameters to optimize, which helps with overfitting: if I have a very deep network trained on a certain subset of data and a bunch of dense layers at the end, it's very easy to overfit to the training data I've provided, and this prevents some of that. It comes up again in MobileNets and the EfficientNets we'll mention as well. Are there any questions on the previous topics?
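A small sketch of global average pooling standing in for the dense head, assuming, as in the description above, that the last convolutional layer already produces one map per class; the 10-class, 7-by-7 shapes are example values.

```python
import torch
import torch.nn as nn

feats = torch.randn(1, 10, 7, 7)       # pretend final conv output: one 7x7 map per class (10 classes)

gap = nn.AdaptiveAvgPool2d(1)           # global average pooling: average each map to a single number
logits = gap(feats).flatten(1)          # shape (1, 10): one score per class, no learned parameters
probs = torch.softmax(logits, dim=1)    # feed straight into softmax
print(probs.shape)                      # torch.Size([1, 10])
```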
All right, that was the meat of this lecture, but MobileNets are very cool: they use depthwise convolutions and pointwise convolutions to reduce the number of computations you're doing. As networks get deeper, the number of channels can get large, which leads to a lot of parameters. What if we processed each channel separately and then intelligently combined the results at the end? How can we still retain the information from each channel while reducing the number of computations we need to do? The answer is depthwise separable convolutions.

As a starting point, think of the standard case: with a three-channel input, a single filter spans all three channels and produces one output map. Instead of that, what if we took each channel individually, applied a smaller single-channel filter to it, creating three depthwise outputs, one per channel, and then combined those with a one-by-one-by-three pointwise convolution? We end up with the same output size we would have gotten from the full three-channel filter.

Here is the arithmetic. Say the input is 12 by 12 by 3 and we convolve it with a 5 by 5 by 3 filter, producing an 8 by 8 by 1 output per filter. Each output value costs 5 times 5 times 3, which is 75, multiplications, and there are 8 times 8, which is 64, output positions, so that's 75 times 64 per filter; with 256 filters, the whole layer costs 75 times 64 times 256, roughly 1.2 million multiplications. Now do it the other way. In the depthwise step, each channel gets its own 5 by 5 by 1 filter: 25 multiplications per position, times 64 positions, times 3 channels. In the pointwise step, each output value comes from a 1 by 1 by 3 filter: 3 multiplications per position, so 3 times 64, which is 192, per output channel, and that gets multiplied by the 256 output channels and added to the depthwise cost. The total is far smaller than the standard convolution, and this matters most exactly when you're stacking a lot of filters and a lot of layers and want these computations done efficiently.

So the picture is: you apply a single-channel filter to each channel of the input, concatenate the results, and then apply a small pointwise convolution, one by one by the number of channels, to end up with a result of the shape you were expecting, while greatly reducing the number of computations. In terms of parameters, a normal layer needs, say, a 3 by 3 by 3 block of weights for each of its 8 output channels, so 3 by 3 by 3 by 8; the separable version needs one 3 by 3 filter per input channel plus a 1 by 1 by 3 pointwise filter for each of the 8 output channels.

This is the same kind of size chart we had before, but applied to MobileNet, and thinking about it in terms of these simpler layers helps you visualize it. MobileNet has a lot fewer parameters, which gives much faster convergence, and it matches Inception-v3 accuracy just by using depthwise and pointwise convolutions and combining them. Instead of one step that generates one map per filter and multiplies that out across all the filters, you have two cheaper steps: one step applied to every channel separately, and a second step with a different-sized convolution that does the mixing across channels. Two steps combined, and the complexity drops quite a bit.

We can also quickly go over squeeze-and-excite networks. Basically, you squeeze, pass the result through a couple of dense layers, and then rescale. We talked about global average pooling and how you get one number per channel: you compress that vector through a fully connected layer, pass it through a ReLU, expand it back to the original number of channels with another fully connected layer, and then rescale each channel of the layer's output by the corresponding value. Rescaling according to the layer output is not very computationally intensive, and the slides show it in a compact, visual way.

The main takeaways are that you understand the rationale and that you see we're adding all of these different tools to your tool belt: residual connections, depthwise and pointwise convolutions, one-by-one convolutions. They're different little pieces you can swap in for a standard convolution if you're building CNNs. Understanding how these optimizations work, and which problems they address and which they don't, really sets the stage for later networks like EfficientNet: there, you can read the block straight off what we've already learned, a one-by-one convolution, then a depthwise convolution where a filter is applied to each channel individually, a squeeze-and-excite recombination, and another one-by-one convolution to restore the dimensionality. Really, all these architectures are just a bunch of building blocks put together. There is a graph comparing EfficientNet to other networks on latency and accuracy.

These are the things these models have wanted to optimize over time: accuracy, performance, and model size. Model size involves a trade-off, because if the model gets too big you lose out on other metrics; performance is what depthwise convolutions and MobileNets target, for edge computing and settings like that,
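Here is a sketch of that same comparison in PyTorch, using `groups=in_channels` for the depthwise step. The shapes (3 input channels, 256 output channels, 5-by-5 kernels, 12-by-12 input) mirror the worked example above, and the parameter counts printed at the end show the reduction.

```python
import torch
import torch.nn as nn

in_ch, out_ch = 3, 256

# Standard convolution: every 5x5x3 filter spans all input channels.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=5)

# Depthwise separable: one 5x5 filter per input channel (groups=in_ch),
# then a 1x1 "pointwise" convolution to mix the channels back together.
depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=5, groups=in_ch)
pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

x = torch.randn(1, in_ch, 12, 12)
print(standard(x).shape)                     # torch.Size([1, 256, 8, 8])
print(pointwise(depthwise(x)).shape)         # torch.Size([1, 256, 8, 8]) -- same output shape

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard))                       # 19456 weights and biases
print(count(depthwise) + count(pointwise))   # 1102 -- far fewer parameters
```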
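Looping back to the squeeze-and-excite idea mentioned above, here is a minimal sketch of such a block. The reduction factor of 4 and the 64-channel input are example values, and the standard formulation also applies a sigmoid before rescaling, which the lecture did not spell out.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Squeeze (global average pool), shrink and re-expand with dense layers, then rescale channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        squeezed = x.mean(dim=(2, 3))                       # one number per channel
        weights = torch.sigmoid(self.fc2(torch.relu(self.fc1(squeezed))))
        return x * weights[:, :, None, None]                # rescale each channel of the input

x = torch.randn(1, 64, 14, 14)
print(SqueezeExcite(64)(x).shape)   # torch.Size([1, 64, 14, 14])
```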
where you want to drastically reduce the number of computations you're doing. That is basically everything for today; thank you all for coming. Oh, and there will also be a quiz, and as for the homework, it's due next Friday. Thank you.
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_1_Intro_to_Machine_Learning.txt
[Pre-lecture setup: several minutes of microphone checks, recording setup, and banter among the course staff while the room fills up.]

All right, I'm going to start here. It's about 7:10; are y'all good to get started? So, welcome, everyone, and thank you for being here. Welcome to CS 198-126, Deep Learning for Computer Vision. I'm Jake, and today I will be your lecturer. We have many other people on course staff, and we'll introduce them in a little bit.
For the time being, thank you all for attending. I appreciate that you are here, and I'm really excited to teach y'all about deep learning and computer vision.

Here's the outline for today: meet our course staff, go over logistics, tell you a little about the course itself, and then we'll get into the meat of today's lecture, which is just a broad-strokes overview of what ML is and how we think about it.

Course staff, more formally: I am Jake, a senior here at Berkeley in computer science, somewhat obvious from the fact that I'm here teaching a computer science class. I'm mostly into computer vision, go figure, as are a lot of people on our course staff. We have Arian, Verona, who is here today, Arvin, Rohan, and Ryan. These are the people you'll see lecturing, as well as on Ed stem answering questions, at homework parties, and so on.

Before we get into logistics, I want to tell you a little bit about what this course is, why we're here, and why I think it's an important class. This is computer vision. The aim of this course is to be basically a boot camp, not only into computer vision but into deep learning in general, for people, like freshmen, who haven't had a chance to interact with the material before. Personally, I can say that when I was a freshman it was really difficult for me to break into ML; it felt like clubs, courses, and research were all off limits to me. This is kind of my attempt to give back and hook people up with the resources they would need to get a really good, positive first experience with ML, and specifically with computer vision. The scope of this course is limited to computer vision, with the intent to get as deep as possible and as close to state of the art as possible, so you can see what the field actually looks like today. And this course is supposed to be fun-forward; I just want to make sure that's emphasized and out there. For a lot of people in this class this is probably the first time you've seen deep learning; a lot of people here are freshmen or sophomores who have taken 61A, Math 54, Math 53, whatever, and want to know what it means for computer science. So I want y'all to know this is supposed to be fun and a good time, and we really, really do hope you have a good first interaction with machine learning here.

Here is some fun stuff that this course will go over. At the top left we have the more basic computer vision tasks, like classification (this is a cat), localization (where is the cat), localizing all kinds of objects, cats, dogs, ducks, and showing which pixels correspond to which. We're going to talk about how we learn from large datasets of just images and how we gain insights from that. At the top right is masked autoencoding, which we'll talk about a little later. The bottom left is 3D vision, which makes me really happy: how do we go from different images of an object to representing it? That one is neural radiance fields, and we'll learn a little more about that later. And at the bottom right, y'all have maybe heard about DALL-E and DALL-E 2; that image is from the MKBHD video on DALL-E 2, text-based image generation. The prompt for it, I believe, was a bowl of soup that is a portal to another world. We'll talk a little bit about generative art at the end. Hopefully that is as intriguing to you as it is to me. So that's a flavor of the kinds of things we will be seeing throughout this course.

Some quick logistics. If you haven't seen it already, this is our website: tinyurl fall22 CV decal. We have an Ed stem, and these slides will be linked on the website; recordings, slides, and assignments will all be posted there, so you can find the slides and the Ed link later. Ed is your way to interact with other students and the course staff. This is a new course and there are going to be a lot of hiccups; we're going to screw up at some point, and Ed is where you let us know, and it's again a good chance for you to interact with the material, ask questions, and talk to fellow students. Quizzes and assignments will be on Gradescope; this is the code, and again you can find it later on the slides. Classes are Tuesday and Thursday evenings here in the Physics Building; y'all are here, so good job, you got that one down. Codes have been sent out, so make sure you use them. The syllabus is on the website, or rather it is the website, so make sure to read it for a little more information. Does anyone have questions before we get started? All right, party on. Feel free to interrupt me at any point or raise your hand and I will try to get to you; this is supposed to be fun and interactive.

Some announcements. The first quiz will be due at the end of next weekend; it'll be released later tonight or tomorrow on Gradescope. The first assignment is in two weeks; don't worry about it, because it won't have anything to do with the first two lectures, so y'all are chilling for a little while here. The first assignment will be short too, just pertaining to the tools we use for deep learning, and it'll be released either later this week or next week.

So, now for the first lecture. Again, this is meant to be an introductory course. Hopefully we get all the way up to state of the art, but for a lot of you this is probably your first time using deep learning, and for many of you your first time with ML, so this first lecture is really just supposed to answer: what is ML, so y'all get a flavor for what we're trying to do.

A little bit about ML. ML is the paradigm of approximating functions from data. Traditionally, if you wanted to do a task, you would have to find a way to code it up yourself, write the function yourself in Python or whatever language you choose. With ML, we're going to use our data to figure out what our function does and what it looks like. It's a change from the traditional programming paradigm. Why would we want to do ML if we can just program things ourselves? Because sometimes it's really, really hard to program things yourself, and it's much easier to just learn from the data instead.
We like to be lazy here, and that is the core of ML: how do we let our data guide our functions? So consider the challenge of trying to classify this digit, a seven. If I asked you to write a program in Python to classify that digit, you would probably struggle. It is really, really hard to take in an image and figure out what digit it is; I don't even know where we would start if we tried to program that by hand. It's much easier to find a way, with ML, to separate sevens from other values. That's the flavor of the things we're trying to tackle here: how do we figure out what kind of function could separate this for us, rather than trying to program one by hand?

Let's narrow in a little on what ML is. It comes down to template creation at the end of the day. Rather than specifying a function fully by hand, with ML we define the structure of the function but leave a few parameters in it, and we learn those parameters from the data; they determine exactly how the function behaves. For now, don't think too much about how we would go about learning these parameters; I just want you to understand what ML looks like. It could be something very simple, like the function on the right: it takes an input and our parameter a, and if the input is greater than a it returns true, otherwise it returns false. Here a is a parameter that we will learn, and it determines the way the function behaves. So that's the idea: we have a template of a function, and we will fill in the parameter values from our data.

Think of a one-dimensional line with red data points and blue data points. Our parameter is still a: if the input is less than a, we output red; otherwise we output blue. Again, this function is just a hypothesis, a template, and we need to find a value of a that works well. We might suppose that the value one works pretty well, and in this case it does: looking at the data, if the input is less than one, odds are it is red, and otherwise it is blue. We hypothesized what our function looks like and tried to find a good value of our one parameter that fits and spits out values we like.

However, if we change it very slightly and just flip the inequality, so we return red when the input is greater than a and blue otherwise, there might be no value of the parameter that works well. And this is the important part of ML: the kind of function template you create really does determine how well your function, your model, can do. It doesn't matter how cleverly you select the value of a; you can't get better than about 50 percent accuracy here, simply because the model, the function template, is a bad fit for this data.
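As a toy sketch of this "template with one learnable parameter" idea: the data points and the brute-force search below are made up purely for illustration, and real ML would use a proper loss and optimizer rather than scanning candidate values.

```python
def classify(x, a):
    # The "template": a fixed structure with one learnable parameter `a`.
    return "red" if x < a else "blue"

# Made-up 1D training data: red points sit below 1, blue points above.
data = [(-0.5, "red"), (0.2, "red"), (0.9, "red"), (1.3, "blue"), (2.0, "blue"), (3.1, "blue")]

# "Learning" here is just searching for the value of `a` that fits the data best.
best_a, best_correct = None, -1
for a in [v / 10 for v in range(-10, 40)]:
    correct = sum(classify(x, a) == label for x, label in data)
    if correct > best_correct:
        best_a, best_correct = a, correct

print(best_a, best_correct)   # e.g. 1.0 with all 6 points classified correctly
```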
Being even more specific, the art of ML is simply figuring out what form the function takes, which is sometimes referred to as the class of model; which parts of the function we leave blank as parameters, the values we're allowed to learn; and how we go about learning values that approximate the best possible choices for that function. We'll talk more about the learning part later, but those are the basic three steps of what ML is. Everything you'll see in this class, and everything you might see in other classes, whether that's CS 182, CS 189, CS 188, or graduate-level classes, is literally just this: what is the template for your function, which values of that function are we allowed to learn, and how do we go about learning those values. It's just template creation.

Does anyone have questions at this point? For people who are new to ML this is probably still slightly nebulous, so feel free to raise your hand and ask; you're expected to still be somewhat in the dark on this. We're going to talk more about specific kinds of models, which parameters are learnable, and how we learn them in the future, but for now this is the broad idea. If there are no questions, I will keep on trucking.

How do we describe all the different kinds of ML? It's useful to categorize things: we have a broad definition of what ML is, and now we want to break it up into the different kinds of tasks we might be doing. In ML we have supervised learning, which includes classification, like figuring out whether that image is a seven or a one or a two or a three, and regression, which you may have seen in stats or somewhere similar. We have unsupervised learning, which is how we learn insights from swaths of just unlabeled data, like raw images; the digit images had labels, we could tell clearly that one was a seven and label it as such, but what happens when we don't have labels and just want to learn from the data in general? That's unsupervised learning, and we'll touch more on it later in this course. And reinforcement learning is something we're not going to talk about at all in this course; it's about how to act in environments we don't fully understand. So that's the taxonomy of ML, so to speak.

Now a little bit of vocab before we keep going. Function and model are terms used interchangeably; again, a model is just a template with some parameters whose best values we try to learn. Parameters, weights, and biases are all interchangeable terms for parameters; weights and biases are terms that specifically arose from deep learning, where weights are things that multiply other values and biases are things that are generally added, but they're all parameters and they're all learned. A hyperparameter is a non-learned parameter: it refers to things like how big the model is, what class of model we chose in the first place, and the training procedure. These are things that aren't learned (how you would go about learning which learning algorithm to use is an insane task); they're just things we set ahead of time, like deciding what we want our function to look like in the first place.
to look like we have loss function cost function risk function uh these will come up a little bit later they're all just interchangeable um you might hear them get tossed around um and we have feature and this just refers to bits of data they're either inputs or representations of our data that we've learned um so you'll see all of these these getting tossed around um does anyone have clarifications on the vocab before we move forward yep we got uh no well you have you have just a like a whole bunch of data so like imagine we just have like a whole bunch of images uh and we don't attach labels so like we just have a data set of just images and we don't have labels and it's like how do we learn from that how do we how can we do interesting things with this data still does that kind of make sense okay whereas supervised learning is when we have the labels attached and our computer can say like ah this is a seven uh not from the image but from a label that we've attached to it yeah what do we got so this case uh in supervised learning in unsupervised yeah sorry sorry um in unsupervised learning yeah that could be a task is say we have a whole bunch of images and we want to like we we don't necessarily like say we want to Cluster them as a frequent one so like we have have just a whole bunch of images and can we find some groupings of them that are like similar um so so yeah and frequently we'll talk a little bit more about this down the line but you can use uh actually no I'm not even gonna sorry uh but does that sort of make sense like things you can do with unsupervised learning yeah there's a lot of other things we'll talk about too with unsupervised learning um but hopefully that that gets you to understand what the the whole the whole deal with unsupervised learning is um anyone else have questions un like vocab and stuff all right party on uh so let's talk a little about the ml pipeline um so the process of actually doing ml um it's pretty straightforward we're going to figure out like what our problem is this is sort of like a project management level figuring out um what our problem is what are we trying to solve um we got to prepare the data get it ready and make sure that it is usable for whatever model we're going to be using down the line and then we pick our model our loss function which is what are we going to be trying to like optimize for um and then train it minimize our our loss function um optimize our model and then that's it um so let's go into this a little bit more defining the problem this is very much just like a project management thing uh but just like what does your data look like uh what are you trying to do and what is your metric for Success un like a project level like what do you what do you want this to do um these are just things you need to figure out hopefully these are somewhat obvious but uh but it's important that you have like an understanding of what you're trying to do when you go into any ml project um repairing your data so you have to collect your data don't take this for granted because like seriously in in the real world it can be really hard to find especially for computer vision like the right kind of data set that you want and if you have really crummy data your model is going to learn really crummy things garbage in equals garbage out uh don't take this for granted we're gonna have hopefully good data for you uh in the in the course we're going to be using like data sets that are widely used in research um but in general for your own 
projects this is like so important uh don't take it for granted the existence of good data good data that is labeled um yeah uh we need to find a way to represent data with numbers all of this all of this all of this is just math at the end of the day our functions are just templates but it's still math um so we need to figure out how to convert everything into numbers if we have text we got to get it into numbers uh image files need to be represented by numbers every single data point it's got to be numbers um you have to select which parts of your data are important which parts of our data do we want to use as input to our function um and we sometimes do things like normalization to make sure that our features are like all about the same quantity um this is you'll see this more down the line it's just something to keep in mind that our features should be like on the same scale uh and frequently we just put everything in vectors we just take all of our important numbers and just string them up into a vector um that's like Mission critical that you understand that too um vectorization of our data just take everything that's important plop it in a vector um yeah yeah and make sure that the order stays the same uh otherwise it's chaos um so representing labels um one hot labeling this is just this is getting a little bit ahead of myself um however it's still like important to know and it's a good chance to think about how we go about representing data one hot labeling uh is the most common labeling scheme so before we had images you know 01 2 three four five six seven eight nine um and how do we like label that because our labels too need to be represented as numbers um and sometimes it's really difficult um if we just put the label as the number four frequently it's really best practice to represent the number four as follows four zeros followed by one followed by as many more zeros as we have labels um having one unique digit output um for all of our potential different outputs that we want our model to be able to recognize um so one hot labeling just to sum it up is simply a vector of zeros with a one in the position corresponding to what kind of data that is so again if we have just a vector if we have for classifying digits you know 0 one two three whatever um our labels in this case for an image with the number three in it um zero indexing would be or the image zero that corresponds to the first spot in our label zero 0 1 as this is the third spot followed by six more zeros one two three four five six seven eight nine 10 there are 10 different outputs or 10 different kinds of data and different kinds of images and we have one spot for each of them if our image is a three we're going to poop a one in that spot um this is something that's useful more down the line um but it's just important to think about so if we're trying to classify if our model isn't sure say it gets something that looks kind of like like maybe like that like is this a seven or is it a two maybe maybe it's ideal that our model says I'm not really sure 50/50 I'm kind of split between whether that's a seven or a two I don't know uh and then what should we do in that case what do we want our model to Output like would we want our model to Output four and a half because that's the middle between seven and two but wait that's not four and a half that's clearly messed up that's not right so rather than that it's probably better that our model just says H you know what I think 5050 probability that's a two and then 50-50 
probability that's a seven you know it just it allows us to be more expressive about what our data is what we're representing um so that's kind of an important point and it's a little bit maybe confusing so does anyone have questions on that um because yeah one hot labeling something we'll see a lot uh this guy yeah used for theit zero oh no it would be okay yeah yeah so if we had the image zero right here that would correspond to just one followed by all zeros yeah sorry maybe that was a little bit confusing is everyone okay with that any questions comments concerns all right um yeah so that's getting ahead of myself but the important Point here really um is that it's important to think about what our data looks like and to make sure that it can be interpreted correctly um representation of data is like again Mission critical um because without that it's Madness um so defining a model and defining a loss function um so we're going to talk a lot about different kinds of models so don't worry about that now just kind of blackbox it for now um so we want to know again again as we talked about before like what is our template uh that we're going to be filling in with parameters um that we're going to be learning um what kind of model do we want to use that that is a function of what kind of problem this is um what do our inputs look like what are our outputs look like um thinking about what kind of model to use requires you to think about again yeah what kind of problem this is what does our data look like um all that kind of stuff um again we will learn more about this so don't worry too much if you if you can't think of an explicit model at the moment um we can also try different models and just see which one works better um that's something we'll we'll talk about more down the line um so yeah that's defining a model defining a function um needs to be done and remember we need to like optimize this we need to figure out like what settings of our parameter are best but in order to figure that out we need to know what we want if we have no concept of like what we're aiming for like what metric we're trying to optimize like we're screwed like we can't even we can't even start um so different models will have different ways to optimize for the parameters but all of them need some kind of goal some kind of metric that we're going to optimize for and this is what our loss function is basically uh once you have the model once you have your data your job is going to be to minimize this loss function and the reason we call it a loss function is it's a value that if it's High uh that generally means our parameters are kind of bad they could be better and if we have low loss that means we have good parameters um it's our it is our our metric for success to have low loss um and you need to make sure that whatever your objective is um corresponds to a loss function that is low so in supervised learning before we were talking about again um like labeled digits if we in this case uh wanted low loss we might want our models outputs which will just be again in one hot format we might want them to match the correct label given an input image so given a three we really want to make sure that our model is spitting out something that looks like this a one hot Vector with a one in the threes position and to do that you can just take like the mean squared error just say like if our model is like you know say say it's doing like8 like 0.2 it's like 80% sure that we have a three and 20% sure that it's a four mean 
squared error if we just look at the correct label which has a one in the three's position and a zero in The Four's position we just take the distance between our correct label and our model's output that is a a single value that we can optimize for just a single number that represents how good our model is doing um if our error um between our labels and our models output is really high uh then you know our loss is really high and we need to be doing a better job and if it's really low we're doing a good job so it's a single value mean squared error um is just an example of a function that's taking the distance between our models outputs and our correct outputs um and it just it's a single value a loss function that scores how we're doing um we'll talk more about that later so if you didn't if you didn't get that that's okay uh but just again the idea that I really want to make sure I'm conveying um is that we want some kind of metric to optimize for does anyone have questions comments concerns yes friend yeah if the L function something you like dve or something you choose model oh yeah you choose it yeah it's it's another like hyperparameter it's something we choose there's all different kinds of loss functions you can choose that make sure that our models output like close really close to our labels um mean square error is just like really easy to write so frequently why it's like brought up first um yeah it's it's something we're going to choose before we go into it um because again we want to make sure we have like an idea of what we're trying to optimize so our loss function is just that metric for what we're trying to optimize um so yeah it's something we're going to choose beforehand um so training this is something again we haven't talked about models yet so we we can't really talk about training procedures um but once we have the data the model the loss function we just have to use some algorithm to figure out which values of the parameter parameters in our function are optimal which ones are giving us good outputs um according to our loss function we'll cover these more later um but the the hope is that you sort of yeah you'll get a you'll get a flavor for this more down the line um but the understanding I want you to take away here is that training is the phase where we're going to just select the good values the parameters um that mean our model outputs um really good values um and then finishing up an ml project um use some testing set that our model has never seen before some set of data so that you can report your results um and and frequently you can use this to see how you stack up to other systems like state-of-the-art systems um so you can use if you have your model and you want to see how well it's doing you can use a common data set like mnist we saw those digits earlier it's from mnist um and you can see how well your model um your training procedure your loss function snack up um and it's the way that we standardize you know our results across ml um and check and make sure that you know we're moving the ball forward everything um so yeah um questions comments concerns before we jump on to another topic all right um so let's talk a little bit about generalization um so we've just seen performance on training and testing data uh how well our model generalizes is simply how well we're doing on the data that we were training on versus how well we're doing on New data um because really at the end of the day if our functions can't or if our models can't do well on data we 
haven't seen before all of this means nothing uh we want to make sure that in the real world on data that we've never seen before we can still do a good job um if we make our model more complex we might find that it's really good at like memorizing the data uh and that's not what we want because then when we see new stuff it's not going to know what to do because all it did was just memorize all of our training data um so you want to make sure that you're not doing well just on the data you've trained on but you want to make sure you're doing well um on data you've never seen before um so with that in mind we're going to talk a little bit about bias varience or just do anyone have questions on on the idea of generalization what we're trying to accomplish all right um so bias variance um this is something that you're going to run into not just in deep learning but everywhere in ml uh2 189 every class you're going to take on ML um when you're talking about things that are learned um there's always this Biance bias variance tradeoff um on one hand we have bias which is just our model tendency towards certain predictions um normally when we have really low complexity models um that aren't super expressive they're going to just generally bias towards certain values um they're not very good at learning small changes in our data set means our models still going to be off by that same amount and our predictions are still going to be off by some amount um and on the other hand we have variant if our model is really good and can just like memorize all the data um we're going to find that if we're too good at matching our data um then we're generally not going to be able to generalize very well uh we're not going to do very well on new data we haven't seen before um and this is generally seen when you have small changes in your data set that result in Wild changes to your model and how it behaves it basically comes down to that we want our model to be able uh to capture the complexities in our data but we still want to make sure that it's robust if our data set changes slightly or if the kinds of things we've seen when we train very slightly um we don't want to just memorize our data we want to make sure we're learning meaningful patterns about it um and this is sort of like a really good example my opinion so the green polinomial um or sign function is where our data came from and we added a little bit of noise to it in glute um and that is is our our data set that we're training on and our model in this case is just a different kind of polinomial um on the top left it's just a polinomial a zero degree polinomial effectively um that's just a constant uh it's super high bias uh it's not matching our data at all it's biasing towards certain values and it doesn't matter whether our data is noisy or not it's still going to do really bad it's just biasing towards the constant value on the top right maybe a little bit better we're now learning a line a step up um still kind of high bias doesn't matter whether our data changes we're still going to predict about that same line um we're still going to buas toward certain values on the bottom left that's sort of the golden Zone you know our model is as complex as our data was uh and we're fitting it pretty well uh if our data becomes slightly more noisy we're still probably going to do pretty well um but you know we're not we're not NE necessarily biasing towards bad values we're doing we're in the goil loock zone and then on the right our model is way more 
complex than our data uh you can see on the edges our our predicted red polinomial is just wilding out um its values are are insane and don't correspond to our data and the reason that happens is because we're trying we're perfectly able to fit to the noise look we're going through those blue points we're not going near them we're going through them uh but that causes everything in between all of our predictions in between just be wild in order to accommodate that noise in the data uh our predictions everything in between everything we haven't seen before is just going to be garbage so that's that sort of that's that sort of tradeoff um and this is this is something you will see often um if your model is way more complex than your data set you might just overfit and not do very well on new data um this is the bias variance trade-off this is pretty much support ml you're going to see this everywhere you go um and hopefully you have a little bit of a flavor for what that looks like at this point in time does anyone have questions because this is like a really important concept questions comons concerns yes friend yeah so the bias again in our top left example uh it's biased towards just one value it's just a constant right like doesn't matter what we put in like we're always going to be off we're always going to be sitting out you know just a constant value we're biased towards certain values um and variance again generally refers to uh like on the last slide uh if we change our data set slightly because we're overfitting so much to this data um we're trying to match it so much in order to minimize our loss um that you know we end up having wild and erratic behavior um for values that we haven't actually you know seen before does that kind of make sense yeah there's like a more mathematical way to do it but I didn't want to do that I don't want to scare yall off uh it's but it basically corresponds to how much your model is able to match the data if your model is able to match the data too much like in the bottom right hand corner we have no error from our from our Blue Points that we're you know training our model on that we're fitting on we have no error but that still means we're not doing a good job right because it's clear that everything that we've never seen before is still just like insanity uh we tried so hard to match the blue data that we ended up shooting ourselves in the foot and our model was crash and on the left hand side you can see it's that sort of goldilock zone between we're we're fitting to the data accurately we're not perfect it's not exactly going through blue points but it's it's getting the gist of it and it's sort of learning the the core you know pattern um in that data does that make a little more sense okay um anyone have more questions comments concerns yes friend yeah uh so the loss function is so say we've chosen um polinomial so say we've chosen like a degree free polinomial and we've chosen a loss function which is like how far is the red line from like the blue dots um our loss function is what's going to tell us what all the coefficients of our polinomial are that are going to define the shape of that polom um picking which polom is referred to like hyperparameter tuning um because the you know um the degree of our polinomial is a hyperparameter it's not something that we uh learn ahead of time uh and you're going to have to it's G to just end up being trial and error to be honest um hyperparameter tuning is is difficult um and it generally involves 
holding out you know a segment of your data um you have your training data and you have your testing data you're going to train a model and you're going to see how well it generalizes and you're going to try and look for a class of model um that is going to a selection of your hyper parameters um that's going to generalize well to your new data does that make a little more sense yeah so you're your loss function corresponds to actually training the model once you have selected what it looks like um yeah yeah so overfitting and underfitting I think I might have accidentally used the words earlier in trying to explain this stuff um but it it just sort of explicitly um says you know thing we care about is generalization um we want to hold out a little bit of our data I realized I jumped the gun there on the last slide uh we want to hold on a little bit of our data to test and see how well we're doing on values we've never seen before um what we really care about well we do care about you know uh the full accuracy to some degree but another thing that we really care about is the difference between the accuracy on the data that we trained on and the accuracy um on data that we've never seen before um because again it's really important that on new data like if we're deploying a system in the real world it's really important that our model is able to generalize because in the real world you're going to be seeing data that you've never seen before that you never trained on and if you can't do well in that data then it's really bad situation um so this is again the balance between bias and variance um if we're overfitting then our model's way too complex we have a lot of variants and we need to scale things down um and find ways to make it more less complex sorry and make sure that we can't just memorize our data blindly like we can't just we can't just intersect with every blue point exactly uh and get like zero loss um we don't want to overfit like that and on the other hand we don't want to underfit we don't want to just get a line um with like a crazy amount of bias uh that's not helpful either um we we we need happy medium it's a trade-off between being able to have a model that's complex enough to capture all of these rich patterns in our data um and having a model that is still robust enough to generalize um and again we're going to have to try a lot of different models we're gonna have to try a lot of different models uh play around with our hyper parameters until we find something that that works pretty well and doesn't have a huge discrepancy between data that we trained our model on and data that we're now testing on that we've never seen before um so wrapping up um does or does anyone have questions before I try and cap things off questions comments concerns yes friend like bias how good you on set and you set um your your bias uh generally models that have really high bias um will have very close accuracies between your training and testing data because it doesn't matter what you feed into the model it's always going to spit out the same thing right um yeah if our model always spits out like you know the value one um for some for some input value x uh it it does it's going to do the exact same thing on training and testing data so it's gonna have about the same error there um and variance is again fitting too much to the the data so that we have like we have our loss here the distance between the red and the blue points is nothing uh and we were able to do that by just memorizing the 
data um so that's sort of that's again the sort of uh idea of of variance and generally when we test on values that are you know in between our blue points and our training set when we test on new points we're going to get huge loss um and that's you know so generally when you see a system that is um there's a huge gap in between our training and testing error or training and testing loss um that's generally a sign of high variance whereas if we're getting really trash accuracy but our model is doing about the same on training and testing data that's generally an indicator of high bias so everyone feel okay with that awesome um so uh wrapping up today's lecture where are we on time oh golden um so the important things I wanted you to take away from this because again like this is meant to be like a really introductory uh course not just to deep learning your computer vision but also to like ml um and I want to make sure yall kind of have a a feel for what we're trying to do here um ml is just template creation you know we just created some python function with some empty value a and we're going to need to use our data to try and fill in that value it just comes down to template creation um and then ml you define the problem you got to prep your data make sure it is it is um expressive enough as we talked about like one hot vectors um we need to make sure that it's clean um that it's usable that we can actually like learn stuff from it um you need to Define what kind of model you're using the loss function all your hyperparameters choose them and then train the model um and I also want to make sure you understand that there's this trade-off and you're going to see this a lot especially in deep learning there's this huge trade-off between the complexity of your model and how well you're able to generalize and this is the bias variance trade-off you're going to see this pretty much straight off the bat as soon as you start training models um so I want to have you I hope you you have somewhat of a flavor um for that idea between complexity and our ability to just memorize our data um versus our ability to you know be robust and and generalize um so yeah does anyone have questions comments [Music] concerns all right um I think that about wraps it up for today thank you all for coming out here I I appreciate your time um I hope you have a good time in this class I will be down here if you want to come down chat it up I can explain things a little bit better maybe yes friend uh it's not but it's really better if you show up Bren I really enjoy it more this is I I should actually before you leave I want to just emphasize here like I'm fully aware that this is a decal this is not your priority and should not be your priority um so this is hopefully hopefully we're giving you the resources you need to learn about machine learning and deep learning um but if you if you know if you don't want to you know put in the work I can't stop you I'm not here to hand out NPS like candy I want youall to pass but like but please show up again like this is my effort to try and like give back and make sure that there's openings for people um to like get into ML and get into deep learning because I think this is really cool and I think y should too um so attendance isn't taken but like show up please thank you friend yeah this
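For concreteness, here is a small sketch (not from the lecture slides) of the polynomial example used above: fit polynomials of a few different degrees to noisy samples of a sine curve, then compare error on the training points against error on held-out points. The degrees, noise level, and sample sizes are arbitrary choices for illustration.

```python
# Sketch of the bias-variance demo described above: fit polynomials of different
# degrees to noisy samples of a sine curve and compare train vs. held-out error.
# Degrees, noise level, and sample counts are arbitrary illustration choices.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(0, 2 * np.pi, size=n)
    y = np.sin(x) + rng.normal(scale=0.2, size=n)  # true sine curve plus noise
    return x, y

x_train, y_train = make_data(15)   # small training set
x_test, y_test = make_data(200)    # "new data we've never seen before"

for degree in [0, 1, 3, 12]:       # underfit ... goldilocks ... overfit
    coeffs = np.polyfit(x_train, y_train, degree)  # "training": pick the best coefficients
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

With the low-degree fits, both errors tend to be high and similar (high bias); with the high-degree fit, the training error collapses while the held-out error blows up (high variance), mirroring the plots discussed in lecture.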
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_17_3D_Vision_Survey_Part_1.txt
yeah definitely interrupt, just ask questions. Yes — so this week, I think you guys already got an announcement, but we will be talking about 3D instead of self-supervised learning; self-supervised learning will be moved to next week. Cool. If you have any questions at any point, just interrupt me and I'll be more than happy to answer them. Yeah, so let's talk about going to three dimensions. So first — why 3D? We've talked a lot about 2D; the entire course so far was talking about 2D computer vision, and that's cool, and it makes a lot of sense — it's like the way our eyes also see: we just kind of guess the 3D structure based on our stereo vision and what we know about objects. And sometimes, because of that, you can draw 3D images like this one by M.C. Escher, and it will kind of mess with your mind, because you're just guessing that it's a 3D structure, but it doesn't make sense, because it's really a flat 2D image. So the reason we need to go to three dimensions, the reason we need to extend, is to avoid this kind of ambiguity and get structure that actually makes sense. The reason we need to do this is because our entire world is three-dimensional, so there are pretty much unlimited applications of three dimensions: metaverse stuff, self-driving — detecting cars, and if you want to drive around them, you need to know the three-dimensional position of those cars — shape optimization, virtual environments, avatar generation, so on and so on. There are limitless applications for three dimensions, and that's why I like it so much. Yeah, so: a graphics primer — how to 3D. That's just: how do you actually represent 3D data? Because we can't just use images. One example would be 2.5D: instead of just storing the pixels, we also store the depth — kind of like how far away the object is. Point clouds, which are pretty easy: you just have a set of 3D points in space, and that becomes a 3D representation. And then if you connect those points with edges and faces, it becomes a mesh — those are used in all video games, you've definitely seen meshes before. Another approach would be voxel grids, which would be like the image equivalent in 3D. And then lastly we'll also talk about radiance fields. And given those kinds of structures, one thing we want to know is: how do we visualize them, how do we render them, how do we see them? That brings us to: what is an image, and what is a camera? One common model that is commonly used to teach this is the pinhole camera model. The pinhole camera model assumes that light is emitted everywhere and goes in all directions, and to actually capture an image, what the camera needs to do is filter out the light rays such that one point in space corresponds to one point in the image, and then we get a sharp image of it. So as you can see on top, if you take the top of the tree, light emanates from it in all directions, all over the barrier, and if we just try to image with the bare film we'll get just a blur — we won't see anything. So we use a pinhole, a tiny hole in the barrier, to filter out the light rays, and then on the film we can get an actual image. That's the basics of how imaging works, and approximately how our eyes work as well, and this is important because it drives the model that we use for rendering 3D objects in space, in simulation.
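As a rough illustration of the pinhole model just described, here is a toy projection of 3D points onto an image plane. The focal length, principal point, and the convention that the camera sits at the origin looking down the +z axis are assumptions made up for this sketch, not values from the slides.

```python
# Toy pinhole projection: map 3D points in camera coordinates to pixel coordinates.
# Focal length and principal point (cx, cy) are made-up illustration values.
import numpy as np

def project(points_xyz, focal=500.0, cx=320.0, cy=240.0):
    """Map (N, 3) points in front of the camera to (N, 2) pixel coordinates."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    u = focal * x / z + cx   # perspective divide: farther points land closer to the center
    v = focal * y / z + cy
    return np.stack([u, v], axis=1)

points = np.array([[0.0, 0.0, 2.0],    # straight ahead
                   [0.5, 0.2, 2.0],
                   [0.5, 0.2, 4.0]])   # same offset but twice as far away
print(project(points))
```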
So how do we simulate this? Trying to simulate all of the light rays would be infeasible — there are infinite light rays in all directions; it would be incredibly expensive and inefficient and just impossible to do. So what we do is we assume that all the light rays that don't make it to the camera just don't exist. That would be forward ray tracing: we take all of the points, or maybe some points in space, and we pass them towards the pinhole — which becomes our camera — and through the image grid, which is just the pixel grid, and that way we can calculate where each point shows up in the image. One detail that's important in simulation is that we no longer have this notion of a film: we don't place it behind the barrier, we place it in front of the barrier, and the actual pinhole is where our camera is now. So we have the camera origin and the ray directions: the pinhole becomes our camera origin — where the camera is — and we have ray directions going from the camera through the image grid. Does this make sense? Wonderful. But one problem with this is: how do you know where to start the rays? Your scene can be very huge, and not all points can make it to the pinhole, to the camera. So we can do the opposite of that — we can do backwards ray tracing. In this model, instead of going from the scene to the camera, we start at the camera, we go through the image grid, through the pixels, and we see where those rays intersect the scene. So this is backwards ray casting, or ray tracing — we'll see why this is useful later. Does this still make sense? One short aside: I'll use the words ray tracing and ray casting interchangeably; they're slightly different things, but you don't need to worry about that. Wonderful. So, 3D data. First, 2.5D, which is also known as RGB-D, and basically all this is: instead of having an RGB image, you add another channel, which is the depth. This can be depth from, let's say, some plane that you defined — kind of like an orthographic projection sort of thing — or, if you're taking an image, your phone can actually capture depth alongside the RGB camera by sending out rays and getting them back, so you know the depth from the camera to the scene. That's kind of boring: every single CNN that you've used so far can just be converted to take four input channels instead of three and work exactly the same way, so nothing interesting here. We'll move on to point clouds. Point clouds are cool. If you're not familiar with sensors like lidar or radar, they output point clouds: what they do is they send out a bunch of rays in all directions, they get them back, and they measure the distance, and then, knowing the direction in which they sent the ray and the distance it traveled, they know where that point is in space. They take all of these samples and reconstruct a lot of points — a point cloud. It gives us a general idea of where the surface is, but it's still sparse: we don't actually know what's in between those points. So to actually reconstruct the surface we need to make some assumptions, and the assumption we typically make is that our point cloud is dense enough that we can just linearly connect the points and create a mesh. This is how it works — it's kind of similar to your typical function interpolation: connect points with lines, and then connect those lines into triangles and you build a mesh. Does this make sense? Perfect. Yeah, so you
cannot quite do it randomly you have to pick points that are close to each other and there's algorithms for triangulation for example Theologian triangulation uh make sure it's kind of like one of the best translation algorithms it clever Olympics sets of points to make triangles between them oftentimes you create the mesh by hand in like blender or something or after desk on your users that way but you can also go from point files to matches some properties of measures is that well first I'll let the graph right you have points they're connected by edges and between those edges you can process kind of just like an extension of a graph into your Space Storage Wise It's All N squared usually and that's because like the surface of an object is in two Dimensions so it's uh and square storage but it of course depends on like specifically the object you have how simple or complicated it is and so on one thing is that measures work well for smooth surfaces but if your surface has a lot of like beeps and valleys and it's like video rugged and high frequency those typically would not be represented by meshes you would need to up the resolution a lot and therefore the storage consumption of your mesh uh one more thing is that they're generally difficult to work with in machine learning settings because we don't have the best techniques to work with uh kind of graphs uh at the moment so typically other representations can be beneficial depending on what need does this make sense wonderful um one go outside is like how do we add color the measures and so if you're familiar with kind of like the globe projection right you've all seen like a map of a globe the common problem is like how do we actually dried do we dry this like the circle but then like some parts get distorted and so on um and so this problem cannot like be perfectly solved but the way we typically do it is the take the polygons on the surface in the unwrapped them and we warp them to like a flat 2D image and so here we have a mapping between like every single polygon in an image to a polygon in a mash and so when we color the mesh we see where it intersects we take the polygon we know where it maps in on the UV map image and we take colors together cool so now we'll move on to voxels so you're familiar with images kind of it's a discrete 2D space and you have pixels so with pixel analog of in 3D would be a voxel or kind of just like Legos so instead of having a 2d gradient 3D grid and it's for this voxels into the pixels you can think of it as like a Rubik's Cube or Legos or maybe Minecraft it was only like monochromatic blocks on it so just a single cover um does this in the previous structure make sense to the ground and so one quick note is that uh you can box the lives in a mesh so meshes typically represent the closed volumes so if you want to convert them into a voxel grid you just place it on the box okay and for each voxel you can see is it inside there is outside of the mesh and we can set it to either exist or not exist and so from this we can kind of that are like that voxel has several properties including like color and maybe existence so does it exist which is one or zero if it doesn't exist in the voxel guide uh another quick aside is the translucent voxels is how do we actually represent objects that are maybe like only sort of see through maybe like stained glass or yellow or something like that and so then we can derive something called density which would be a value between 0 and 1. 
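Here is a minimal sketch of the voxelization idea above — lay a grid over the object and mark each voxel as inside or outside. A sphere stands in for the mesh, since its inside/outside test is just a distance check; the resolution, radius, and the 0.5 "translucent" density are arbitrary.

```python
# Voxelize a simple closed shape into an occupancy / density grid.
# A sphere replaces the mesh so the inside/outside test is a one-liner.
import numpy as np

n = 32                                   # grid is n x n x n -> storage grows as n^3
coords = (np.arange(n) + 0.5) / n        # voxel centers in [0, 1]
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")

center, radius = np.array([0.5, 0.5, 0.5]), 0.3
inside = (x - center[0])**2 + (y - center[1])**2 + (z - center[2])**2 <= radius**2

occupancy = inside.astype(np.float32)    # 1.0 where the voxel exists, 0.0 elsewhere
density = 0.5 * occupancy                # e.g. a translucent object: density between 0 and 1
print(occupancy.shape, int(occupancy.sum()), "occupied voxels out of", n**3)
```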
so instead of just having a binary value for a voxel like if it's they're not we have density like kind of a 0.5 would be we have a translucent voxel here uh and so on yeah so some properties each voxel has a density in a color its storage would be in cubic because it's a greater than 3D space so if an image is n by n is just transferred the boxes and by n by end so it's in Cube which makes it kind of prohibitive to use voxel grids at high resolution whether it's like now we can process images at about 1024 by 1024 pixels we can't really use voxels at 1024 by 1024 by 1024 because that's just too big not very efficient um yeah and one cool thing is we can extend 2D convolutions to the three dimensions so instead of a convolutional kernel being like the two given those are just like it can become a three given position slide across the voxel grid and uh you can convert almost any CNN architecture to be a 3D CNN any questions about voxels wonderful so now how do we actually increase the resolution we've talked about measures which are you mentioned not very good for high frequency details we mentioned box office which also explore cubically with respect to like the resolution how do we do we actually have like a representation that can um represent the high frequency details and so one solution that we came up is is called the radiance field and so Radiance field is just you can think of it as a function that takes XYZ coordinates and it outputs the RGB recover and sigma the density at that point at that point and so f is just a continuous function that exists everywhere or maybe we can constrain it to some like space that we care about um and it's sort of you can think of it as like a continuous voxel plane does this representation make sense there's a currency in this function wonderful and so one thing that happens when we do this is we lose the notion of what a surface is so previously with voxel grids like we had a defined surface we could intersect it with meshes we can easily see like there's a defined surface that we can intersect but with this Radiance field there is no notion of a surface there is nothing to intercept uh and so this kind of raises the question of like how do we actually render it how do you work with it and so the answer is we sample it so when we cast array instead of looking for an intersection we sample the different intervals and then we use those samples uh to create an actual image and we'll talk more about volumetric rendering a bit later it's a scale wonderful and so kind of from all of this what is the best 3D representation well it's hard to say right we have Point clouds but there's bars we don't know what's like actually in between the points uh meshes other but how do we actually take advantage of the fact that we have edges and triangles uh and then voxels are great but they're kind of over parameterizing the in cubic uh they have stuff inside even though we usually just hear about maybe the surface that we work with um and then Radiance fields are great but we can't intersect them they're kind of we have the sample so there's kind of pros and cons to each one of those and it is really important to think about the representation because for example if you think back to the MNS digits the representation matters a lot if we set it to a one hot label versus kind of a scalar between zero and nine it can make your task more difficult or much simpler and so it's really important how we choose it and there's more 3D representations that we did not talk about 
— you could talk about spherical coordinates, maybe some other ones, or maybe some combination of the two, so on and so on. So I definitely encourage you guys to think more about novel ways to represent 3D data. Any questions about 3D data representations? I will move on to rendering now. So, going back to raycasting: we have forward raycasting — remember, going from the scene to the camera — and we have backwards raycasting, which is going from the camera to the scene, and it only approximates things, because we don't have every single point going back. So for forward raycasting — let's take a point cloud as a good example — for every single point in the point cloud we go to the camera origin, to the pinhole. Whenever we render, we have the camera origin and we have the directions of the rays we cast, and then we can calculate where each point intersects the image, in pixel space. One note for backwards raycasting, which is very important, is that we take whatever we see first: when we go from the camera to the scene, whatever we intersect first is what we actually see. So for example, I'm seeing all of you guys' laptops, but I don't see what's actually on the screens, because the first thing my view intersects is the backs of the laptops. So when do we use forward raycasting, and when do we use backwards raycasting? Well, when we want to render point clouds, we want to see every single point in the scene, so we have to use forward raycasting: we start a ray at every single point in the point cloud, we go back to the camera, and we see where we actually want to draw the pixel for that point. Backwards raycasting would fundamentally not work for this, because we would basically almost never hit any single point in space, just because of slight numerical imprecision. Does rendering point clouds make sense? And then, when we want to render, for example, a mesh or a voxel grid, we do the inverse: we start at the camera, we cast a ray, and wherever we intersect first, we take the color at that location, and that becomes the color that we actually render in the image. To do this, for every ray we can calculate ray–triangle intersections, and it's somewhat efficient. Technically you can also use forward raycasting — you can take the points on the mesh and forward raycast them — but I will not talk about forward raycasting for meshes, because it relies on extra assumptions; if your geometry, your mesh, changes, which is very common in video games, you can't really do those tricks, and doing backwards raycasting is just better. And so far we talked about taking the first point, but if your objects are translucent, we have to keep going: we cast from the camera, we see what we intersect first, and if this point is translucent we just keep going. So now we don't just consider the color of the first point; we consider the density of the first point and the color of the first point, and also the density and color of the second point that we hit, and so on, and then we can do some clever weighted averaging, or some other kind of function, to combine those parts. In this diagram you can see that we start at the camera, we go through the image grid, through the frame; we hit the object, it is translucent, so we keep going.
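To make the backwards-raycasting idea concrete, here is a toy renderer that shoots one ray per pixel from the camera through the image grid and keeps the first thing it hits. A single sphere stands in for the scene because its ray intersection is easy to write; the camera parameters and the surface color are made up for illustration.

```python
# Toy backwards raycaster: one ray per pixel, keep the first (nearest) hit.
import numpy as np

H = W = 64
focal = 64.0
center, radius = np.array([0.0, 0.0, 3.0]), 1.0   # a sphere in front of the camera
image = np.zeros((H, W, 3))

for i in range(H):
    for j in range(W):
        d = np.array([(j - W / 2) / focal, (i - H / 2) / focal, 1.0])
        d /= np.linalg.norm(d)                     # ray direction through pixel (i, j)
        # solve |t*d - center|^2 = radius^2 for the nearest positive t (first hit)
        b = -2.0 * d @ center
        disc = b * b - 4.0 * (center @ center - radius ** 2)
        if disc >= 0:
            t = (-b - np.sqrt(disc)) / 2.0
            if t > 0:
                image[i, j] = [1.0, 0.3, 0.3]      # color of the first surface we see
```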
Yes — for a voxel grid it's explicit, we have a density for each voxel; for a mesh you could say that every face also has a density value, and so on. Yes — so you cannot do backwards raycasting for point clouds, because you'll just never hit the points: a point is infinitesimal in space. You can technically hit a point if you get numerically lucky, but you can't really hit an infinitesimal point in space. Yeah, exactly — so most of the time you hit a face. And similarly, you can technically have reflections: if you add some reflectivity, rays can also bounce back. Yes, exactly — and the pixel value can be calculated as something like an average of the first point and the points behind it. Okay. And so we did that, but now we want to go to radiance fields — now we want to go to the continuous case, where we don't have a surface. Radiance fields can be used to represent something like smoke: how do you intersect smoke? That's impossible, because it doesn't really have a surface; it just has variable density at different points. The answer for this is ray sampling: we sample at different places and we aggregate all of these samples to create one pixel value. The way it works is we do a backwards raycast into the scene, we go all the way through the scene, and we sample at different points along the ray. We have, let's say, n samples, and we take each point and pass it through the radiance field — which, remember, is just a function that takes x, y, z and outputs RGB and sigma, the density. So now we have n sets of these colors and densities — how do we actually put them together into one pixel value? The answer is volumetric rendering, and we'll talk more about it next lecture, but today I would like to give you just an overview, an intuition, for what it actually does. We have this equation: C(r), the color for a given ray, is the summation over all of the sample points of the accumulated transmittance at that point, times a term based on the density at that point, times the color at that point. So what does that mean? T_i is the accumulated transmittance, which is like the inverse of how much stuff is in front of that point: for a given point, if there's, say, a wall in front of it, it will have very low accumulated transmittance, because it has a lot of stuff in front of it, so T_i will be really low and that sample won't have much impact on the output you see. The density term is just a function of the density at that point: if the density at some point is very low — it's basically air, or thin smoke — it won't be very visible; if the density at a given point is high — for example, a wall, or the surface of the object — then that sample has high visibility, a high impact on the final color that we see. Does this equation make sense? Do you get the high-level intuition for what each of these terms represents? Yes — in the ideal case you would take an integral over the ray, but because we're limited by compute, we take samples. Yes, exactly — in the continuous case it is an integral; in the realistic case we have to sample along the ray. Yeah, and typically the number of samples is actually relatively high —
it's around like 256 samples uh which does become kind of very slow when we consider like oh we have in images like a thousand pixels by a thousand pixels and then for each pixel we have to do 256 samples so that it does become expensive to run the radiance fields okay wonderful I hope this makes sense and lastly we'll talk a bit about differentiable rendering so uh pretty much everything we talked about for rendering turns out to be differentiable uh but one thing you do have to keep in mind is that it's only differentiable for the stars that we actually see so for mesh the surfaces that we see the we have positive ingredients for those right for voxel rendering the boxes that are actually visible in the rendered image are going to get like how non-zero gradients Radiance Fields you guys maybe think about this but there's also actually solves a problem with it because we don't just we don't have a notion of a surface so realistically every single point that we sample will have some impact on the final color so it will have some output gradient uh yeah and so that's about it you don't need to know like the details of differentiable rendering just now that it works only the things that you see actually for graders do you guys have any questions about the lecture so is the ratings built yeah it has existed for a long time it is it's just a few for your scene right it's instead of explicitlyn uh modeling if it's a voxel Grid or smash we say what if we make it a function and for every XYZ point it has some density in a physical color so that's all the radiance field is right so but it's here yes [Music] so you can explicitly create a function right you can if you make a function for like a sphere you can make it so that like inside the sphere you have them still one and then it has some color right nice you can do it backhand right yes and yeah yeah you can definitely can't engineer it in next lecture we will see how to actually not hand engineer it so we model f as a neural network right and it starts out as kind of just noise and then it learns like where it density is not zero where the colors if I uh and we can we'll show you how to actually like make it learnable awesome if there are no questions uh yeah the lecture is done any announcement session yes so that's the against homework right the Transformers homework will be pushed back one week was not against awesome thank you guys for coming and for watching till Thursday
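As a follow-up to the rendering equation discussed above, here is a small numerical sketch of that sum in the usual NeRF-style discretization: each sample along the ray contributes accumulated transmittance times an alpha derived from its density times its color. The densities, colors, and spacings below are made-up numbers, just to show the bookkeeping.

```python
# NeRF-style volume rendering quadrature along a single ray.
import numpy as np

def composite(sigmas, colors, deltas):
    """sigmas: (n,), colors: (n, 3), deltas: (n,) spacing between samples along the ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)              # how visible each sample is
    # accumulated transmittance: how much of the ray survived everything in front
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    weights = trans * alphas                             # T_i * alpha_i
    return (weights[:, None] * colors).sum(axis=0)       # final pixel color

sigmas = np.array([0.0, 0.1, 5.0, 5.0])                  # empty space, haze, then a dense surface
colors = np.array([[0.0, 0.0, 0.0],
                   [0.2, 0.2, 0.2],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
deltas = np.full(4, 0.25)
print(composite(sigmas, colors, deltas))                 # dominated by the first dense sample
```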
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_3_Intro_to_Deep_Learning_Part_2.txt
oh it's on the website oh it's not linked oh my God I'm so sorry I oh it's at the bottom it's at the bottom yeah um uh come here come here come here uh down at the bottom sorry yeah the slides are the slides have a hyperlink attached to them but not this I apologize um but yeah it's down here it's down here at the bottom yeah it's gonna be a sun grade scope it's just it's not gonna be like uh like difficult or anything it's just gonna be uh um just like did you proof of work effectively um just to make sure that you actually like tried to run a training just submit some kind of uh the log effectively just a screenshot of that um because this first assignment is more for your benefit than anything else um just because it's going to teach you the tools that we're going to use for the rest of the course um okay foreign all right um yeah I guess some quick announcements we have the coding assignment due next Tuesday we got it out a little bit late so I won't be grudgy if you turn it in slightly late the first quiz is also due tonight again same story if it's a little bit late that's not the end of the world um but it should just serve as some kind of a comprehension check just to make sure y'all like paid attention like stayed awake during lecture um yeah uh it should be it should be integrated with feedback shown and if it's ever not like please yell at us um because yeah it's it's not meant to be hard and it's not something you're really graded on it's just completion what was that it was supposed to be due tonight um but I'm not going to be grudgy if you turn it in a little bit late because we got it out kind of late too um so just on completion it corresponds to last week's lectures so uh hopefully it should you should be able to complete it um yeah there are more questions and then we can um all right and let's jump into this um so this is the first half of lecture is going to be on we had some questions about like how how does one actually like calculate these partial derivatives this is just sort of some culture as to how this actually happens like under the hood um if it goes fully over your head and you space out for the first half hour like it literally doesn't matter um so long as you can still use the tools at the end of the day um but this is just sort of to answer like how it's actually done under the hood how we calculate all these partial derivatives of our gradient uh and then the second half is going to be actually important stuff um what different different sort of tools to add to your toolkit for uh modern deep learning stuff um and then if we have time at the end I can show you guys pytorch the tool you're going to be using all throughout this course to actually code up neural networks and and use them um so again just a warning for the first half of lecture we're going to be discussing back prop or the back propagation algorithm uh and if you don't get it if you only catch like the general gist of it like that's totally fine um it's it's all good the only thing is really again just that you understand the high level idea of like what it's doing um and and why it's important uh the second half of the lecture though I really do want to make sure we we spend enough time on um because it's all of the all the things that make modern deep learning work really well um yeah so anyway um the first half so deep pointing review before we get started um this is from last week's lecture uh our neural network is just a function of scalar parameters that are contained in our weight 
matrices and our bias matrices and we can take the gradient just the direction of steepest Ascent or if you go in the opposite direction steepest Descent of our loss function with respect to all of our just scalar parameters um and effectively step our weights and biases in a direction that slowly hopefully decreases the loss of our Network simply just subtracting a small scaled version of the gradient is what we're going to be doing the gradient being evaluated for our current data and our settings of our parameters um so this is going to be kind of the core idea of back prop is just sort of creating a computational graph for our functions so say we have a function C which is a function I'm sorry a function e of two parameters C and D but C and D are in themselves functions of other parameters in this case A and B we can write this all out as a computational graphic appear on the right um where every single function is depicted as a node and all the parameters uh required lead into that node and the operation that little function is written on the Node itself um so we can create this little tree here for functions of functions of many more functions if we want um so this is sort of what it looks like we call this a computational graph it tells us how we can compute e given our starting values Just A and B um and if you want to do calculus so say you want to know the partial derivative of e with respect to something else on this tree you can use the chain rule which simply tells us that if we look at every single path to get from say in the case of this down at the bottom to get from B all the way up to e if we go on every possible unique path and multiply by all the partial derivatives between nodes along the way we can get the partial derivative of any node any final node with respect to like any input node so this is just a way to write out the chain rule um that many of you all learned in like math 53 um it's it's pretty it means that we can just calculate the partial derivatives um with respect to adjacent nodes instead of having to calculate partial derivatives over long distances we can use all these little individual components like the partial e with respect to C and the partial C would affect a b because all these little individual components um to get a larger partial derivative that stands a larger distance on this graph it's just a more um interesting way to think about the chain rule really um so are there uh little questions on that on on the Chain rule for multivariable calculus all right um so that's that's the primer required basically for back propagation um so let's talk about what my propagation is we have like a little example here this is a toy neural network our W's and our B's are just they're not even matrices and vectors it's just single Sailors so we have a neural network with exactly one unit at each layer one bias term to a company that um so our W1 our B1 they're just simply scalar values they're not even matrices not even vectors one neuron per layer and we want to compute the loss with respect to our brothers but the derivative of the belongs to protecting all of our parameters so we need to calculate the derivative of L that node on the right with respect to B3 with respect to W3 E2 W2 E1 W1 but we don't really want to do like a ton of redundant computation like this is this is six different partial derivatives that we need to take and we want to make sure that we're doing this efficiently um because the one more generic case where we have many many 
If we write the chain rule out for all of these, we can see where to save computation, because this is just a computational graph like before, and we can take partial derivatives between adjacent nodes. The derivative of L with respect to w3, our last weight, is simply the partial of L with respect to the node z3 multiplied by the partial of z3 with respect to w3: there is exactly one path on the computational graph from w3 to L, and we multiply the partial derivatives along that path. We do the exact same thing for L with respect to w2 and L with respect to w1, and the deeper we go into the network, the farther back from L toward w1, the more terms we have to multiply out. But a lot of those multiplies are the same. The value boxed in red on the left gets reused as soon as we go to calculate the partial of L with respect to w2, and that entire expression boxed in blue gets reused again when we calculate the partial of L with respect to w1. So there's a whole bunch of redundant computation being done, and it gets worse the earlier in the network we go.

Backpropagation is very simple: as we go and calculate our partial derivatives, rather than recalculating the stuff boxed in blue, we just cache it from when we calculated the partial of L with respect to w2. That is the core idea of backpropagation. Instead of redundant, repeated multiplication, save the intermediate products from the first time you compute them, and work your way from the back of the network to the front.

In general, the W's and b's are matrices and vectors rather than individual scalars, so all of these become derivatives of scalars with respect to matrices, and of matrices with respect to matrices, which is scary, and I don't want you to think about it too much. The point is that our simple scalar multiplications have now become matrix multiplications, which are expensive operations, so the caching becomes even more important: now we're saving matrix multiplies. It's the same idea as before. The big takeaway is that backpropagation is simply a much faster way to get all of the partial derivatives required for our gradient, without a whole bunch of redundant operations: cache things as you go, working from the loss at the back of the network up to the initial weight matrices.
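To make the caching concrete, here is a minimal sketch (not code from the lecture) of backprop on the toy one-unit-per-layer network. The variable names (w1, b1, the sigmoid nonlinearity, the squared-error loss) are my own assumptions for illustration; the running variable `upstream` plays the role of the quantity boxed in red and blue on the slides, which gets reused at each layer instead of being recomputed.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Forward pass: save every intermediate value we will need again.
x, y = 0.5, 1.0
w1, b1, w2, b2, w3, b3 = 0.1, 0.0, -0.3, 0.2, 0.7, 0.0
z1 = w1 * x + b1;  a1 = sigmoid(z1)
z2 = w2 * a1 + b2; a2 = sigmoid(z2)
z3 = w3 * a2 + b3; a3 = sigmoid(z3)
L = (a3 - y) ** 2

# Backward pass: one running "upstream" product, reused at every layer
# instead of rebuilding the whole chain-rule product from scratch.
grads = {}
upstream = 2 * (a3 - y) * a3 * (1 - a3)      # dL/dz3
grads["w3"], grads["b3"] = upstream * a2, upstream
upstream = upstream * w3 * a2 * (1 - a2)     # dL/dz2, reuses dL/dz3
grads["w2"], grads["b2"] = upstream * a1, upstream
upstream = upstream * w2 * a1 * (1 - a1)     # dL/dz1, reuses dL/dz2
grads["w1"], grads["b1"] = upstream * x, upstream
print(grads)
```

Each layer only multiplies a couple of new local terms onto the cached upstream value, which is exactly the savings backprop buys you.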
Any questions, comments, or concerns about this? I know last week there were questions about how we actually go about calculating all of these derivatives for the gradient vector, so hopefully this lets you peer under the hood a little and see how it's working. One question that came up: how do we know which values to cache? As we go backward we keep one running value, the part boxed in red, and just multiply a couple more local terms onto it at each layer. We're going to teach you PyTorch, a tool that does all of this automatically and figures out what it needs to cache to do it efficiently. The hope is simply that you can see the need for caching things as we go backward, and imagine how much worse it gets for even bigger networks. And again, if you're not super familiar with the multivariable chain rule, that is totally okay. It won't affect the way you build neural networks, because we have tools that take care of all of this for free, but the idea of a computational graph, of graphing out how all of your parameters are related and how they interact, is a useful mental tool when you're thinking about how to build systems.

With that, unless there are more questions, let's run through modern deep learning, which is definitely important. All of the things in here are prolific throughout the field, and they make the standard, most basic version of a neural network that we saw work far better. Hopefully they also answer some questions from the previous lecture.

First, optimization: can we do a little better than vanilla gradient descent? Someone asked last week: if gradient descent is descending a hill and there's a little bump in the hill, a small local minimum, how are we going to know there was an even better place we could have landed farther down? With vanilla gradient descent, the answer is that this is definitely a problem. So are flat spots we can't seem to get past, because the gradient there is zero and we have no direction we wish to move. One note on format: I'll keep the equations on the slides so you can go back and look at them later, but if you don't want to look at the equations, that's fine; just make sure you understand the high-level takeaways.

Momentum is one way we can improve gradient descent. We'd hate for these small local minima to trap us, so we take inspiration from an actual ball rolling down a hill: a rolling ball has momentum.
As soon as the ball comes down to the bottom of a dip, it's carrying plenty of momentum, and objects in motion tend to stay in motion, so it has enough energy to climb up the next hill and hopefully get down to a better local minimum. So here is what we do: instead of stepping along just the current gradient, we add onto each update a weighted average of all of our previous gradients. Technically we are no longer doing true gradient descent, because once we've passed the bottom and start climbing the next hill, we're briefly moving in the opposite direction from the true gradient, but that's okay. The hope is that by adding this term to our normal gradient updates we're enabled to climb over little hills and push past big flat spots. The core idea: you continue to move in the direction you have historically been moving.

A question came up about the betas in the update. Those just implement the weighted average, and they keep the effective step from shrinking or blowing up. Say your history of gradients has a magnitude of about one, and your current gradient also has a magnitude of about one. If beta is 0.3, we multiply the history by 0.3 and the current gradient by the remaining 0.7, so the resulting quantity, our effective gradient with momentum, still has a magnitude of about one.
You're right that this makes it a little different from true physical momentum: on a straight downhill slope a real ball's steps would keep getting bigger and bigger, whereas here the betas keep that under control so things don't get way out of hand. We just want to take consistent steps if the hill stays the same. But the momentum effect still applies: if you have been moving in one direction, you will generally continue to move in that direction, so even after you hit a little local minimum you'll keep climbing for a bit. Does that answer your question? Good point, though.

You can see the effect on the loss surfaces at the top of the slide. Both of these graphics are hills we're trying to descend; the darker areas are lower and the lighter areas are higher, basically a topographic map with elevation lines. With vanilla gradient descent we keep jumping across this little canyon and ping-ponging back and forth, back and forth. With momentum, each step leans slightly toward the local minimum, then a little more, then a little more; the direction we've historically been moving, on average, is the direction we continue to move, and you can see that with momentum we reach the local minimum far quicker instead of just oscillating along the true gradient.

Another question: what happens when we reach the actual bottom? Since we're taking the weighted average of all of our gradients, we'll overshoot a little, because our history still points forward. But the more times we sample gradients from the other side, the more the average pulls us back, and the hope is that we slowly settle in. Are there more questions?
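Here's a minimal sketch of the momentum update just described, written against hypothetical dictionaries of parameters and gradients (the names `params`, `grads`, `velocity`, and the hyperparameter values are assumptions for illustration, not the lecture's code):

```python
import numpy as np

def momentum_step(params, grads, velocity, lr=0.01, beta=0.9):
    """One step of gradient descent with momentum.

    velocity holds the running weighted average of past gradients;
    it is updated in place so it carries over to the next step.
    """
    for k in params:
        # Weighted average: keep beta of the history, (1 - beta) of the new gradient.
        velocity[k] = beta * velocity[k] + (1 - beta) * grads[k]
        # Step in the direction we have, on average, been moving.
        params[k] -= lr * velocity[k]
    return params, velocity
```

The betas show up exactly as described: the history and the fresh gradient are blended so the effective step stays about the same size, while the running `velocity` is what carries you over small bumps.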
The next optimization method is a little different; the TL;DR is on the next slide, but let me explain what it is first. It's called RMSProp. Instead of storing a weighted average of our previous gradients, we store a weighted average of the squares of the components of our previous gradients, a moving average of the squared gradient components. The update itself is basically vanilla gradient descent, except that we divide each gradient component by the square root of that running average of squares, which is a little unintuitive, but I'll explain the reasoning in a second. We also add a little epsilon inside the square root before we divide. Epsilon is just a really small arbitrary number to make sure we never divide by zero, which is frequently problematic and will crash the program, and it's small enough that it doesn't really affect the value. That's what we're doing mechanically.

So why would this work? Think of two specific cases. In the first case, our gradients have historically been very small; picture a long flat spot where the gradient components have a history of being tiny. The squares of those tiny components are even tinier, because squaring a small number makes it way smaller, and even after you take the square root the running average is still absolutely tiny. When you divide by a very small number, you boost the gradient, so the step gets much bigger, which is exactly what we want in a flat spot. In the second case, our gradients have a history of being huge, say we're on a steep cliff and about to wildly overshoot the minimum. Square those huge components and they get even bigger, and once we divide by that quantity we get a much more reasonable, scaled-down value that keeps us from wildly overshooting.

So RMSProp is really good at handling cases where the gradients have a history of being extreme in either direction: flat spots where they need a boost, and steep spots where they need to be reined in. Previously, with vanilla gradient descent, getting through a flat spot meant having a really high learning rate, which is then sub-optimal and way too big as soon as you're out of the flat spot; and if your surface has a lot of steep hills, a really low learning rate is nice but takes forever everywhere else. With RMSProp, your exact choice of learning rate, the constant you scale the update by, just isn't as big of a deal. You can think of it as an adaptive learning rate.
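A minimal sketch of the RMSProp update as described, again with hypothetical names and hyperparameters (this is not the lecture's code; a real implementation would just use an optimizer from a library):

```python
import numpy as np

def rmsprop_step(params, grads, sq_avg, lr=0.001, beta=0.9, eps=1e-8):
    """One RMSProp step: divide each gradient component by the square root
    of its running average of squares (eps keeps us from dividing by zero)."""
    for k in params:
        # Running weighted average of the SQUARED gradient components.
        sq_avg[k] = beta * sq_avg[k] + (1 - beta) * grads[k] ** 2
        # Tiny historical gradients -> small denominator -> boosted step.
        # Huge historical gradients -> large denominator -> damped step.
        params[k] -= lr * grads[k] / np.sqrt(sq_avg[k] + eps)
    return params, sq_avg
```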
Which brings us to Adam; someone already jumped the gun and asked about it. You read the slides, didn't you. Adam is basically the best of both worlds: it combines RMSProp and momentum. We keep track of the weighted average of our previous gradient updates, and we also keep track of the running average of the squared gradient components. Just like with momentum, we swap the raw gradient for the momentum-included update, and then we divide that update by the square root of the weighted average of squares, giving us the benefit of both. If the formula looks complicated, no biggie. The thing I want you to take away is that we get to roll through little local minima, just like with momentum, while still accounting for flat spots we want to boost through and steep spots where we want to slow down so we don't wildly overshoot a minimum. Out of the box in PyTorch, when you select an optimizer, just choose Adam and leave the parameters at their defaults; it works well pretty much everywhere and it is the preferred method of gradient descent. Just use it, is the takeaway.

One question on this: the RMSProp-style division doesn't change the direction of your gradient the way momentum does; all it does is rescale it, component by component, whereas momentum will actually update the direction. Very good question.
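Since the lecture's advice is literally "just use Adam with the defaults," here is a small example of what that looks like in PyTorch. The model shape, batch, and learning rate are made-up placeholders; only the optimizer usage pattern is the point.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
# Adam with its default hyperparameters is usually a solid first choice.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 784)            # a fake batch of inputs
y = torch.randint(0, 10, (32,))     # fake labels

optimizer.zero_grad()               # clear old gradients
loss = loss_fn(model(x), y)
loss.backward()                     # backprop fills .grad on every parameter
optimizer.step()                    # Adam update: momentum plus RMSProp-style scaling
```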
Another question: can this cause trouble right at the minimum? Yeah, that's a good point. If you're actually at the minimum and your gradients have a history of being near zero, the division can make things erratic. RMSProp is good for getting you to the local minimum, but once you're there it can definitely start acting erratically, and you might see the loss jump back up and do weird things. The practical fix is model checkpointing: keep track of your loss at each iteration, and every once in a while, say after each pass through the dataset, save your model's parameters and evaluate it on some validation data the model hasn't seen. Then if training later goes erratic and the loss jumps up, you just pick the best saved model. At the end of the day, all that matters is that you have a model that works on new data, so if you have your model's history saved you can simply pick the best checkpoint.

Someone also asked about second-order optimization methods. They can be used, and some people definitely do use them, but it's not super common because Adam works well out of the box. PyTorch can calculate second derivatives for you, but it requires more compute for marginally better steps. There are plenty of other fun tricks for getting better gradient descent like that.

A related question: when do you stop training? Same picture. If the y-axis is the value of our loss and we're tracking it at each iteration, it starts high, it keeps decreasing for a while, and at each of these dots you've saved the model. If at some point it gets down here, starts going crazy, and the loss increases again, you just use the earlier checkpoint instead. So it's important to keep track of your training history, how well the model does on its training data and how well it does on new data it hasn't seen before, so you can figure out which of your checkpoints is actually the best. Does that answer the question? Are there more questions?
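A small sketch of the checkpointing loop just described. The loader names, epoch count, and file name are assumptions for illustration; the pattern (train, evaluate on held-out data, keep the best weights) is the point.

```python
import torch

def train_with_checkpoints(model, optimizer, loss_fn, train_loader, val_loader, epochs=10):
    best_val = float("inf")
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

        # Evaluate on data the model has never trained on.
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader) / len(val_loader)

        # Keep the best model seen so far, even if later epochs go erratic.
        if val_loss < best_val:
            best_val = val_loss
            torch.save(model.state_dict(), "best_model.pt")
```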
So that's the fancy optimization trick that is prolific just about everywhere in deep learning: Adam. Now, moving away from optimization, here's something you can add into the network itself; you can think of it almost as a layer, like the linear layers we had before. Batch norm is something you can do at the end of every linear layer that helps your network train a little better.

In general, when we feed data into a network, it's frequently important to normalize it so that all the values in the input are in about the same range. The reason, for neural networks, is that when we run gradient descent on un-normalized data we get loss surfaces like the one on top of the slide, very elongated, and we get that bouncing, ping-ponging behavior. With normalized data we get the much more regular-looking loss surfaces on the bottom, which are far easier to descend with gradient descent. Normalizing the input features helps the gradients in the first part of the network, but the problem is that halfway through the network we have no way of controlling whether the intermediate features are of wildly different orders of magnitude, and we'd really like the activations at the end of each linear layer to be somewhat normalized too, so that when we calculate our gradients and take all of the partial derivatives, they aren't crazy and erratic.

So, at the end of every single linear layer, for each neuron we look at that neuron's outputs across the entire batch, subtract the mean of those activations, and divide by their standard deviation, so that we effectively normalize every activation across the batch. It's sort of a weird concept. After that, we allow the network to rescale those values as it sees fit, by multiplying by some value gamma and adding a bias beta, both of which are learned parameters the network is allowed to control. Here is what's happening: by default, before gamma and beta are learned, we are normalizing all of our activations. Then, if the network decides during gradient descent that some neuron's output shouldn't be perfectly normalized, that it would be better scaled up a bit, or biased toward a certain value (remember, normalizing the outputs of a layer effectively kills its bias), it can learn to put that back. Note that we haven't actually added any expressivity to the model: the set of outputs it can compute at any given layer is literally no different, and in a sense we've added redundant parameters. The important thing is that the default for your network is now to have super well-behaved, roughly normalized neuron outputs.
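Here's a minimal sketch of the batch norm computation just described, written by hand so you can see the per-neuron mean and standard deviation over the batch and the learned gamma and beta (the sizes and initial values are assumptions; in practice you would just drop in `nn.BatchNorm1d`):

```python
import torch

def batch_norm_forward(z, gamma, beta, eps=1e-5):
    """z: (batch, num_neurons) outputs of one linear layer.
    Normalize each neuron's output across the batch, then let the
    network rescale (gamma) and re-bias (beta) as it sees fit."""
    mean = z.mean(dim=0, keepdim=True)                 # per-neuron mean over the batch
    var = z.var(dim=0, unbiased=False, keepdim=True)   # per-neuron variance over the batch
    z_hat = (z - mean) / torch.sqrt(var + eps)         # normalized activations
    return gamma * z_hat + beta                        # learned rescale and shift

z = torch.randn(32, 64) * 5 + 3                        # badly scaled activations
gamma = torch.ones(64, requires_grad=True)             # learnable, starts as identity scale
beta = torch.zeros(64, requires_grad=True)             # learnable, starts as zero shift
out = batch_norm_forward(z, gamma, beta)               # roughly zero mean, unit variance per neuron
```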
A question on terminology, which is confusing and my bad: in this context, the value a neuron computes, its output, is what we call its activation, and that output is what we're normalizing. We have an entire batch of data, so for each neuron we can compute the average value it produces over the batch and the standard deviation over the batch, and we normalize with those; then the network is allowed to learn to scale that back up and add back a bias if it thinks that's valuable. You're also allowed to apply batch norm after the real activation function, after the ReLU or whatever nonlinearity you like, instead of before it. People go back and forth on which is better; just pick one, it doesn't make a huge difference. So yes: after every single layer, normalize every neuron's output across the batch, and then allow the network to rescale it if it wants, with well-behaved outputs as the default. Again, batch norm is something you can just throw into your network after every layer. It simply gives us nicer gradients, because the loss surfaces look more like the bottom picture when all the values passed into each layer are in about the same range, and it doesn't let us represent anything new; the function doesn't become any more expressive, we've just made the default setting of the model nicer.

Next, ensembling. If you have a model that's topping out, you think this is about the best you can do on validation data and there's nothing left to tune, something you can try, if you have the compute, is training a whole bunch of models: initialize the same architecture a bunch of different times, train each one, and simply average the predictions across all of them, hoping for a sort of wisdom of the group. This doesn't buy you a ton of extra performance and it normally takes a lot of extra compute, because now instead of training one model you're training five or six, but it can give slightly better results if you want to squeeze out a little more. It decreases the variance of your model while increasing the bias a little. You're probably never going to do this yourself, but it's something you should know people do when they have a lot of compute. Any questions about that little weird thing?
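A tiny sketch of ensemble prediction, assuming a list of independently trained copies of the same classifier (names are placeholders, not lecture code):

```python
import torch

def ensemble_predict(models, x):
    """Average class probabilities from several independently trained models,
    the "wisdom of the group" idea."""
    with torch.no_grad():
        probs = [torch.softmax(m(x), dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0)   # shape: (batch, num_classes)
```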
Dropout is used a bit more. Overfitting is a problem, and dropout attacks it like this: before you train on a batch of data, you simply choose a whole bunch of neurons at random and zap them, setting their outputs to zero. Why would that work? During gradient descent, a neuron can never be sure which of the neurons before it will be zeroed out, so it has to learn to use all the features that came before it, in all different combinations. Say you're classifying an image and the last layer's neurons correspond to things like "has legs" or "has red hair"; if some of those drop out, the network has to learn to classify its output from different combinations of information. It has to learn to tell the same thing in multiple ways, because it won't have complete information from the layer before it. This is something you can just do at the end of every single layer. It increases regularization, it increases the bias of your model a little, it makes it harder to overfit, and it makes it more difficult for any one neuron to depend heavily on the output of one specific neuron that came before. It's sort of another way to get the wisdom-of-the-group effect: the network has to learn the same thing multiple ways. Questions or comments on this?

Skip connections are the last thing, and if we have time I'll show a little PyTorch after. Say you have a super deep network, fifty layers, and say that about halfway through, the network already has enough information to accurately classify the input. The problem is that you have half the network still to go, and you'd have to very carefully select the weights and biases of all the layers that come after, so that your perfect information doesn't get obfuscated, confused, or noisier as it keeps going. With the standard model of a neural network, you can only go so deep before you start getting worse and worse performance, and we want to make it really trivial to stack arbitrary amounts of layers with no consequences. Skip connections allow us to do this. Look at what the formula in a block is doing: you take an input, run it through a couple of weight layers, and then simply add the input, the identity, back on. If at some point the network just wants to continue to pass its message forward without adding noise to the information it has, it can simply learn all the weights and biases in the block to be zero; the block then computes the identity function, which is trivially easy to learn. So if you're ready to classify halfway through the network and you have skip connections, all the remaining blocks have to learn is zeros, and your good information gets passed cleanly to the very last layer, which classifies correctly. You build the network by stacking these blocks one after the other, and now you can build arbitrarily deep networks with essentially no consequences. You can just keep adding layers; big networks go brrr. It's that simple with skip connections, and they are used everywhere.
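Here's a minimal sketch of a residual (skip-connection) block that also folds in batch norm and dropout from earlier in the lecture. The sizes, dropout rate, and the choice to use linear layers rather than convolutions are my own assumptions for illustration:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two linear layers whose output is added back onto the input.
    If the block learns weights near zero, it passes its input through
    unchanged, so stacking many of these is nearly consequence-free."""
    def __init__(self, dim, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim),
            nn.BatchNorm1d(dim),     # normalize each neuron's output across the batch
            nn.ReLU(),
            nn.Dropout(p_drop),      # randomly zero some activations during training
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        return torch.relu(self.net(x) + x)   # skip connection: add the identity back

# Stack as many blocks as you like; the skip connections keep it trainable.
deep_net = nn.Sequential(*[ResidualBlock(128) for _ in range(20)])
```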
You'll see these a lot in our computer vision models, and they're used all over the place in natural language processing too. So that's what today was: we wanted you to learn these tools, which are prolific, and which you'll see in all kinds of models. They're things you now have in your toolkit; if you're building a network, you can add them to an existing architecture to try to boost its performance.

I didn't have enough time to get to PyTorch, but it's on the homework, which I think explains it pretty well. If y'all want to head out, that's totally fine; I'll stay up here and go through it for anyone who wants, it's only a couple of slides. The basic idea: backprop is super hard to do by hand, so we want something that computes derivatives for us. What PyTorch does is calculate the derivative evaluated at a certain point. It cannot calculate the symbolic derivative; it doesn't know that the derivative of x squared is 2x, but it can tell you the value of that derivative at a certain input. If you know how to use NumPy, PyTorch is going to be really easy for you, because it basically looks like NumPy with extra bells and whistles: you import torch instead of numpy, you create torch tensors instead of NumPy arrays, and you add them, subtract them, multiply them the same way. But if you create a function that takes in a tensor and returns a value, PyTorch can calculate the value of the gradient at a certain input for you, which is basically everything we need, and it's fantastic. With that, I think we're done; we got through all the slides. I'll stay up here for questions.
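For reference, here is the kind of tiny autograd example the last slides describe, evaluating the derivative of x squared at a specific point (a sketch of the idea, not the lecture's exact demo):

```python
import torch

x = torch.tensor(3.0, requires_grad=True)  # tell PyTorch to track gradients for x
y = x ** 2                                  # the computational graph is built as we compute
y.backward()                                # backprop: fills x.grad with dy/dx at x = 3
print(x.grad)                               # tensor(6.), the derivative 2x evaluated at 3
```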
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_7_Object_Detection.txt
There it is. Okay, recording in progress. Can you guys hear me? I don't know if the mic's really working, but it seems fine. All right, y'all, let me clear this out of the way... there we go. So today we're going to be talking about deep learning for object detection, stepping a little further into the cool stuff we can do with CV. I'll get into what that means in a second.

To get started: we've all seen classification already. Classification problems answer the question "what is in this image?" If you have a photo of a cat, it tells you it's a cat. Next we have classification with localization, which is what we're going to start with before we go into full object detection. Classification with localization answers "what is in this image, and where is it?", specifically what region of the image the object occupies.

Before we jump straight into classification with localization, let's start with landmark detection. So far we've only talked about classification, so let's take a simple classification CNN and add in the localization of one feature, the location of one specific point in an image. Assume you have a classification network, maybe a convolutional neural network, that takes in photos of cat, dog, or something else and outputs a one-hot encoding over three classes: class one is 1 if the image is empty, class two is 1 if it's a cat, class three is 1 if it's a dog. How would we expand this network to pinpoint the exact location of the animal's nose, so that it outputs a little dot right on the nose in the image? This is kind of halfway to localization; it's just landmark detection. To do it we need to add some sort of output to the network. One potential solution is to keep the classification outputs and add two more, x and y, the coordinates of the nose in the image. One thing to note: these don't have to be in pixels; they can be normalized values, so for the x coordinate 0 is the far left of the image and 1 is the far right, and the network outputs some value on that scale. You can all see how training works for this: we pair images with labeled points and train on that.
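To make the "add two outputs" idea concrete, here is a small sketch of a classifier with an extra landmark head. The architecture, layer sizes, and names are all made up for illustration; only the idea of a shared backbone with a class head plus an (x, y) head matters.

```python
import torch
import torch.nn as nn

class CatDogLandmarkNet(nn.Module):
    """Classifier with two extra outputs: the normalized (x, y) of a landmark
    such as the animal's nose."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.class_head = nn.Linear(32, 3)    # empty / cat / dog logits
        self.point_head = nn.Linear(32, 2)    # (x, y) in [0, 1] image coordinates

    def forward(self, img):
        feats = self.backbone(img)
        return self.class_head(feats), torch.sigmoid(self.point_head(feats))
```

Adding a bounding box is the same trick with more outputs, which is where the lecture goes next.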
Now let's take this a step further. We have the point on the cat's nose; does anyone have a guess as to how we could take that network, the one outputting c1, c2, c3, x, and y, and expand it by adding more outputs to produce a bounding box for the cat as a whole? A couple of good suggestions from the class: output the corners of the box, or output a center point plus a size. Both of those totally work. The one I went with in this example is closer to the second: output an x and y coordinate, either of the center or of a corner, plus a width and a height for the box. So we add those outputs to the network, add the corresponding labels to our training data, and now we have a network that can localize a single object in the image.

Let's talk about training it, though. We know how to collect data for this, but what should the loss function be? Take the image on the left: a photo of a cat with the desired bounding box around it. Say our model outputs this blue square instead. How wrong is it? That's harder to quantify than classification, where the loss could just measure how close the output probabilities were to the one-hot label; this time we're comparing two boxes. One popular option is IoU, intersection over union. The idea is that we want to maximize the overlap between the predicted bounding box and the real bounding box, and that corresponds to maximizing the intersection over the union. As you can see in the diagrams: the intersection of the two boxes is the area they share, and the union is all the space taken up by either of them. We compute the area of the intersection and divide by the area of the union. Some intuition for why this number is useful: if the predicted box and the real box are totally separate, the intersection is zero and the union is large, so the value is 0; if they match perfectly, the intersection and the union are exactly the same, so the value is 1. The closer to 1, the better we're doing, so we can use a loss function that trains the network to maximize this intersection over union. Any questions before we go on to detection? Perfect.
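Here is a short, self-contained sketch of the IoU computation described above, using (x1, y1, x2, y2) corner coordinates as an assumed box format:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the overlapping rectangle, if any.
    x1 = max(box_a[0], box_b[0]);  y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]);  y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # 1 / 7, about 0.143
```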
Great. So the next thing is to extend this a little. We can handle one object, but the task becomes a lot more complicated when there are a bunch of different objects in the same image. We don't know how many objects there will be, we don't know whether they overlap, whether they're different types; if you have a bunch of people standing next to each other and a network that detects people, is it going to find them all as different people, or just one giant person blob? There are a lot of considerations, but there are a couple of ways to approach this.

The most basic possible solution, the one you might come up with if you thought about it for a second, is an exhaustive search, also called sliding windows. To detect multiple objects in an image, we can select a bunch of boxes at different positions, sizes, and aspect ratios, and classify each one; if a crop classifies as a good match to a cat, we say that box is a cat. Concretely: draw a box in the corner of the image, crop the image to that box, run it through your classification network, see there's nothing there, then move the box over by some small offset called the stride and do it again, and again, all the way down the image, possibly hundreds of thousands of times depending on the stride. The boxes may not fit the object, so we can also repeat the whole sweep with multiple box sizes and aspect ratios, until eventually some box lands on the person, the classifier says it's a person, and we're happy.

This works, but it has some pretty significant problems, and a couple of you pointed them out: depending on how big your stride is, you keep re-examining nearly the same region over and over, and the whole thing is very inefficient and slow. I'll spell out the trade-off in a second.
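For concreteness, here is a minimal sketch of the sliding-window loop just described. The window size, stride, threshold, and the `classify` callable (assumed to return a label and a confidence for a crop) are hypothetical placeholders:

```python
def sliding_window_detect(image, classify, window=(64, 64), stride=16, threshold=0.9):
    """Brute-force detection: crop every window position, classify the crop,
    and keep the boxes the classifier is confident about.
    image: array of shape (H, W, channels)."""
    h, w = image.shape[:2]
    detections = []
    for top in range(0, h - window[1] + 1, stride):
        for left in range(0, w - window[0] + 1, stride):
            crop = image[top:top + window[1], left:left + window[0]]
            label, conf = classify(crop)
            if conf >= threshold:
                detections.append((left, top, left + window[0], top + window[1], label, conf))
    return detections
```

Counting the iterations of those two loops (and multiplying by every window size and aspect ratio you try) is exactly why this approach gets slow.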
The problems I'd list come down to a trade-off between the stride (and the number of sizes and aspect ratios you try) and quality. If the stride is very low, each window only steps a little, so you run the classifier hundreds of thousands of times, and then hundreds of thousands more for every extra aspect ratio, like a really skinny tall box. That's way too much processing. If you counter that by making the stride larger, or trying fewer aspect ratios, you can skip right over potential boxes, and the boxes you do try won't fit the objects well, because you're only moving in set increments. So you're essentially forced to keep the stride low and waste the compute.

One proposed solution to this is a system called R-CNN. The basic idea: what if we could guess likely bounding boxes and run classification only on those, instead of trying hundreds of thousands of boxes blindly? How do we guess? One approach is to use classical, non-machine-learning algorithms to segment the image and propose a bunch of bounding boxes for objects. You can see the little example on the right: this is the base image, and this is the output of a classical segmentation approach, which essentially picks a point in the image, flood-fills outward over similar colors, and repeats until every spot in the image belongs to some region. The idea is to use those segmentation boundaries to decrease the number of windows we have to consider. The intuition for why this isn't the same as just picking random boxes: edges in an image are likely to be the bounds of a bounding box. In the cat example, the edges of the cat were touching the edges of its bounding box, so if we get even a vague sense of where edges are classically and take boxes around them, we're pretty likely to get good candidates.

In a bit more detail: you run the initial segmentation, and here is the first segmentation of this image with a bunch of sheep. For each segmented region you draw a bounding box around it, crop the image to that box, and run it through the classifier, which says it matches or it doesn't. That alone probably doesn't work for a lot of images, because objects usually aren't a single solid color; each sheep here is actually something like fifteen different segments. So we do multiple runs: segment once, classify all the boxes, then merge similar segmentation components and classify again, merge again and classify again. You can see that merging step going from the leftmost image to the next one, and the next, and on each of these merges we once again run classification on all the boxes. It's still quite a few classification runs, but they're much better guesses; we're not doing the sliding-windows thing of trying way, way too many windows.
A couple of clarifications, because you might be curious. You may be wondering: if the proposed boxes are all these weird sizes, how do we run them through a CNN that can't take an arbitrarily sized image? Usually, in the papers, they just warp them: crop the image to the box and scale the crop to a square, then feed that into the CNN. That's also a little gross, because if you're looking for a pencil, the classification network ends up running on a really stretched version of that pencil, so you can have issues there. Another point someone brought up earlier: this could classify the same object multiple times, because we're considering multiple boxes over the same area, which was also an issue with sliding windows. There is a way to remove this redundancy, called non-max suppression, which I'll get to later in the slides; for now, just know it's an issue we can deal with and it's not really a problem. Here's another example: you can see it going through rounds of classification and finally settling on those green detections over there.

The remaining problem with this is that it's still really slow. Most images are not a single solid color; they have thousands of segmentation regions, depending on how you define your classical algorithm and which one you use. And we have the same trade-off as before, this time in how fine-grained the segmentation is: make the algorithm sharper and less likely to blur everything together, and you do far too many passes; make it coarser and you miss things.
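Here is a rough sketch of the R-CNN-style step of warping each proposed region and classifying it. The `propose_regions` source of boxes, the integer pixel box format, the 224-pixel warp size, and the threshold are all assumptions for illustration, not details from the lecture:

```python
import torch
import torch.nn.functional as F

def classify_proposals(image, proposals, classifier, size=224, threshold=0.8):
    """image: (3, H, W) tensor. proposals: list of integer (x1, y1, x2, y2) boxes
    from some region-proposal method. Each crop is warped to a fixed square and
    classified; confident detections are kept."""
    keep = []
    for (x1, y1, x2, y2) in proposals:
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)           # (1, 3, h, w)
        warped = F.interpolate(crop, size=(size, size),       # stretch to a square
                               mode="bilinear", align_corners=False)
        probs = torch.softmax(classifier(warped), dim=-1)[0]
        conf, label = probs.max(dim=0)
        if conf.item() >= threshold:
            keep.append(((x1, y1, x2, y2), label.item(), conf.item()))
    return keep
```

Note that the full CNN runs once per proposal here, which is exactly the cost the next idea tries to remove.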
So how do we get around doing so many passes? We'd rather not run thousands of classifications just to be told there are a couple of sheep in the image. One hint is that our classification network is re-running on very similar parts of the image many, many times. Think about how a convolutional neural network works, with that filter passing over the image: we're redoing the same convolutional logic on the same regions over and over. One way to make this faster is to run the convolutions over the full image once to create something called a feature map, which removes a lot of that redundancy. This is Fast R-CNN. The idea is that we take the image as a whole, run a CNN over it to create this feature map (I'll explain what that means in a second, but it's essentially extracting key information from the image), and then when we want to classify a proposed box, instead of cropping the image and re-running the full CNN, we just pull the corresponding section out of the feature map and run the classification on that.

So what does that actually mean? We've seen CNNs where a stack of convolutional layers is followed by a fully connected layer that does the classification. The idea here is to chop off that fully connected component and run just the convolutional layers, which output a volume of features at the end. Because it's a CNN, there's local information in that volume: a spot in the image maps to a point in the feature volume, so a little area of the volume holds the key characteristics the CNN picked up from that section of the image. Then we run the classification separately, using slices of this feature volume as input, instead of running on raw pixels every time. I know that's kind of confusing; are there any questions on it? Feel free to ask after as well.
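To make "pull the section from the feature map" concrete, here is a simplified sketch. It crudely divides box coordinates by the backbone's downsampling factor and pools the sliced features to a fixed size; the `backbone`, `head`, downsample factor, and pool size are assumptions, and real implementations use a proper RoI pooling or RoI align operation rather than this integer slicing.

```python
import torch
import torch.nn.functional as F

def fast_rcnn_classify(image, proposals, backbone, head, downsample=16, pool=7):
    """Run the convolutional backbone ONCE, then classify each proposed box by
    slicing the corresponding region of the feature map instead of re-running
    the backbone on a pixel crop."""
    feat = backbone(image.unsqueeze(0))                  # (1, C, H/ds, W/ds), computed once
    results = []
    for (x1, y1, x2, y2) in proposals:
        fx1, fy1 = x1 // downsample, y1 // downsample
        fx2 = max(fx1 + 1, x2 // downsample)
        fy2 = max(fy1 + 1, y2 // downsample)
        region = feat[:, :, fy1:fy2, fx1:fx2]            # slice features, not pixels
        region = F.adaptive_max_pool2d(region, pool)     # fixed-size input for the head
        results.append(head(region.flatten(1)))
    return results
```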
All right, so this leads us into a cooler option, which is YOLO. With YOLO, we take this idea and just run with it: throw out all the classical stuff and replace everything with a single convolutional neural network. Instead of running thousands and thousands of classifications over different proposals, what if we could just take an image, pass it into a CNN, and have that CNN output where the objects are, their bounding boxes, and what they are, all at once? That may seem kind of impossible, like how does it even know what to output, but we'll get into the details; it's really cool. This is the architecture for YOLO: as you can tell, we're not running things a bunch of times, it's literally just a stack of convolutional layers and a little bit at the end for the classification.

So how do we get there? Let's go back to the model we were talking about earlier, the one we used for classification plus localization, which is how we figured out one object's bounds. How can we expand this to detect multiple objects? That's essentially the question. Does anyone have any ideas? It's a hard question. One suggestion from the audience: maybe after you run it the first time, blot out that part of the image and run it again. That's an interesting idea, I hadn't thought about that before; it could actually be something interesting to try, I don't think I've seen it before. Another suggestion: just duplicate the output vector, so we have two copies and can detect two things. That's definitely an option; there's a little nuance to it, but it's a very good idea. And a third: combine those two, so whenever an object shows up, it effectively gets its own separate pass. Yeah, that could work too. Those are all great ideas; we're going to go with something closest to the duplicated-vector suggestion.

The way YOLO works is essentially that, but on steroids. The basic idea is: let's take our image and chop it into a grid. This actually combines all of the suggestions in a sense: we're not running multiple programs, but we are splitting the image up so that things get classified within certain regions while the rest of the network handles the other areas. For each grid tile, we output one of those classification-plus-bounding-box vectors I talked about earlier, so each little grid cell in the image has a corresponding vector in the output. It may not be obvious yet why this works, but you can at least see that, in theory, we can now detect a number of objects equal to the number of grid cells. It also seems to imply that we can only detect objects smaller than a grid cell, but that's definitely not the case, and here's why.

The way YOLO gets around that problem is in how it interprets the x, y, width, and height. In our previous example, those were just the x and y coordinates of a point in the image and the width and height of the bounding box around it. In YOLO, the x and y coordinates are the midpoint of the bounding box, and the width and height are the width and height of that box, but the box does not have to be contained within the grid cell that's currently making the prediction. For example, if we look at this dog down here, this cell could figure out "I'm the center of the dog, I'm the one classifying the dog," so it outputs this point as the center and then a width and height that reach outside its own grid cell and cover the whole dog. One issue with this is that the network is not only being forced to classify things and come up with the bounding boxes; it also somehow has to know which cell is the center.
Because if the center is really close to a grid boundary, multiple cells might all think they're the center, and they'll all output a classification, so we get the same problem we talked about earlier: multiple bounding boxes representing exactly the same object. So now let's actually explain how to deal with that; I alluded to it earlier.

We have this thing called non-max suppression. Oftentimes multiple cells detect the same object: in this example, these two grid cells both think they're the center of the dog's bounding box, so the left cell outputs the pink box and the right cell outputs the yellow box, both roughly the same, both centered near their own cell. They're in conflict. To resolve this, we repeat a simple process that gets rid of the overlaps and picks out the best box. First, we pick the bounding box with the highest confidence. Remember, the network outputs x, y, width, and height, but also the class scores, so maybe one of the boxes is really close to 1 on the dog class, meaning it's very certain this is a dog, whereas the other box is only at, say, 0.8. We obviously want the more certain one, since it's more likely to have an accurate bound. Then we remove all boxes that have a high IoU with that high-confidence box. IoU, intersection over union, is just a measure of how much two boxes overlap, so we pick some threshold: maybe if 75% of the boxes overlap, we keep the one with higher confidence and just ignore the other one's output; it's not relevant, we want the good one. In this example, maybe this box has the higher confidence, so we clear out the other yellow one because it has too much overlap.
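Here is a minimal sketch of non-max suppression, with IoU written out explicitly. The box format, the 0.75 threshold, and the greedy loop are illustrative choices rather than anything from a specific paper, and class-aware NMS would run this once per class.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def non_max_suppression(boxes, scores, iou_thresh=0.75):
    """Keep the most confident box, drop every box that overlaps it too much, repeat."""
    order = np.argsort(scores)[::-1]          # indices sorted by confidence, highest first
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(int(best))
        order = np.array([i for i in order[1:]
                          if iou(boxes[best], boxes[i]) < iou_thresh])
    return keep
```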
All right, so another problem with YOLO: what if we have two objects centered in the same cell? This is maybe a more challenging problem, because if you look back at the YOLO architecture, we have one vector per grid cell, so how can we get multiple classifications out of a single cell? Does anybody have any ideas? No worries, this is a hard question.

The way I would think about it is: imagine you were trying to invent YOLO. The trivial solution, which I'm assuming a lot of you thought of, is: why don't we just copy that vector again, so we have two classification-plus-bounding-box vectors per cell, a class plus an x, y, width, height for one box, and a class plus an x, y, width, height for another. Then we can have two objects in the same cell. But that doesn't really work on its own: wouldn't both of them just output the same thing? If there's a dog there, wouldn't both vectors output the dog's bounding box, so you've just duplicated it? Or if there's a person holding a cell phone and both are centered in the same grid cell, which vector should output the person and which should output the phone? The network doesn't know; it's arbitrary.

So we do output two vectors per cell, but there's an extra idea we take advantage of, which is that different objects have different shapes. If you have a person standing in front of a car, the shape of the person's bounding box is different from the shape of the car's bounding box, and we can use that to resolve the ambiguity and give the network a deterministic way to decide which object goes into which output vector. The idea is called anchor boxes. We create generic bounding boxes, called anchor boxes, before doing any classification: say anchor box 1 is a square and anchor box 2 is a rectangle stretched vertically. We just define these up front. Then, when we're building our dataset and labels, all we have to say is, for example, in this image with a person and a present, the person's bounding box is very similar to anchor box 2 and the present's bounding box is very similar to anchor box 1. So the first output vector for that grid cell corresponds to objects that look like anchor box 1, and the second output vector corresponds to objects that look like anchor box 2.
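A tiny sketch of that assignment step, assuming just two hand-picked anchors; the shapes and numbers here are made up purely for illustration, and (as mentioned a bit later) real systems usually pick the anchors automatically from the training data and compare against many of them.

```python
def shape_iou(wh_a, wh_b):
    """IoU of two boxes compared purely by shape, as if centered on each other."""
    inter = min(wh_a[0], wh_b[0]) * min(wh_a[1], wh_b[1])
    union = wh_a[0] * wh_a[1] + wh_b[0] * wh_b[1] - inter
    return inter / union

anchors = [(1.0, 1.0), (0.5, 1.5)]   # anchor 0: square, anchor 1: tall rectangle

def assign_anchor(gt_width, gt_height):
    """Pick the anchor (and therefore the output-vector slot) whose shape best matches."""
    ious = [shape_iou((gt_width, gt_height), a) for a in anchors]
    return max(range(len(anchors)), key=lambda i: ious[i])

print(assign_anchor(0.9, 1.1))   # roughly square object (a present) -> slot 0
print(assign_anchor(0.4, 1.6))   # tall, thin object (a person)      -> slot 1
```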
And so now we've removed that ambiguity: if we see a present, a present is pretty close to a square, so it gets assigned to the first vector, and so on.

So this is an overview of the architecture as a whole, combining all of these pieces. YOLO is fully convolutional: it's not doing a bunch of separate runs, it does one pass over everything, so it's very fast. That "10 times faster" number is somewhat arbitrary and can be even more depending on the setup, but the point is we don't have to run the classification over and over; we throw the image in once and get one output. And with some extra tricks like anchor boxes and some other fancy things they go over in the paper, you can match R-CNN accuracy. It performed really well, and it was genuinely groundbreaking when it came out, because it showed you can do really powerful stuff using only neural networks: getting rid of all the classical components, getting rid of the sliding windows, and doing everything in one run. So the key idea is that YOLO transformed that whole mess of Fast R-CNN on the right into one little neural network.

We're actually done, and we finished really early; I was trying to go a little fast and went a bit too fast. Do you have any questions, or anything you'd like me to go over in more detail?

Someone asked what happens with anchor boxes when two objects have the same shape. Yeah, that one's a bit harder. The basic anchor-box idea on its own, purely shape-wise, isn't going to help you there. You can also take size into account, so if two things have the same shape but one is really tiny, that still separates them. Beyond that, the way I'd think about it is: if you have two objects that are exactly the same shape and exactly the same size, how visible can they really be if they're overlapping in the image? If I'm holding an Android phone and an iPhone in the exact same spot, you're just not going to see the one in the back. In most cases the things you're trying to detect differ in shape or size, because otherwise how would you even see that both are there, if they occupy the same area? The other thing, which I didn't mention, is that in practice you usually don't pick anchor boxes manually: you can take your training set and run algorithms that look at all of your objects and come up with mathematically optimal anchor boxes, ones that best match your data
and stay reasonably separate from each other. So there are solutions to that as well, but in general I honestly can't think of an example right now where you'd have two same-shaped, same-sized things overlapping, because at that point, what would you even see there? I'm not sure anchor boxes alone would really solve that case anyway. Good question. Anyone else? We've got some time.

Another question: this works for a specific set of classes, but what if you have, say, a million things to classify? Does the output vector need a million entries, or are there more efficient ways? Honestly, that's more of a general classification question than a detection one, so other people could probably answer it better than I can, but yes, if you have an insane number of classes and you're just doing one-hot encoding, that's a lot of space. There is actually a version of YOLO that handles around nine thousand classes, and they propose a hierarchical structure over the labels to deal with exactly that.

I'd also say the YOLO paper is a cool paper to read if you're interested in trying your hand at a more technical paper; I would not say the same for the R-CNN paper, avoid that one, it's pretty unreadable, but the YOLO one is a fun read, so check it out. And if you have questions about anything in it, ask us and we'll try to answer. If no one has any other questions, we're going to head over; you're welcome to leave, to come with us, or to stay a couple of minutes and ask questions. Have a good day, everyone, and thank you.
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_20_Stylizing_Images.txt
Okay, let's get started. Thanks for coming, everyone. We'll keep this pretty chill, and today should be a pretty fun lecture: it's on stylizing images. One brief announcement: we're planning to release the last homework pretty soon, so keep your eye out for that, and we'll send out an Ed post very soon with any updates, as you've probably all heard about the upcoming strike.

So today's lecture is on stylizing images, and what we'll mostly talk about is image-to-image translation: what it is, and some of the techniques for doing it. In particular, I'll go a bit more in depth on one subtopic within image-to-image translation, neural style transfer, then I'll go into a few models that use GAN architectures, and then some other examples of how we might approach image-to-image translation tasks.

So, what is image-to-image translation? The general task is that you take images from a source dataset, from one domain, and transform them into the corresponding image in another domain, which we call the target domain. The idea is to preserve the content of a particular image, but perhaps do something like change it into the style of another image. I'll show you a lot of fun examples soon, but that is one example, and it's what we call style transfer; it's what the original neural style transfer paper is based on. Some other common tasks related to image-to-image translation include image synthesis, changing a black-and-white image into a color image, or restoring a damaged image.

So I'll talk a little bit about neural style transfer. The question is: can we create images based on the style of another image? The answer is yes, and we call this neural style transfer. You can think of it as a particular task within image-to-image translation. Generally, it's an optimization technique used to take two images and blend them together. There are three images involved. The first is the content image, the image whose content you want to keep: in this case a dog, so you want to keep the general form and content of the dog but perhaps change the style, while still keeping its shape and the fact that it's a dog. To produce the new image, which we call the generated image, we also need a style reference image, a separate image that we base the style on.

So how do we actually implement this? The general idea is that you optimize the output image, the generated image, to match the content image in terms of content and the style reference image in terms of style. We generally use convolutional nets to extract the relevant information from both the content and the style image. There are other techniques, but the original neural style transfer paper just took a typical deep, pretrained classification CNN and added some extra loss
functions on top; those loss functions are what do the work of transferring the style from one image to another. A lot of this builds on what we've already talked about, so there isn't much that's really new in the overall architecture, but I'll point out how it differs from a typical CNN.

Again, you have three images: the content image, the style reference image, and the generated image, which is the one being optimized. In this example, the style image is Starry Night, and then there's some other content image, and the network is a CNN like you've seen before. The main idea is one we've emphasized a lot previously: in the first few layers you get low-level features, edges and similar patterns, and as you go deeper you abstract into higher-level features, until by the end you're representing the main objects, as you can start to see here.

Specifically, the original style transfer paper used a particular deep CNN, VGG-19. I won't go too much into it; it's just a very deep classification CNN. The idea is to take the middle layers of the CNN to get both the content and the style representations of the images, since that's where the relevant information lives. You start with the input layer, your first few layers' activations are the low-level features like edges and textures, and you abstract into higher-level features as you go deeper. The goal is for the generated image to match the content and style target representations in those middle layers: we choose particular intermediate layers of the CNN to represent the style and the content, take the intermediate feature maps, and compute the loss from those.

Since we have two parts, content and style, the loss is slightly different for the two. For the content, it's pretty simple: we take the intermediate layer outputs and compute the Euclidean distance between them. The style loss is different and a bit more involved. Unlike the content loss, where you just take the intermediate outputs, compute the Euclidean distance, and backprop on that, the style loss incorporates what we call a Gram matrix, which is basically a bunch of dot products, and you sum the result over the chosen layers.

So what is that, exactly? The question is how we can measure style similarity across the different layers. The answer is that we compute the correlations across the feature maps to capture the style of the image. The dot product, as we've talked about in the past, is a way to measure how similar two vectors are.
The more similar two vectors are, the larger their dot product, so we compute a Gram matrix per layer and end up with a final matrix of these similarities. You take the convolutional feature maps at a layer, which have some depth C, flatten each of the C feature maps, and compute the dot products between every pair of them, which is what's going on here. You end up with a Gram matrix of size C by C, and that matrix captures information about the style of the image.

Once we have the Gram matrices and the Euclidean distance from the content image, we just do our typical backprop: we minimize the loss, say using something like Adam (that's fine, although I think the original paper used a different optimizer), we add the content and style losses together to get the objective for the output image, the one with the content from the content image and the style from the style reference, and we run backprop and the optimizer on that image. Here's an example of what's going on: you have a content image and a style image, and this is the final generated output. You can also see what happens as you iterate: it's a bit small, but this is the original content image, and as the optimization runs it takes on the style of the style image; you can see the lower-level style features appear first and then the more abstract, higher-level style as it progresses. That concept of low-level to high-level features carries through, and it's pretty important when we talk about CNNs. Any questions about this so far?
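To pin the two losses down, here is a minimal PyTorch sketch of the Gram matrix, the content loss, and the style loss for a single layer. The normalization by C·H·W and the squared-error form are common choices rather than the only option, and in practice the style loss is summed over several chosen layers.

```python
import torch

def gram_matrix(features):
    """features: (C, H, W) activations from one chosen layer of the CNN.
    Returns the C x C matrix of dot products between flattened feature maps."""
    C, H, W = features.shape
    flat = features.view(C, H * W)
    return flat @ flat.t() / (C * H * W)        # normalize so layers are comparable

def content_loss(gen_feats, content_feats):
    """Plain squared (Euclidean-style) distance between intermediate activations."""
    return torch.mean((gen_feats - content_feats) ** 2)

def style_loss(gen_feats, style_feats):
    """Compare correlations (Gram matrices) instead of the raw activations."""
    return torch.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)

# the total objective is a weighted sum, and the generated image is what gets optimized:
# total = alpha * content_loss(...) + beta * sum(style_loss(...) over the chosen layers)
```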
Okay, cool. So next I'll talk a little bit more about GANs. A few lectures ago we had an in-depth lecture on the general GAN architecture, and we talked about StyleGAN and CycleGAN, so I'll briefly review those and then get into some models that focus more specifically on this task of image-to-image translation.

The basic GAN has two separate parts. You have the discriminator, which checks what's been generated; that's just a basic classification convolutional neural net, very similar to what we used for neural style transfer. And you have the generator, which generates the new instances; there, the idea is that we basically down-sample and then up-sample. I think that's all I wanted to say as a review of GANs.

We also talked about StyleGAN a few lectures ago. The idea there is that we pre-process the latents before the network uses them in its convolutions, we normalize (that's over here), and we incorporate the style vector over here into the synthesis network. The generator is pretty similar to a traditional GAN, except it makes small adjustments to the style: for the style of each image it makes a small adjustment at each convolution layer. The other thing I wanted to mention, which I've emphasized already, is that it can separate the higher-level and lower-level attributes within the image, so you can change some attributes without changing others; here's a fun example. And just to be clear, the main changes are to the generator, not the discriminator.

Okay, so now another paper and model, called pix2pix. It uses a conditional GAN that learns the mapping from input images to output images: instead of an unconditioned GAN, it conditions on additional information in order to generate the corresponding outputs. Conditional GANs are pretty suitable for tasks like image-to-image translation because we condition on an input image and generate the output image, and you can add extra information, like class labels or data from other modalities (which I'll come back to when we talk about multimodal outputs).

I think that's mostly it for pix2pix, but the key point is that pix2pix is essentially a supervised technique: you need paired images, the input image and the corresponding target it should become. That kind of data is sometimes hard to get, and it's not very flexible. So the question is how we can build on this idea, still using the conditional GAN architecture, but with unpaired data. That's what CycleGAN does, which we talked about last time: it's still GAN-based, but for unpaired image-to-image translation. The idea is that if you have a horse and you generate an image of that horse in the style of a zebra, then if you convert it back, you should expect to get the horse again; that's the cycle-consistency loss CycleGAN uses. This is a lot more flexible: we can train without paired data, translating from one domain to another without a one-to-one mapping. You don't need a picture of a horse and a picture of exactly what that horse should look like as a zebra; you just need a bunch of horses and, separately, a bunch of zebras. So there are more possibilities for the tasks we're interested in; we just need the source and the target data.
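Here is a small sketch of the cycle-consistency term, with made-up generator names `G_h2z` (horse to zebra) and `G_z2h` (zebra to horse). The L1 reconstruction penalty shown here is one common choice; the full CycleGAN objective also includes the usual adversarial losses for both directions.

```python
import torch

def cycle_consistency_loss(real_horse, real_zebra, G_h2z, G_z2h):
    """Translating to the other domain and back should recover the original image."""
    rec_horse = G_z2h(G_h2z(real_horse))    # horse -> zebra -> horse
    rec_zebra = G_h2z(G_z2h(real_zebra))    # zebra -> horse -> zebra
    return (torch.mean(torch.abs(rec_horse - real_horse)) +
            torch.mean(torch.abs(rec_zebra - real_zebra)))
```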
Any questions about these techniques and what they're trying to accomplish before we get into some other approaches? Okay.

So there's something called StarGAN. The idea is: can we handle multiple domains, so instead of one source domain and one target domain, can we incorporate information from multiple domains and train on domains from different datasets? This tackles an issue with the previous models I mentioned: they're pretty ineffective for multi-domain image translation, because you'd basically need to train many separate generators, and they also don't fully utilize the training data available. So the question is how we can take all of this data and use it together. The key idea is that instead of learning a fixed translation, the generator takes as input both the image and the domain information. In the paper they encode the domain as binary or one-hot labels. To be clear, what I mean by domain information is attributes that characterize a set of images: for example, with people's faces as inputs, attributes like blond hair, gender, age, or pale skin. So instead of just learning, say, black-to-blond hair, the generator takes both the image and the domain, meaning the kind of images that share a given attribute.

The other thing I wanted to talk about is multimodal outputs. What does that mean? Previous models are limited in that only a small number of modes are represented in the output; this is what we call mode collapse. CycleGAN and pix2pix, for example, fail to capture the different kinds of scenes you might expect: if the input is a night image, can we create genuinely diverse outputs for it? That was a limitation people noticed. So the idea behind multimodal unsupervised image-to-image translation is: can we produce multiple translations that are, one, pretty diverse, and two, still accurate and faithful to the original input? One example of a model that accomplishes this is BicycleGAN. The idea is that we model a distribution of possible outputs in our conditional GAN, so we can produce outputs that are realistic with respect to the original input and also diverse, which is what the others were lacking. The problem is that mapping from high-dimensional inputs (and images are complex and high-dimensional) to high-dimensional output distributions is difficult. How do we remedy this? We represent the multimodality with low-dimensional latent codes: we try to cover the space of possible outputs with this latent distribution, and the generator of the conditional GAN takes in both the input image and a randomly sampled latent code and produces a randomly sampled output. So instead of just mapping a latent code and an input to an output, like we saw with StyleGAN for example, BicycleGAN also learns an encoder that maps the output back into the latent space.
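Both of these models hinge on feeding the generator extra conditioning information alongside the image: a one-hot domain/attribute vector in StarGAN's case, a sampled latent code in BicycleGAN's. One simple way to do that, sketched below with made-up shapes, is to broadcast the extra vector into constant feature maps and concatenate them onto the input channels; treat this as an illustration of the idea rather than either paper's exact architecture.

```python
import torch

def add_conditioning_channels(image, cond):
    """image: (N, 3, H, W); cond: (N, D) one-hot / attribute / latent vector.
    Broadcasts each entry into a constant map and concatenates it to the image,
    so a single generator can be told which target domain (or mode) to produce."""
    N, _, H, W = image.shape
    maps = cond.view(N, -1, 1, 1).expand(N, cond.shape[1], H, W)
    return torch.cat([image, maps], dim=1)    # (N, 3 + D, H, W)

x = torch.randn(2, 3, 128, 128)
label = torch.tensor([[1., 0., 0., 1.],       # e.g. blond hair + pale skin
                      [0., 1., 1., 0.]])
print(add_conditioning_channels(x, label).shape)   # torch.Size([2, 7, 128, 128])
```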
The idea is that this way you minimize the probability of two different latent codes generating the same kind of output, which helps with the mode collapse issue. Any questions about this so far, or about what it's trying to accomplish?

Okay, one more thing; I wanted to keep today's lecture pretty short so we can just talk casually about some of this. The final idea is that, so far, we've mostly seen architectures based on traditional CNN classification networks and on GANs and conditional GANs. We've also seen Vision Transformers and what they're able to accomplish, but using Vision Transformers for tasks like image-to-image translation isn't very easy; the nice part of a GAN, as you can imagine, is the generator. So the question is whether we can leverage the benefits of Vision Transformers for these tasks. There are some techniques and models that have recently come out that combine the strengths of Vision Transformers with GANs; I forget the exact name, but it's basically Vision Transformers plus GANs for tackling these image-to-image translation tasks. So there are ideas about applying transformer architectures to image generation, though it's not as easy and straightforward as using GANs.

I think that's most of what I wanted to cover today. I thought it would be fun to just look at some of the images these techniques can produce. Are there any questions? I know today is shorter, but we can also stay over and answer any homework questions you have, or just talk, whether it's related to stylizing images or not; happy to chat.
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_5_Intro_to_Computer_Vision.txt
Hey, and welcome to lecture five of the intro to computer vision course. This lecture is going to be on an introduction to CNNs. We already gave this lecture in person, but due to some technical difficulties on our end the video did not record, so this is a re-recording done later, so that students who were not able to show up can still go through it.

This lecture focuses on the most basic sort of ML model that you see in computer vision, and before we motivate what that model looks like, let's ask ourselves what computer vision even is in the first place. At a high level, computer vision is simply the field of AI that deals with anything that involves extracting information from an image. At a lower level, it's a set of tasks that revolve around concepts like classification, detection, segmentation, and so on: anything that requires a machine to have a semantic understanding of what's going on inside an image.

There are multiple forms this can take. The most canonical, or I guess the most famous, task you might have heard of before taking this class is image classification: simply classifying what kind of object is in an image. That's all well and good, and there are many models that can do it, but what if you want to go a step further? Instead of just saying what the object is, maybe you also want to point out its location with, say, a bounding box: we know we're looking at an image of a cat, but instead of just classifying it as a cat, we also point out where this cat is. You can take it a step further again: instead of detecting a single object, you can try to detect multiple objects, because that's usually the case in real-world images. So if you have different kinds of animals in an image, you would try to predict the positions of not only the cat but also the dog and, say, the rubber ducky at the very bottom. And you can go even further: instead of just drawing a box, you can predict their outlines. These are different kinds of CV tasks that we'll dig into over the next couple of weeks, but the main thing they have in common is that all of them require a machine learning model to have a higher-level understanding of what a dog or a cat is. To the machine, an image is just a series of pixels, and how it can interpret meaning from those pixels is what we'll focus on.

Computer vision has its roots in cognitive science and psychology. There was an experiment performed in 1959 by the neurobiologists Hubel and Wiesel, who were experimenting on cats. They probed the visual cortex of a cat, showed it different kinds of patterns, and observed the activations. They noticed that simple patterns, like edges, led to the highest activations, and they were able to work out that the cat's visual system breaks an image down by looking at different kinds of edges first. This led to many developments in computer vision.
Computer scientists took inspiration from this experiment and decided to build feature extractors based on edge information; one common example, which you see in this image, is HOG. After extracting these features, they would train different kinds of classical machine learning models on top of them, and this is how CV was usually done for a very long time. But then deep learning came in. Remember back to lecture four: the difference between shallow learning, or classical ML, and deep learning is that you allow a model not only to learn a mapping from features to an output, but also to learn the feature extractors themselves. In classical methods we hand-programmed these feature extractors, like SIFT, HOG, or DAISY; you don't need to worry about what these specific extractors are, because we won't focus on them; the focus of this course is deep learning, not classical CV. It turns out you can simply replace them with neural networks and learn even better models, and in particular this breakthrough came in 2012, when a computer scientist by the name of Alex Krizhevsky demonstrated that you could train huge neural networks that massively outperform these classical models.

The way you typically use deep learning for computer vision is through a special kind of model called a convolutional neural network. CNNs were first developed by Yann LeCun in the early 1990s; I believe he first tested the approach in 1989, maybe a year or two off, but something in that ballpark. It was a massively successful experiment: he showed that you could use neural networks to recognize handwritten digits in an image. From that point deep learning started to become more widely adopted, and it finally exploded in 2012 when Alex Krizhevsky came up with the model called AlexNet. He demonstrated that with large models, a huge amount of compute, and huge datasets, you could train really high-performing models, and he trained the best model to date in the ImageNet visual recognition challenge. I'll touch on that slightly at the end, but you just need to know that this model was a huge turning point: it kicked off the deep learning revolution and shaped the state of deep learning research as we know it today.

The ImageNet classification challenge is an annual competition, and you can see the massive impact AlexNet had in 2012: the previous approaches had a really high error rate, AlexNet reduced it by a very significant margin, and the error has only kept going down as better and better models have come out. We'll talk about some of the models you see here, like VGG, Inception (GoogLeNet), and ResNet, in the next lecture on advanced architectures, but all of these models have something in common: they're all based on convolutional layers, which is the focus of today's lecture.

Before we dive into these CNN-based models, let's answer a crucial question. We're trying to work with images, but how do we represent images in the first place? What exactly does a computer read when it sees an image?
The question is how we represent these images digitally, and the answer is: simply as matrices. When we talk about an image, it has an associated height and width; you can view those as the number of rows and columns of a matrix, and each value of the matrix represents some brightness, with white being the brightest and black the darkest. This is typically how you represent grayscale images. The brightness value ranges from 0 to 255. Why 255? So that each pixel can be represented with just eight bits: there are 256 possible numbers in that range, which fits in a single byte. So essentially each pixel is represented by a single byte, which is a nice connection if you've taken a computer architecture class.

To recap: a grayscale image is simply a matrix with pixel values in the range 0 to 255. But we don't always work with grayscale images; we typically want color images, since they're more realistic. There are many ways to encode color, in the sense that you can break a color down into different components: you can break it into R, G, and B values, so each color is composed of shades of red, green, and blue, or you can use other color spaces like CIE Lab or YUV. We'll go with RGB because it's the most common color space and pretty much everyone uses it. So each colored pixel can be broken down into three values, which means that for an image with some height and width, each pixel has three values. You can construct a separate matrix for each component and stack them on top of each other: a matrix for red, a matrix for green, a matrix for blue, and composing them gives you the original image back.

This is a pretty important concept, breaking an image down into these matrices. When the matrices are stacked on top of each other, say for an image with a height of 256 and a width of 256, you have three 256-by-256 matrices, so instead of a single matrix you're looking at a 3D array with dimensions 3 by 256 by 256, where the 3 accounts for the three matrices. In deep learning literature we call this generalized, multi-dimensional array a tensor. So, in a sentence: each image can be represented by a tensor, and the 3 at the very beginning refers to the number of channels the tensor has.
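As a quick sanity check of the channels-first tensor idea, here's a tiny NumPy snippet; the specific pixel values are made up.

```python
import numpy as np

# a made-up 256 x 256 RGB image: three stacked 256 x 256 matrices of bytes (0-255)
red   = np.full((256, 256), 200, dtype=np.uint8)
green = np.full((256, 256),  60, dtype=np.uint8)
blue  = np.full((256, 256),  30, dtype=np.uint8)

image = np.stack([red, green, blue], axis=0)   # channels-first tensor: (3, 256, 256)
print(image.shape)        # (3, 256, 256)
print(image[:, 0, 0])     # the three channel values of a single pixel: [200  60  30]
```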
I want to make sure you pause the video here and really understand what's going on in this picture, the concept of color, dimensions, and channels, because it's going to be very important going forward.

Assuming you've done that, we can finally motivate why we want to use convolutional neural networks. Before we get to why a CNN is helpful, let's see why a regular neural network, a multi-layer perceptron like you saw in the previous deep learning lectures, is not. Say you're working with a 200-by-200-by-3 image: a height of 200, a width of 200, and three channels, so probably an RGB image. To pass this image through a dense layer in a fully connected network, you have to convert it to a vector. If you look at where my cursor is pointing, that's exactly what a dense layer does, a matrix multiplication plus a bias addition, so we need to turn the image into a vector that can be multiplied by a weight matrix. One way to do that is to simply flatten the 3D tensor into a single vector, and since the original tensor is 200 by 200 by 3, the resulting vector has 120,000 elements. Now suppose the output of the layer we're feeding this image vector into is supposed to be 10-dimensional. Pause the video and answer the question: how many parameters does this fully connected layer have, given a 120,000-dimensional input and a 10-dimensional output?

Assuming you've given it a go, here's the answer. Since the output is 10-dimensional, the weight matrix has dimensions 10 by 120,000, which means 1.2 million weights. We also add a bias vector to the output, and since the output is 10-dimensional that's 10 more parameters, which is why the layer has 1.2 million and 10 parameters in total. That is way too much, and we're only looking at a 10-dimensional output; 10 isn't even big. Normally you might want, say, a 1000-dimensional output, and then this single layer would have 120 million parameters instead. This is just one layer and we're already at such a huge capacity; you can imagine you'd have trouble training such a big network, and you'd also have to store it at some point. It simply doesn't scale. And as you add more and more layers, which, as we saw in the previous lecture on pretraining, typically lets you learn better and better representations, the parameter count only grows, making this already huge network even bigger. That is not ideal.
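You can verify the count directly; here's a quick sketch using a PyTorch linear layer (the layer is only being used for counting, nothing is trained):

```python
import torch.nn as nn

layer = nn.Linear(200 * 200 * 3, 10)    # flatten a 200x200x3 image, map to 10 outputs
n_params = sum(p.numel() for p in layer.parameters())
print(n_params)                          # 1200010 = 10 * 120000 weights + 10 biases
```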
Another reason this formulation doesn't really work is that you're treating each individual pixel as a separate feature. Recall that each vector that goes into a dense layer, a layer of the form shown over here, is assumed to be a vector of features. If we flatten all the pixels into a vector, we're saying that each pixel can be represented as a separate feature, and for images that doesn't really make sense. I'd encourage you to pause the video and think about why treating each image pixel as a separate feature is a weird thing to do.

Assuming you've given it a go, let's go through some of the reasons. Before we get into why this is a bad idea, think about how we as humans classify objects. Say I show you this image of a swan. The way you recognize it as a swan sitting on water is by breaking the image into parts, looking at each part separately, and then assembling those parts back together: you might focus on its orange beak first, then shift to its long neck, or its body overall, or the fact that it's floating on water, and you use all of these different patches of the image to construct a mental model of what a swan is, and based on that you classify the image as a swan. So what's going on here is that instead of looking at the entire image at once, it might be better to look at different patches or regions of the image and combine the information from those patches at the end. In other words, we want to look at local regions instead of individual pixels. I think this makes even more sense with another example: if you zoom in on this patch over here, you can kind of make out that it contains a sunflower, but if you zoom all the way in and consider a single pixel on a petal, you just see a yellow dot, which gives you no information; you need to look at the neighboring pixels to figure out what's going on. It turns out that in images, neighboring pixels are very strongly correlated with each other and contain information you simply cannot get from a single pixel, and this structure doesn't lend itself to the neural network template we've seen so far, which treats each pixel separately. We don't want that; we want to look at regions of pixels at once. This idea isn't new to deep learning either; it's very popular in classical CV too. The slide here shows an example of a very famous feature extractor, Canny edge detection, which retrieves the edges from an image. If you think about it, you can't extract edges from an image by looking at single pixels at a time; you have to look at regions of pixels to find, say, the orientation of an edge.
So this idea of local regions has more to do with the structure of images themselves than with any particular technique like deep learning. As an aside, that feature extractor, the Canny edge detector, was actually created by Berkeley professor John Canny; if you don't know who he is, I believe he used to be the head of the department a few years ago.

Another thing we might want when building a model that processes images is to extract hierarchical representations. Recall that deep learning is a process of extracting hierarchical representations from an input; we talked about this in the representation learning part of the pretraining lecture, so if that terminology doesn't make sense, I'd encourage you to pause this video, go back and review that lecture, and come back to this one later. What this means is that we want to learn representations that build on each other. We start with the very basics, things like edges, textures, and colors, which you can retrieve from the pixels themselves. Then we learn representations that depend on those basic ones, such as shapes and patterns: you can combine information about edges and textures to form the different kinds of shapes that might be in an image. I think this is clearest in the example on the right: the very bottom layer, layer one, looks for different kinds of edges, and once it has that information it can combine edge information to look for different kinds of shapes in the image. Finally, as you go deeper and deeper into the network, you combine different shapes and patterns into an abstract representation of what, say, a face looks like. The reason we call this representation abstract is that we can't really interpret what it looks like, but your model can, and that's the important part. Again, looking at the example on the right: once you've formed different kinds of patterns in layer two, you can combine parts of them to make out different faces, and your model will then be looking for those in an image.

Let's see what else we might require from a good image-based model. There are two important concepts I want to touch on. One is called translational equivariance. This is the idea that if you take some set of pixels with some representation and you translate the pixels, the representation should translate too. Look at the example down below: we have this image of the number two, and say we pass it through some layer (I don't know exactly what kind of layer, but suppose some layer exists that gives you this representation in the top left). If you translate the two to the right, you would ideally want the representation to move along with it: the semantic meaning of the image doesn't change just because the two shifts position, the image still contains a two, and the representation shouldn't fundamentally change based on position. The other concept we want is something called translational invariance.
Another thing we might desire when building a model that processes images is the ability to extract good representations. Recall that deep learning is a process of extracting hierarchical representations from an input — we talked about this in the representation learning part of the pretraining lecture, so if that terminology doesn't make sense, pause here, review that lecture, and come back to this one later. What this means is that we want to learn representations that build on each other. We start with the basics — information about edges, textures, and colors that you can get almost directly from the pixels — and then learn representations that depend on those basic ones, such as shapes and patterns: you can combine edge and texture information to form the different shapes that might appear in an image. You can see this in the example on the right: the very bottom layer, layer one, looks for different kinds of edges, and once it has that information it combines edges to look for different kinds of shapes. Finally, as you go deeper into the network, you combine different shapes and patterns into an abstract representation of what a face looks like. The reason this representation is abstract is that we can't really interpret it, but the model can, and that's the important part. Again, looking at the example on the right, once layer two has formed different kinds of patterns, you can combine parts of them to make out different faces, and the model will then look for those in an image.

Let's see what else we might require from a good image model. There are two important concepts I want to touch on. One is translational equivariance: if you translate the pixels of the input, the representation should translate too. In the example below we have an image of the number two, and say we pass it through some layer — I don't know what kind of layer, just assume one exists — that gives us the representation in the top left. If we translate the two to the right, we would ideally want the representation to move along with it; the semantic meaning of the image doesn't change just because the two shifted position. The other concept is translational invariance: even if we translate the two, the semantic meaning of the image does not change. Say we are training a model to classify the digit in an image: the left image should yield the answer "two", and even after we translate the digit to the right, we should still get "two" out at the end, because the fact that it's the same digit hasn't changed. These properties can't really be built into the traditional dense network architecture, which is why we turn to a new kind of model, the CNN, and we'll see soon enough that a CNN can satisfy all of the criteria we just laid out.

Okay, so let's move on to convolutions. CNN is short for convolutional neural network, and the idea of convolutions lies at the base of this new kind of model. We laid out some requirements: instead of processing the entire image at once, the model should process different patches, and we can assemble the processed output of the image by combining the processed output of each patch at the end — in other words, we are forcing the model to pay attention to local regions of the image instead of the whole image at once. We also talked about translational equivariance and invariance: if we see the same patch of pixels in a different position, we should get the same representation out. If we're processing each patch, that means each patch should go through the same layer — a layer with the same weights and biases. This idea is called weight sharing, and we'll see soon enough how it manifests in the CNN.

With these initial ideas in mind, we are ready to define the convolution operation. I'll first go over how the operation is performed before explaining what's intuitively going on. A convolution involves two components: an input image and a weight filter. The weight filter is a tiny matrix that you slide over the image, taking dot products along the way. You line the filter up with the top-left corner of the image, take the element-wise product of the filter with the patch of the image it covers, and sum the results; then you repeat this by sliding the filter across and collecting the outputs. Just to go over the process once more, because this mechanical operation is the main foundation of the entire lecture: we take the weight filter, slide it along the image by a certain amount, and at each step compute the dot product between the entries of the filter and the patch of the input it covers. The reason we call it a dot product is that taking the element-wise product of the filter and an image patch and summing the results is exactly how a dot product between two vectors is computed.
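Here is a minimal from-scratch sketch of that sliding-window dot product in NumPy, just to pin down the mechanics. It assumes a single-channel image, a square kernel, no padding, a stride of one, and no bias term.

```python
import numpy as np

def conv2d_naive(image, kernel):
    """Slide `kernel` over `image` (both 2D arrays) and take a dot product at each position."""
    h, w = image.shape
    k, _ = kernel.shape  # assume a square kernel
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * kernel)  # element-wise product, then sum = dot product
    return out
```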
Keep in mind that there is also going to be a bias term involved: after we sum everything together, we tack on a bias at the very end. It isn't present in our examples, but in a real CNN it's something you have to keep in mind. At the moment this process might seem very arbitrary, but it does satisfy our initial ideas: we only look at single patches of the input at a time instead of the entire image, we build up the processed output of the image by assembling the processed output of each patch, and we use the same filter for every patch, so we are sharing weights — it fulfills all of our initial desiderata. And it turns out this process wasn't chosen arbitrarily; it does something very particular, and I'm hoping the next example makes that clear. Say we have the input image on the left and this very particular weight filter in the center, and we want to see what output we get when we convolve the image with this filter. Overlay the filter on the top-left patch of the image: the output is 10 times 1, plus 10 times 1, plus 10 times 1, plus 10 times 0, plus 10 times 0, plus 10 times 0, plus 10 times negative 1, plus 10 times negative 1, plus 10 times negative 1, so everything cancels out and we're left with zero. Now slide the filter one unit over and repeat the process. We sum up the tens under the filter's column of ones, the middle column of the patch is ignored because the filter has a column of zeros there, and the last column contributes nothing because the image is zero there. So we get 10 plus 10 plus 10, which is a 30.
We can repeat this process again and get another 30, and once we slide onto the region that's all zeros, we just get a zero. Repeat this for the entire image, and this is what the convolved output looks like. Let's see what's actually going on here. The convolved output has a bunch of zeros along the left and right edges and a bunch of 30s along the middle. Forget the exact numbers for a minute: the convolved feature has low values on the edges and high values along the middle. Now look at what's happening in the image between the third and fourth columns: that's the transition from a column of tens to a column of zeros, which means there is a very explicit boundary — an edge — and the filter has high activations exactly along that edge and low activations everywhere else. So that's exactly what's going on: this filter is trying to detect vertical edges in the image. It gives high values where the image has a high likelihood of containing an edge and low values where it doesn't. We just turned a very arbitrary-looking convolution into something that returns something semantically meaningful — we can extract edge information with this very carefully designed filter. And edges aren't the only information you can extract: you can define different kinds of filters in very specific ways to extract different kinds of information. We were only looking at vertical edges; you could build a more general edge detector, or you could sharpen an image by sliding a different particular filter over it. This idea of convolutions actually dates back to classical computer vision — it's not something deep learning researchers invented; people have been using convolutions in digital image processing and classical CV for a long time.
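If you want to check the worked example yourself, here is a small sketch that reproduces it with PyTorch's functional convolution. I'm assuming a 6×6 image of tens next to zeros, since the slide's exact size isn't stated here, and the 3×3 vertical-edge filter mirrors the numbers just discussed.

```python
import torch
import torch.nn.functional as F

# A 6x6 image: three columns of 10s next to three columns of 0s (a hard vertical edge).
img = torch.tensor([[10., 10., 10., 0., 0., 0.]]).repeat(6, 1).view(1, 1, 6, 6)

# Vertical-edge filter: a column of 1s, a column of 0s, a column of -1s.
kernel = torch.tensor([[1., 0., -1.],
                       [1., 0., -1.],
                       [1., 0., -1.]]).view(1, 1, 3, 3)

out = F.conv2d(img, kernel)  # no padding, stride 1, no bias
print(out.squeeze())
# Every row of the output is [0., 30., 30., 0.] -- high values exactly where the edge sits.
```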
But this class is about deep learning, so where does the deep learning part come in? We just saw that these filters can extract different kinds of features from an image — these filters are basically feature extractors. Recall what we said in lecture four: deep learning is the process of converting the feature extractor into something that can be learned. So, just like the fully connected dense layers that we know and love, what if we try to learn these filters? Instead of hand-programming an edge-detector filter, we say that this particular filter is just a bunch of parameters and let the network decide what kind of filter it wants to use. If you did go to lecture four, you might have seen images of what learned filters look like, and at first they might not have made any sense. If they still don't, don't worry about it — trying to interpret what's going on inside a CNN can be very challenging, and I would not want you to waste time trying to interpret every single detail. I included the image just to give you an idea of what's really going on inside a CNN: you are learning these different kinds of filters, which are ultimately feature extractors.

Something else to keep in mind is that we have only defined this concept for 2D images — we were treating an image as a matrix. But as we saw earlier, something like an RGB image is really a tensor, because it has three channels. How do we generalize the convolution to an input with multiple channels? Earlier our filter was also a matrix; what if we give each filter a depth as well? Then we can still slide the filter along the height and the width and take dot products as normal, and I think the animation here does a good job of illustrating that point: the highlighted region represents where the filter is in this particular image, and you can still slide it around because the filter has the same depth as the input. That's something you always have to keep in mind: if you're working with 3D inputs, your filter needs to have the same depth as your input.

This single filter and its process of convolution form the convolutional layer: we take in a 3D input, slide a filter along it, and get out something called a convolved feature. It turns out that in the CNN literature this convolved feature map is also called an activation map, because it represents the activation of different kinds of features. Intuitively, a high value in the activation map says there was a feature in the input image that activated this feature extractor and resulted in a high value at the end. Hopefully that gives you some intuition for why the output is called an activation map — don't worry too much about the name, it's just something you might notice in a paper. Now, so far we've only convolved the image with a single filter, which can extract a single feature. What if we want to extract different kinds of features from the input and use information from all of them? We can simply use more filters: with six different filters instead of one, we get six different activation maps, one per feature. Each activation map is simply a matrix, and we're getting six of them — what if we concatenate them, squish them together? What does this remind you of? Recall our discussion of representing images in RGB format: each color could be broken down into an R, G, and B component, we could build matrices for all three components and squish them together, and that gave the image a depth. Similarly, stacking activation maps gives the output a depth, so in a sense this stacked activation map is also going to have a 3D structure — it is a tensor.
Another way to think about this process: you want to extract different kinds of features from this 3D image — 3D because it has multiple channels — and the way you do that is by convolving the image with several different filters. Say you have four different filters, as in the example given here: you take each patch of the image, compute its dot product with all the different filters to get an output for each, and as you slide your patches across the image you keep repeating this for all four filters. So in the end, instead of assembling a single feature into one activation map, you are assembling four features at the same time, which again gives you a 3D output. It's exactly the same process as before; you can just view it as a single convolution with four filters at the same time, building the output that way.

So in a sense we can generalize the convolution operation: instead of saying we take in an image as our input, we can take in any 3D tensor, which has some height and width and a depth given by the number of channels it has. We know that if we're working with a 3D input we also want 3D filters. These filters are usually square, so they have a height and width given by some number K — another name for a filter is a kernel — and they have a depth of C, because that's the depth of the original input. Say you are working with F different filters; then your output is also going to be a 3D tensor, but with a depth of F, because the number of channels in the output is simply dictated by how many filters you used in the first place, and the output also has some height and width given by H′ and W′. We don't know what those numbers are yet — we'll get to them in a few minutes. The main point to take away from this discussion is that the convolution operation is very general: instead of applying only to images, it can be applied to any 3D input. And what this allows you to do is, if we keep this abstraction of an image as just a 3D volume, we can start stacking convolutional layers on top of each other — and this is exactly what leads to deep learning, since a deep neural network is simply a stack of layers that each learn representations. Now, I could almost end the lecture right here, because I think I've gone over all the big ideas you need to start working with CNNs, but I'm also going to go into some implementation details that I think are important to know when you're working on projects.
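As a quick sanity check on those shapes, here is a tiny PyTorch sketch. The specific sizes — a 3-channel 32×32 input and six 3×3 filters — are just numbers I picked to mirror the discussion, not anything from the slides.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)  # a batch with one 3-channel, 32x32 input

# Six filters, each of size 3x3x3 (the filter depth must match the input's 3 channels).
conv = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=3)

out = conv(x)
print(out.shape)  # torch.Size([1, 6, 30, 30]) -- output depth 6 = number of filters
```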
In CNNs there is a concept called the receptive field. The idea is not limited to CNNs — it's actually more prevalent in neurobiology and psychology, and computer scientists essentially borrowed the term; I believe that if you Google "receptive field" you'll probably see more results from the biology literature than from the CS literature. The receptive field is defined for each element of an activation map, and it describes the region of the input that that element is influenced by. Look at this example: we have a 5×5 input, we convolve it with a 3×3 filter, and that gives us a 3×3 activation map — just assume we're working with a depth of one to keep things simple, although the concept applies to deeper tensors as well. Now apply another convolution on top of that activation map to get a second activation map with a single element. That single element is influenced by all nine entries of the first activation map, and each entry of the first activation map is in turn influenced by nine pixels of the input. So the entire 5×5 region influences that single element: if something changed in the original input, it would change the first activation map, which would change the element in the second activation map. We say this element has a receptive field of height 5 and width 5, because that's precisely the region of the original image that it is influenced by. It turns out there are many ways to get the same receptive field — you don't have to stack two different 3×3 convolutions on top of each other; if you apply a single 5×5 convolution to the 5×5 input instead, you also get an activation map whose element has a receptive field of height 5 and width 5.

The receptive field is a pretty important concept because of what happens as we add layers. The receptive field of an element in the first activation map is 3, and in the second it's 5, so as we add more and more layers we increase the receptive field of our network. We don't want the network's receptive field to be too small, but we also don't want it to be too big. Say you build a CNN and the very last activation map in the network ends up with a receptive field that's as large as the input itself — that's not really different from passing the whole input through a single dense layer, because you again have elements in your network that are influenced by the whole image rather than by patches. On the other hand, if the receptive field is too small, the network can miss information in the image. I think this is clearer with the example here: say the region marked by this orange square corresponds to this 5×5 region of pixels in the first activation map, and the smaller blue patch in the top-left corner of the orange box corresponds to this 3×3 region. We convolve layer one with a 3×3 convolution and get layer two, which has a receptive field that is fairly small: if we cut off the network after layer two, each element of that activation map only looks at a region of the image about the size of the blue box. Say our goal is to classify the brand of a car — to do that we must see what the whole car looks like, and the blue box certainly doesn't contain enough information. But if we add a third layer by convolving the second, an element of layer three is influenced by the entire orange box, capturing all the information inside that bounding box, which is much more helpful for the car classification task than looking at activation map two instead. Hopefully this gives you some intuition for why small receptive fields are bad, while a receptive field that covers the whole image is akin to passing the image in as one flat vector. Again, don't worry too much if you don't quite get this — it's a concept to keep in mind but not particularly critical. In fact, there may be a different way to view it: we said that increasing the number of layers increases the receptive field, so adding too many layers makes the receptive field too big, which can lead to overfitting, whereas too few layers makes it too small, so your model isn't looking at big enough patches and might underfit. In essence, the idea of receptive fields leads back to the overfitting and underfitting ideas we saw in an earlier lecture, and it aligns with our intuition that a model that's too big may overfit and one that's too small may underfit. It's not exactly the same thing, but I think it may be a more helpful way to think about what the receptive field is doing in a CNN.
So we saw that two 3×3 convolutions have the same receptive field as a single 5×5 convolution. Does that mean they're exactly the same — will they yield the exact same output? The answer is no. The reason is that when you pass a 3D tensor through a convolutional layer and get a new 3D output, you typically apply a non-linearity to that block: you apply something like a ReLU or a sigmoid to each element in order to introduce non-linearities into your network and learn non-linear decision boundaries. So with two different convolutions you get two non-linearities, but with a single 5×5 convolution you only introduce one non-linearity after that one layer. This means the output of two convolutions is not the same as that of a single convolution, even if they have the same receptive field, because stacking more convolutions lets you stack in more activation functions — and intuitively that is what we really want, since learning non-linear functions is what a neural network actually does.
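Here is a small sketch comparing the two options just discussed — the channel count of 64 is just an assumption for illustration. Both stacks see a 5×5 receptive field and produce the same spatial size, but the two-3×3 version contains two ReLUs and noticeably fewer parameters.

```python
import torch
import torch.nn as nn

two_3x3 = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
)
one_5x5 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=5), nn.ReLU())

x = torch.randn(1, 64, 32, 32)
print(two_3x3(x).shape, one_5x5(x).shape)  # both torch.Size([1, 64, 28, 28])

print(sum(p.numel() for p in two_3x3.parameters()))  # 73,856 parameters
print(sum(p.numel() for p in one_5x5.parameters()))  # 102,464 parameters
```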
Going back to the receptive field: we saw that adding more layers increases the receptive field, but only incrementally each time. If we stack two different 3×3 convolutions, we move from a receptive field of 3 to 5; with three 3×3 convolutions we move to a receptive field of 7, and so on — in a sense, the receptive field grows linearly. What if we want to increase it faster, and why might we want to? If the receptive field grows only linearly, then by the time it's big enough you may have used far too many layers to get there, which makes your model very large, and training large models can be very tricky. It turns out there's a kind of hack to speed up the growth of the receptive field, and one way to do it is with something called a pooling layer. Mechanically, a pooling layer looks at square regions of its input and applies one of two operations as it slides over them: it either takes the maximum element of each square, which is called max pooling, or takes the average, which is called average pooling. What's really happening is that when you look at 2×2 chunks of your input and pick only a single value from each, you halve the dimensions of the input: notice how we started with a 4×4 input but ended up with a 2×2 output, because we keep only one value from each chunk of four pixels. Another way to phrase it is that each element in the output of the pooling layer is connected to four different elements in the input layer, so in a sense you have scaled your receptive field by a factor of two. You can also use 3×3 pooling windows, where instead of breaking the input into 2×2 chunks you break it into 3×3 chunks, and even 4×4 or 5×5, although I feel like at that point it starts degrading performance. 2×2 is very, very common, I have seen 3×3 before but it's not as common, and I've basically never seen anything 4×4 or bigger.

It turns out these pooling layers also have an intuitive meaning. Say this activation map gives you a high number if it detects a circle and a low number if it doesn't: the presence of this 12 means there's a circle in this region of the input, the 13 means there's another circle over here, the 14 means another circle over here, but low numbers like 5, 3, and 7 mean there is no circle in those regions. If you take the maximum and look at the information that's passed along, we are preserving the most important information from each local region: we pass the 12 along because it was representative of the presence of a feature, and similarly, when we take the maximum of 5, 3, and 7, we pass along the 7, which is still low, so we know this region did not have a circle, and that low value indicates that in the output. So you can view max pooling as a way to extract the most important information from a localized area. Hopefully that intuition makes sense. These max pooling layers are very common, partly because they also reduce the spatial dimensions of the activation maps at each stage, so you need fewer layers and your model is more compatible with the compute you have.
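In PyTorch the pooling layers are one-liners; here is a minimal sketch with made-up tensor sizes just to show the shape change.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 6, 28, 28)           # e.g. six activation maps of size 28x28

max_pool = nn.MaxPool2d(kernel_size=2)  # stride defaults to the kernel size
avg_pool = nn.AvgPool2d(kernel_size=2)

print(max_pool(x).shape)  # torch.Size([1, 6, 14, 14]) -- height and width halved
print(avg_pool(x).shape)  # torch.Size([1, 6, 14, 14])
```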
There is also the idea of padding. Recall that when we convolve an image with a filter, we get an output that is typically smaller in both height and width. What if we want to preserve the height and width? On the very left we convolve a 4×4 image with a 3×3 filter and get a 2×2 output. If we want the output activation map to have the same height and width as the input image, one way to do that — since we know the convolution always shrinks the input — is to artificially increase the height and width of the input, such that when you perform the convolution, the reduced size matches the original input. In the middle we have a 5×5 image; if you performed a 3×3 convolution on it you would get a 3×3 output, but if you artificially increase the dimensions of the 5×5 image to 7×7 and convolve that instead, you get an output that is also 5×5 — the same dimensions as the original input. Why might you want to do this? There are many reasons, and in fact this process is so common that it has a special name. The usual way to artificially increase the size of the input is to surround it with a bunch of zeros, because zero is a very neutral number, meaning it has little or no effect on the output. You could also pad with ones or twos — no one is going to stop you — but that's very uncommon. When you pad the input's dimensions to preserve the spatial dimensions of the activation map, this is called "same" padding, because the output has the same dimensions. If you simply don't pad at all and perform the convolution as is, that's called "valid" padding, because it's the valid output the convolution is supposed to produce before you pad the input with anything else. There is no particular reason why one kind of padding is better than the other — it's just a preference, and the deep learning community at large tends to prefer same padding even though there's no particular empirical evidence that either leads to better performance. I think people just find same padding slightly cleaner: if you start with a 256×256 image and your activation map is also 256×256, it feels less awkward than a 254×254 activation map. That's really the only reason practitioners prefer it.

There is also another concept called the stride. Recall that a convolution is just a sliding window where you take the dot product of the filter with different patches of the input, sliding the filter by one pixel each time — but no one is stopping you from sliding it by a larger amount. You could slide the filter every two pixels instead, and that is a perfectly valid operation. Why would you want to do this? It turns out strided convolutions have roughly the same effect as a pooling layer: if you slide the filter every two pixels, you are skipping every other position along the height and width, which means your output activation map has dimensions roughly half the original — not exactly half, but roughly — which is why you can view a strided convolution as an approximation to a pooling layer. Is there a reason to use a strided convolution over a pooling layer? Not particularly. I think there was a big debate over this a few years ago, but I don't think any particular conclusion came out of it; people just use whatever they find more convenient. Using a stride bigger than one is akin to combining the convolution and pooling layers together, but there is no particular advantage to it, so you can essentially treat the stride as a hyperparameter of your convolutional layer.
Earlier I mentioned that the output has some dimension we didn't know yet, and it turns out that dimension depends on many factors: the dimension of the original input, the dimension of each filter, the padding you're using, and the stride. There is a very nifty formula that gives you the dimensions of the output activation map. If you take an input of width W, filter size F, padding size P, and stride S, you get an output with width given by floor((W + 2P − F) / S) + 1, where the floor function gives the largest integer that is less than or equal to its argument (the same formula applies to the height). This formula is really helpful when you're trying to decide what strides, paddings, and filter sizes to use, because if you're designing a convolutional layer such that your output activation map has the same dimension as the input, you can set this expression equal to W and solve for, say, P or S or F to determine how to size the components of your convolutional layer. Those are the main details you want to know about the anatomy of a convolutional layer, and now we'll delve into some more technical stuff.

Let's discuss what a convolutional layer is really doing. A convolution is basically just a bunch of multiplications and additions, and we discussed during the deep learning lecture that if you're just doing additions and products, you can backpropagate gradients through those operations. We saw this in the context of fully connected layers, because a fully connected layer is simply a matrix multiplication followed by the addition of a bias vector; since a convolutional layer is nothing more than the same kind of process in a different form, you can also backpropagate gradients through a convolutional layer. How this is done is not something you have to worry about, because your favorite library, PyTorch, will handle it for you.
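Going back to the output-size formula, a tiny helper makes it easy to play with; the example sizes below are just illustrative numbers, not anything specific from the slides.

```python
import math

def conv_output_size(w, f, p=0, s=1):
    """Output width for input width w, filter size f, padding p, stride s."""
    return math.floor((w + 2 * p - f) / s) + 1

print(conv_output_size(32, 5))             # 28: a 5x5 filter shrinks a 32-wide input by 4
print(conv_output_size(32, 3, p=1))        # 32: padding of 1 gives a "same"-sized output
print(conv_output_size(224, 7, p=3, s=2))  # 112: a stride-2 convolution roughly halves the width
```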
The command on this slide is going to be very important, because you'll probably use it in every single homework in this class, along with any project you work on that involves CNNs. The way you define a convolutional layer in PyTorch is with nn.Conv2d, and you have to pass multiple parameters into it. The in_channels parameter is simply the depth of your input tensor: if you're passing in an RGB image, in_channels is just three, but if you're passing in some other 3D block, you have to look at how many channels it has and pass that in instead. The out_channels parameter is the number of channels, or depth, of your output — essentially how many filters you want this convolutional layer to have. The kernel_size is the dimension of the filters, their height and width, and you can also define parameters like the stride and the padding. There are other arguments like dilation and groups; don't worry about what those are, because you rarely change their default values. The only parameters you really change in a conv layer are in_channels, out_channels, kernel_size, stride, and padding, depending on what you want your CNN to look like. The documentation for this layer is given by this link, and I would highly encourage you to visit it at some point and read through it once to understand what's going on.

Okay, so we've talked about these convolutional layers and pooling layers, and we can pass an image through a network that is simply a stack of conv and pooling layers on top of each other. The input and output of these layers are still 3D tensors, so say that after you pass the image through a bunch of layers and get the output at the end, how do you turn that into a prediction for a classification task? What usually happens is that by the time this huge input has been processed by multiple layers, the output at the very end is quite low-dimensional: you're not going to have the 200×200×3 image you had earlier, you might have something like 64×5×5 instead, and that is a much more manageable size to flatten into a vector and pass into a fully connected network. So essentially, you can view the CNN and pooling layers as a feature extractor that converts a high-dimensional input into a low-dimensional block of features, which can then be passed into an MLP for the final classification. This is exactly what we discussed in the representation learning part of the last lecture: we can break a machine learning pipeline into feature extraction and prediction, and that's exactly what's going on here. Everything outside the red box is the feature extractor, everything inside the red box is the classifier, and we simply combine both parts into a single neural network — the same idea we applied to dense networks, now applied to images as well.

Just to recap, a convolutional layer has several hyperparameters that make up its anatomy: you have to define the stride that each filter slides by, whether you're extending the input by any particular amount of padding, and the kernel size of the filter you're sliding over the input. You also have to define the pooling layers — how big you want the pooling window to be, where a size of 2 is typical — and you might decide between max pooling and average pooling. I think the norm in the deep learning community right now is max pooling; I haven't really seen average pooling used that frequently, but it does exist, and I want you to keep that in mind, because if you read online implementations of different CNNs or research papers, you might come across it at some point. Finally, these two kinds of layers make up the feature extraction part of a CNN; once the features have been extracted, you flatten them out and pass them through your classification layers, which form the basic cookie-cutter MLP that you are already familiar with from lectures two and three.
So we have been describing a CNN at a very conceptual level; let's actually look at what a real-life CNN looks like. This is a network called LeNet, developed by Yann LeCun for classifying handwritten digits — I don't know why the figure shows the letter A; I think a number would have been more appropriate. What's going on is that you take an input and convolve it into six feature maps, which means six different filters were used; then you pool — "subsampling" is just another name for pooling — turning the 28×28 activation maps into 14×14 ones, basically halving the width and the height. You apply another convolution on top of those and pool again, until you get a much smaller representation of your input. Here the 16×5×5 tensor block contains about 400 elements, whereas the 32×32 input contains about a thousand, so it's much more wieldy than that high-dimensional input. You can then pass it through fully connected layers and get an output at the end, where the 10 outputs represent the ten possible digits from zero to nine. In fact, most CNN-based classification architectures look like this in some sense. Usually, once you get your input, you stack convolution and pooling layers on top of each other — and recall that you apply a non-linearity after each convolutional layer; here I've chosen ReLU, and while it can be any non-linearity, ReLU is very, very common. You keep stacking these conv-ReLU-pool combinations on top of each other, flatten the result, and pass it through a fully connected network. Another common design pattern is conv-ReLU-conv-ReLU-pool: you go through two different convolutions before applying a pooling operation, and after that it's the exact same story. Also recall the layer called batch norm from lecture three — a normalization layer that makes training more stable. In modern architectures you'll see this batch norm layer more and more frequently, and it is typically inserted between the conv layer and the ReLU, so you might also see conv-batchnorm-ReLU-pool, repeated as many times as you want, then flattened and passed through the classification layers. These design choices are very common — in fact, a lot of the popular architectures you'll cover in lecture six use one of these patterns.
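Here is a rough LeNet-style sketch in PyTorch that follows the conv-ReLU-pool pattern and the 6/16-filter, 16×5×5-before-flatten shapes mentioned above. It's my own paraphrase of the architecture — with ReLU and max pooling standing in for the original activations and subsampling — not LeCun's exact network.

```python
import torch
import torch.nn as nn

class LeNetStyle(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 1x32x32 -> 6x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),  # -> 16x10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                     # 16*5*5 = 400 features
            nn.Linear(400, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

print(LeNetStyle()(torch.randn(1, 1, 32, 32)).shape)  # torch.Size([1, 10])
```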
Okay, so we have finally gone over what a CNN is; let's talk about some more practical details. When you're training a network, you want to train it on some dataset, and there are common datasets that people use very frequently in the field of CV that I think you should be familiar with. One of them is MNIST, the handwritten digit classification dataset: these are 28×28 grayscale images (LeNet pads them out to 32×32, which is probably why both numbers float around), each containing a single digit from zero to nine. It's not a particularly big dataset — it's relatively small, but it's also relatively easy to work with. Another popular dataset is CIFAR-10, which has 60k examples: 50k training examples and 10k examples you can set aside for validation or testing. These are 32×32 RGB images, so unlike MNIST, which only had grayscale images, you actually have multiple channels, and the 10 stands for the fact that each image can be classified into one of ten different classes. MNIST and CIFAR-10 are very commonly used in proof-of-concept implementations: if you're a CV researcher and you come up with some new algorithm, there's a very good chance you'll test it on a simple dataset like MNIST or CIFAR-10 first before moving on to something more complex like ImageNet or COCO.

So those are our toy datasets; now we move on to some serious stuff. The ImageNet dataset has over a million images and a thousand different labels, and the images are typically processed as 224×224 RGB. This dataset was collected by a team of researchers across many different universities, and it led to the birth of the ImageNet visual recognition challenge, which is how the model we called AlexNet earlier came into being and started the deep learning revolution — this dataset has a huge history behind it. It's also very popular in the sense that whenever a new architecture pops up, you typically test it on ImageNet to see how well it performs, because this dataset is fairly challenging, and it effectively designates state-of-the-art status: when we say a model is SOTA in CV, we are usually saying it probably has the highest accuracy on ImageNet as of right now. But ImageNet at its core is only a classification dataset, and we also discussed other problems like segmentation and detection at the very beginning, and it turns out there are common datasets for those tasks too. There is the Pascal VOC dataset, which has different kinds of images for detection and segmentation, and there's also the Common Objects in Context, or COCO, dataset, which is also used for these tasks. I think you'll cover more of Pascal and COCO when you go through the segmentation and detection lectures, so don't worry too much about those yet.

Other practical details: we mentioned earlier that each pixel has a value in the range 0 to 255. Typically, when you pass an image into a neural network, you want to normalize it to the range 0 to 1, and it's pretty much the same intuition as why we use batch norm: using a small range of values gives you nicer gradients that don't explode and are well behaved. In a sense it doesn't hurt to normalize your images — it may not always help, but it almost certainly never hurts. The way we do this normalization is through the ToTensor transform that is part of the torchvision library; I have linked the documentation here, so remember to check it out in your own time.
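For reference, here's roughly what loading one of these datasets with torchvision and the ToTensor normalization looks like. The CIFAR-10 mean and standard-deviation values are commonly quoted statistics, not numbers from the lecture, and the batch size is arbitrary.

```python
import torch
import torchvision
from torchvision import transforms

transform = transforms.Compose([
    transforms.ToTensor(),  # converts HxWxC uint8 images in [0, 255] to CxHxW floats in [0, 1]
    transforms.Normalize(mean=(0.4914, 0.4822, 0.4465),   # commonly used CIFAR-10 statistics
                         std=(0.2470, 0.2435, 0.2616)),
])

train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(loader))
print(images.shape)  # torch.Size([64, 3, 32, 32])
```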
Finally, this is another pretty important concept when it comes to training CNN models. A general rule of thumb in deep learning is that the higher-dimensional your input is, the more data you will need to train a decent model. We know that images are high-dimensional — a 256×256×3 image has nearly 200,000 values, which is a huge number — and that means you need a lot of data to train, say, a good classification network. However, collecting data is expensive and hard, and when you're collecting and labeling data you also want to make sure it's all coming from a somewhat similar distribution, because if it's not, you're going to run into problems with multi-modality, and that can make training difficult. So the situation is that we want more data — more data essentially never hurts, and it helps prevent the overfitting problem we discussed in earlier lectures — but it's hard to get. Luckily, you can artificially create more data from a single image. Say you're training a network for an animal species classification task, and the image in the top left is the original image, for which you would predict the label "llama". If you flip the image, crop some part of it, change the brightness or the contrast, or change the coloring slightly, it doesn't change the fact that a llama is present in the image — the semantic content of the image is preserved despite these augmentations. So you can create these new images from the original image, assign them the same label, and voilà, you have essentially created new data. This is a very cheap and easy way of creating new data from your pre-existing datasets, and there's no real reason not to use it — in fact, practically every CV pipeline you'll see in the real world uses some sort of augmentation in order to train a decent model. There are many different kinds of augmentations possible, like changing the color, changing the lighting, or flipping the image. If you want a more comprehensive list, I would recommend visiting the torchvision documentation online, because that is typically how you would implement these in PyTorch.
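A minimal sketch of what an augmentation pipeline might look like in torchvision — the particular transforms and their parameters are just illustrative choices of mine, not what the lecture prescribes.

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),                     # flips don't change the label
    transforms.RandomCrop(32, padding=4),                  # random crops of a 32x32 image
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # small lighting/contrast changes
    transforms.ToTensor(),
])
# Pass train_transform as the `transform` argument when building the training dataset,
# and keep a plain ToTensor()-style transform for the validation and test data.
```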
And finally, model checkpointing. CNNs are typically huge models, you're also working with huge datasets, and that means your training time is usually very long and you also require a good amount of compute — maybe you're training on NVIDIA GPUs, for example — but even then it can take a lot of time to train a decent model. This could be anywhere from minutes to hours to days to weeks, and I'm not even exaggerating — I've also heard of models that were trained for months at a time. What would happen if your training suddenly stopped in the middle — maybe your machine crashed, or the server you were training on shut down? You've basically lost all of the progress you made so far, and that would suck; I'm saying this from personal experience. One way to prevent this from happening to you is to simply store your model weights in a file as you train. Recall what's really happening in the training process: the parameters of the model are being updated through some sort of gradient-based optimization process. What you can do is store snapshots of the parameters at different points of the training process, and if your machine does go down and your training is interrupted, you can go to the latest snapshot and pick up from where you left off — almost as if nothing happened. This is a very good habit to develop. Again, when I started working on deep learning I did not get into the habit of checkpointing my models until I learned it the hard way, and I don't want any of you to go through the same experience that I did. So remember to set up checkpoints: CNN training can take a very long time, and you don't want to lose your progress if something goes wrong in the middle. And that is about it for the CNN presentation — hopefully it made sense to you. This is a pretty foundational lecture, because CNNs underlie a lot of the models we will be talking about in the upcoming lectures, so it's pretty imperative that you understand it to the best of your ability. Of course, feel free to stop by office hours if you have any questions; we will always be there to discuss any of the concepts covered here. Thank you.
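As a quick addendum to the checkpointing advice above, here is a minimal sketch of the save/resume pattern in PyTorch; the tiny linear model and optimizer are just stand-ins for whatever your real training loop uses.

```python
import torch
import torch.nn as nn

# Stand-ins for whatever model/optimizer your training loop actually uses.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epoch = 4

# Inside the training loop: save a snapshot every so often (e.g. once per epoch).
checkpoint = {
    "epoch": epoch,
    "model_state": model.state_dict(),
    "optimizer_state": optimizer.state_dict(),
}
torch.save(checkpoint, f"checkpoint_epoch_{epoch}.pt")

# If training is interrupted: load the latest snapshot and resume from there.
ckpt = torch.load(f"checkpoint_epoch_{epoch}.pt")
model.load_state_dict(ckpt["model_state"])
optimizer.load_state_dict(ckpt["optimizer_state"])
start_epoch = ckpt["epoch"] + 1
```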
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_4_Intro_to_Pretraining_and_Augmentations.txt
[pre-lecture setup: the presenters sort out screen sharing, recording, and a microphone check while waiting a few minutes for people to arrive]

Okay, this is probably a good time to start. Hey, my name is Aryan, I'm one of the facilitators for this DeCal, and I'm going to be presenting today's lecture on pre-training with Verona today. For important announcements: homework one is going to be due next Tuesday, so probably start that if you haven't already, and the quiz for this week will probably go live tomorrow — not quite sure yet, but we'll try to get it up as soon as possible. So without further ado, let's jump right in. Before we start the official pre-training part of this lecture, I'm going to go into some detail on what representation learning is. I think this should cap off the last few weeks of deep learning, give you a more comprehensive understanding of what deep learning is actually doing, and hopefully make the last lectures make more sense with this context.

Before we jump into deep learning, let's talk a bit about shallow learning. The classical machine learning problem is set up like this: you have some input x, you extract features from this input, and you pass them into some model parameterized by some theta to get an output y. Keep in mind that this theta contains all of your learned parameters: if you have a neural network, this contains all of your weights, biases, and anything else you might want to learn; if it's a regression model, it just contains the weights and maybe a bias term, if one is there. Instead of passing in the input x directly, we extract features from it using this function phi, which is what we call a feature extractor. Why don't we input x directly? It might not be something that you can. Say you're working on the problem of predicting the price of a house from a house: your x is a house, and you can't really put a house into your model — you have to extract some information about the house that you can then input into a model.
like whether it has a pool or a basement. Once you extract these relevant features, you get a prediction y-hat, and your goal is to learn the weights so that the predicted label is as close to the true label as possible. So, to recap the machine learning pipeline: you start with an input x, extract the relevant features, push those into a machine learning algorithm to get an output y, and optimize based on that.

Now we've introduced this feature extractor, which is something we need to define, and you might imagine that different kinds of problems call for different feature extractors. If your data is arranged in a table, say each row is a single house and each column is one of the features, then getting the features is easy: you just take each row directly (or a column, depending on how the data is arranged). But what if your input is something complex like text, audio, or images? How do you extract the relevant features from an input like that?

Since this is a CV class, I'll go over the vision example. It turns out there are special feature extractors for images, and this is roughly what classical computer vision looked like: people came up with all kinds of hand-designed feature extractors. One really common example is HOG, or histogram of oriented gradients, which captures the edge information in an image, and on top of those features you can train some classifier, say an SVM with learned weights.

Something to notice is that this feature extractor is something you program yourself; it is not learned. You come up with it based on your intuition about the problem and whatever you think the most relevant features are. If you think about it, that can be a very challenging task, and you can't really reuse the same feature extractor across tasks: if your task has to do with the colors in an image, edge features like HOG won't tell you anything, and you have to define a different extractor. So choosing the right features can get complicated really fast, and it's a compromise solution: you're learning the weights of your model, but you're still hand-programming the feature extractor. We'd like the whole process to be automatic, but right now only the second half, the learning of the weights, is automatic; we're still defining the features ourselves. A minimal sketch of this classical pipeline is below.
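To make the classical pipeline concrete, here is a minimal sketch (not from the lecture) of hand-designed HOG features feeding a learned linear SVM. It assumes scikit-image and scikit-learn are installed, and `train_images`, `train_labels`, and `test_images` are placeholder arrays of grayscale images and labels.

```python
import numpy as np
from skimage.feature import hog      # hand-designed feature extractor
from sklearn.svm import LinearSVC    # learned classifier on top

def extract_features(images):
    # HOG summarizes local edge orientations; this is the hand-programmed phi(x).
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

# phi(x) is fixed; only the classifier weights (theta) are learned.
X_train = extract_features(train_images)          # train_images: placeholder
clf = LinearSVC().fit(X_train, train_labels)      # train_labels: placeholder
preds = clf.predict(extract_features(test_images))
```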
This is where deep learning comes in. Deep learning says: we don't need to hand-program feature extractors, we can learn those as well. In fact, we can learn the entire pipeline, from feature extraction to prediction; you just pass in the raw image and it spits out an output, without hand-programming any part of the pipeline. One example is a convolutional layer (don't worry about what that is yet, we'll teach you next week): it's a neural network layer that can extract features from an image, and unlike something like HOG, which is fixed and never changes, this extractor has parameters that can be learned. You can then pass these learned features into a learned algorithm, so in both steps of the process you're learning something. And you might think: if we have two separate steps and we're learning something in each, why keep them decoupled? Why not combine them? That's exactly what a neural network is: a model that combines feature extraction and output prediction and learns everything from the data.

Hopefully this gives you some context for why deep learning has taken off and classical machine learning isn't used as much in areas like vision anymore: deep learning lets you automate this entire process end to end. In a sense, what you're really doing is learning a representation of your input. Your features are a way to represent what the input looks like; the model doesn't know what an image is, it only knows the features it receives. So deep learning lets you learn good, abstract representations from the data itself without doing anything manually. The main idea is to relinquish control to the model and let it learn whatever it needs for the task it's trying to solve.

You can view each layer in a neural network as a learned feature extractor, and you're chaining these extractors on top of each other: a layer receives some input, learns how to best represent it, and passes that representation along to the next layer. A deep neural network is basically learned representations stacked on top of each other, and these representations tend to be hierarchical. Look at the image here: we have a network whose goal is to predict something about a car. If you feed in the image of a car, the earlier layers (the little black-and-white modules you see) are looking for different kinds of edges, straight lines, and other low-level patterns in the image. As you go further along, instead of low-level details the network looks for something more concrete: a later layer might check whether the image has a wheel, maybe a door, maybe a window, things that might be on a car. And by the very end, the final layers are checking whether the input image had a car in the first place. At that point the model has its own mental model of what a car looks like, some abstract notion that only the model knows, and as you go deeper into the network the representations become more and more abstract. A tiny end-to-end sketch is below.
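As an illustration of the end-to-end idea (a sketch, not the lecture's code), here is a tiny PyTorch model in which the convolutional layers act as the learned feature extractor and the final linear layer acts as the classifier; the input size and class count are assumptions.

```python
import torch.nn as nn

# Raw 3x32x32 pixels in, 10 class scores out; features and classifier train jointly.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # learned low-level features
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # learned higher-level features
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),   # classifier head (assumes 32x32 inputs, 10 classes)
)
```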
The thing I want you to take away from this is that depth refines representations: you start with coarse information like edges and end up with fine information like this mental model of a car. I know that was a lot of information; does anyone have questions about any of this? If not, I'm going to pass the mic to Verona, who will talk about transfer learning.

OK, so now that we've gone over what representation learning is, we'll talk a little about what transfer learning is and what its benefits might be. When we train a model from scratch, which we actually don't do very often, it takes a lot of time, compute, and training data. Just to give you an idea of how much data is often necessary to train a good model: even a few thousand examples is often nowhere near enough. It takes a lot of data for a model to learn properly, and it's also very expensive to fit. But luckily, huge models have already been trained, so the question is: can we leverage them in some way? The answer is yes, absolutely. You may have heard of pre-trained models; they are used all the time, so let's look at how and why we might want to use them.

If we train a model from scratch, the model parameters, the weights, are randomly initialized at the beginning and then updated gradually with an optimization algorithm such as stochastic gradient descent or Adam. Say we have two separate tasks we want to solve with deep learning. Without transfer learning, we might have to train two models separately from scratch, which, as I mentioned, is very costly in time, compute, and data. The idea is: can we do better? When we learn something ourselves, we generally apply the skills and knowledge we've already acquired to other domains. To give a small example: once you've taken an intro programming course like 61A, you don't relearn programming from scratch each time you take another course like 61B or 61C; you already know the important high-level ideas and only need to focus on the new material. We can apply basically the same idea to our neural networks.

As an example, say we have a convolutional neural network trained for a simple computer vision task like cat-versus-dog classification or some object detection task (again, no worries if convolutions are unfamiliar, we'll cover convolutions and CNNs in much greater depth very soon). Notice that even though these tasks are slightly different from each other, the models should learn broadly suitable representations of their respective inputs. Each of these models should, for example, capture low-level features, as Aryan mentioned, such as general shapes, edges, patterns, and colors in the earlier layers of the network, and then, moving toward the later layers, capture higher-level features such as abstractions of the cats versus the dogs, people's faces, or the major objects
in those later layers. Here is a small example of the layers for an object detection or classification task. In the first layer we see features like colors and patterns grouped closely together, then shapes and some textures. By something like layer 3 (and just to give you an idea, we don't usually train a five-layer network; a lot of the time there will be more like a hundred layers, depending on how deep we want it) we start abstracting and see objects and humans, and by layer 5 at the end we reach close to the final level of abstraction we want.

For a slightly more concrete example, say we train a residual neural network on the ImageNet dataset. Again, don't worry about what a residual network is; the point is that we can train a deep residual network (a ResNet) with many layers for tasks like classification or object detection, and the good news is this has basically already been done for us. So how can we take this ResNet ImageNet classifier and transfer the knowledge it has learned to other tasks? We want to figure out which aspects of this network are shared across tasks, keep that shared information, and reuse it in our other models. So how might we actually go about this, in other words, how do we actually do transfer learning? Does anyone have ideas for how we might keep certain information from a pre-trained model and transfer that knowledge? Yes, that's basically the right idea. What about you? Just copy the weights, maybe? Yes, very similar.

One of the more common ideas starts from the fact that our neural networks are just stacks of layers, so the question is whether we can keep certain layers and reuse them in the next model. There is an idea called freezing, which is basically what you both described: we freeze certain layers of the network. We take the pre-trained network, the already-trained ResNet classifier, discard a few of the later layers, and freeze the remaining earlier layers. Then we add and train new later layers, customizing them for the second task we want to perform. Remember that the later layers tend to hold the higher-level abstractions and the earlier layers hold the general shapes, edges, and patterns; we generally want to keep those general shapes, textures, and patterns, but our abstractions might be different depending on the second task, so those are the layers we replace and train. The weights of the frozen layers are not changed when they are reused in the downstream task, and because we're not modifying those parameters, the backward pass we talked about in the past few lectures can be skipped for them, which speeds up training a lot. One note: be careful about where you freeze. If the layers you freeze haven't actually learned good low-level features like edges, freezing them just locks in bad features and can lead to pretty inaccurate predictions, because the network never gets a chance to
really understand the low-level edges, for example. To go back to the CNN example: a CNN with many layers trained on a large image dataset like ImageNet can be reused by removing the last few layers, depending on the second task. You might remove just one or a few layers and then have the model classify new image categories, say another animal instead of cats or dogs, and you can also use a larger dataset if you have one.

Another common technique is called fine-tuning. Instead of freezing our layers, we can fine-tune the pre-trained network: rather than discarding and freezing, we keep training the pre-trained model on the output layers and the non-frozen layers. The more data we have for the downstream task, the more layers of the original model we can unfreeze and fine-tune for the specific second task. Sometimes we might want to fine-tune the whole pre-trained network rather than just the unfrozen layers. Essentially we initialize the new network with the pre-trained weights instead of initializing at random; remember that when we train from scratch the weights start out random, but here we take the weights from the pre-trained network and start from those. This again speeds up training, since the parameters are taken from the pre-trained parameters.

One other question you might have is how to decide between freezing and fine-tuning. There are a few factors to consider, but the two main ones are the size of your new dataset and its similarity to the original dataset. In the case where you have a large dataset that is pretty similar to the original one, we have more confidence that we won't overfit, so we can fine-tune. (We've briefly talked about overfitting before, and it's a topic that keeps coming up.) On the other hand, if we have a smaller dataset, even one similar to the original, fine-tuning is sometimes a bad idea because you can definitely overfit. In the third case, where you have a small dataset that is quite different from the original task, you can probably keep some of the initial layers, but you don't want to fine-tune much. And finally, if you have a very large second dataset that is very different from the first task, you could just train from scratch, although in practice it's usually still beneficial to initialize your weights from the pre-trained model. A small sketch of freezing versus fine-tuning is below.
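Here is a minimal PyTorch/torchvision sketch of both ideas, assuming a recent torchvision version and a placeholder `num_classes`; it is an illustration, not the exact recipe from the slides.

```python
import torch.nn as nn
from torchvision import models

num_classes = 5  # placeholder for the downstream task

# Freezing: keep the pretrained backbone fixed and train only a new head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                                  # frozen: no gradient updates here
model.fc = nn.Linear(model.fc.in_features, num_classes)      # new, trainable output layer

# Fine-tuning: optionally unfreeze some later layers (or everything) and keep
# training from the pretrained weights, usually with a smaller learning rate.
for p in model.layer4.parameters():
    p.requires_grad = True
```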
Before we get into embeddings, a bit of practical advice: a few other things to keep in mind when doing transfer learning are the constraints of your pre-trained model, what task it was originally trying to accomplish, what data it was trained on, and your learning rate. We'll talk about learning rates in greater depth next week when we cover some of these models in more detail.

OK, embeddings. In the context of neural networks, embeddings are pretty important; they are often described as lower-dimensional, learned, continuous vector representations of discrete variables. Embeddings are useful because they can reduce the dimensionality of, for example, categorical variables and represent them meaningfully in the transformed space. In this figure the authors used t-SNE for dimensionality reduction, mapping the embedding vectors down to a 2D space, and then plotted the embeddings color-coded by the genre of each book: they took book data and did dimensionality reduction on the learned embeddings. So, in summary, embeddings are useful for representing high-dimensional data.

Very closely linked to embeddings is something called a latent space. We often prefer to work with lower-dimensional data, so a common task is transforming high-dimensional data into a lower-dimensional structure. We capture this through a latent space of features that encodes the important information representing the high-dimensional data. To be clear on terminology, we often say that high-dimensional data is embedded in a lower-dimensional latent space, and a lot of the time "latent space" is used interchangeably with "embedding space."

Something to note here is what we call the curse of dimensionality. One reason high-dimensional spaces can be bad is that if the data naturally has a lower-dimensional structure, it will be very sparsely spread out in the high-dimensional space. Here's an example I came up with: say you have a 3D space, but your data lies along a line. Your data essentially has a one-dimensional structure, so you don't need to work with the entire 3D space. This is the curse of dimensionality at work: all of these points occupy a very small region of the 3D space, so if we can find a way to represent this line using just one variable instead of three (the x, y, z coordinates), that's going to be better, because high-dimensional data can be very complex and we want to avoid that as much as possible.

So one of the big questions is: how do we actually get our embeddings? We can learn them. One way is to learn an embedding as part of the neural network for our target task; this gives us an embedding nicely customized for that particular task, though it may take longer than training an embedding separately. The idea is that we can then reuse these embeddings for other tasks. Again, embeddings are just the broad idea we talked about earlier: we're
trying to represent our data in meaningful ways. Here's an example. We haven't really talked much about softmax yet, but you can see how we have one-hot target probabilities, which are pretty sparse, as we discussed earlier. Aryan, do you want to run through this part?

Sure. This example is about training an MNIST classifier. MNIST is a dataset of handwritten digits; each image is 28 by 28, which you can flatten and represent as a single 784-dimensional vector (28 times 28 is 784). You can train a model that classifies which digit the image contains, one of the ten digits from zero to nine; that's what the target class labels over there indicate. You train this with a softmax (cross-entropy) loss, which is the loss used for classification; I don't think we've covered it yet, so don't worry too much about it, it's just a loss function. The idea is that once you train this model, the small set of neurons in the middle of the model (can you see my cursor?) can be used as an embedding for the MNIST image: once the model has been trained, its weights must have learned something meaningful, so we can take the output of one of those middle layers as a representation of the 28-by-28 image. A minimal sketch of this idea is below.

Here's an example of some results comparing networks trained from scratch versus with transfer learning, from a paper in which the authors compared pre-trained convolutional neural networks for audio classification. They found that the models with transfer learning applied actually achieved better classification accuracy than training the network from scratch, so not only is it less computationally expensive, it can also give better results.

Here's another summary of the major advantages of pre-trained networks. They are usually trained on very large datasets, and more data generally means better representations, so if we only have a little data, using a pre-trained network can be a great idea. There are also a lot of pre-trained networks we can use immediately: you can grab models that have already been trained on large datasets, search them up on Hugging Face or import them directly, and use them. We can also pre-compute and store our embeddings instead of working with the original high-dimensional data, which again saves a lot of time and storage.

So here's the very broad summary: without transfer learning, we train models for two tasks separately, but with transfer learning we take the knowledge learned by a pre-trained network and apply it to the second task. We talked about two main techniques for doing that: freezing some layers, and fine-tuning the pre-trained network. Very broadly, you apply the knowledge you already have and transfer it to another task instead of relearning everything.
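A minimal sketch of that MNIST example (my own illustration, with an assumed 3-dimensional bottleneck): train the classifier with cross-entropy, then read the middle layer's activations out as the embedding.

```python
import torch.nn as nn

class MNISTClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),                 # 28x28 image -> 784-dim vector
            nn.Linear(784, 128), nn.ReLU(),
            nn.Linear(128, 3),            # 3-dim bottleneck: the "embedding" neurons
        )
        self.head = nn.Linear(3, 10)      # 10 digits, trained with softmax/cross-entropy

    def forward(self, x, return_embedding=False):
        z = self.encoder(x)
        return z if return_embedding else self.head(z)

# After training: embedding = model(images, return_embedding=True)
```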
Just a quick note: we'll go into more details and examples of this in action later, and although this is a CV course rather than an NLP course, transfer learning is especially huge in NLP. As you can imagine, with natural language, once you've trained on a huge amount of text there's no reason to relearn all of it: the pre-trained model already captures some semantic meaning of words and some understanding of syntax. Retraining an entire network for every natural language processing task takes a lot of time, so using pre-trained networks and transfer learning there is a really good idea. And it's not just NLP; it applies to almost every other domain, including CV.

OK, we'll now transition to the next part of the lecture, which is on self-supervised pre-training. Before I tell you what self-supervised means, let's clarify some terminology. Think back to how we described the machine learning setup: you define a model, you take some input x, your raw data, and it gives you an output, say y-hat. You define a notion of loss that compares how far this prediction is from the ground-truth label y, and your goal is to optimize the network so this error decreases and the model outputs something close to the actual labels. In a sense, the training process is receiving supervision from the labels: the labels are guiding what the model must learn, and, as the name suggests, the whole process is called supervised learning. Examples include your typical classification problem, like the digit classification we just saw, where you take in an image of a handwritten digit and have a label for it; regression, which you might have seen if you've taken EECS 16A or 16B; and things like object detection and segmentation, which we'll discuss in the coming weeks.

So now we know we can learn when we have both the labels and the raw data. Do you think we can learn when we only have the raw data? It turns out we can: even without any labels, we can still learn something meaningful about the structure of the data. How many of you have taken 16B? You might recall PCA, principal component analysis, from that class. It's actually one of the most common unsupervised learning algorithms out there: you just feed a data matrix into the algorithm and it spits out the principal components; you never feed in any labels, just the data matrix. And if you remember the 16B car project, one thing you did was take the audio signals for the words you said to the car and cluster them together; again, no labels were involved, you just projected each audio signal down to two dimensions and clustered it with the other points. So dimensionality reduction with PCA, clustering, and so on are common examples of unsupervised learning; a tiny sketch of that flavor of pipeline is below.
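For reference, a tiny unsupervised sketch in scikit-learn (an illustration under my own assumptions, not the 16B lab code): `X` is a placeholder unlabeled data matrix, and the number of clusters is chosen arbitrarily.

```python
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# X: (N, D) unlabeled data matrix (placeholder).
X_low = PCA(n_components=2).fit_transform(X)         # dimensionality reduction, no labels
clusters = KMeans(n_clusters=4).fit_predict(X_low)   # group similar points, still no labels
```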
Hopefully this picture gives you a clearer idea of what's going on. In the first picture, you have different points with class labels attached, so in a classification task you predict the labels and can draw decision boundaries based on them. But even without any labels, a model can still learn that these points are grouping together and forming clusters, and that is still meaningful information. So hopefully this makes the difference between supervised and unsupervised learning clear.

The examples we discussed so far for transfer learning were supervised pre-training: we take large models trained on something like ImageNet, and usually a model like a ResNet is trained on the ImageNet classification task, where you train a CNN on a dataset with about a million images and a thousand classes and the model has to learn to classify those images correctly. Since classification is a supervised learning problem, each image in the dataset comes with a label.

We mentioned before that large datasets help you learn more generalizable representations, so what if we take this idea further? ImageNet has a million examples, but you can find billions, even trillions, of images on the web. What if we could harness all of them to learn representations? This goes beyond CV as well: a common dataset in NLP is English Wikipedia, which has on the order of hundreds of millions of text tokens, but you can find something like a trillion tokens of text on the internet. If we could harness all of that, maybe we could learn better representations, and it turns out this is a pretty good idea. The catch is that these enormous datasets are usually not labeled: you can scrape text or images from the web, but you can't label them automatically, so a lot of the time you're working with unlabeled data. We'd like to use unsupervised learning techniques on these unlabeled datasets and learn representations from them. This is also appealing because labeling in general is a time-consuming and tedious process: if you wanted to label a billion images you would have to hire manual labor, pay for it, pay for storage, and wait a long time. Gathering good labels is simply a very, very hard process. So the question researchers asked is whether we can do unsupervised representation learning, and indeed we can.

Before I delve into that, I want to clarify one more term. The way unsupervised representation learning is typically done today is through something called self-supervised learning. Like the name suggests, self-supervised learning means the data provides supervision to itself: you still don't have any labels in the dataset, you still only have, say, a collection of images, but you can create labels from those images and train in a supervised manner. How is this typically done? I'll just read the statement from the Facebook AI research blog: the general technique of self-supervised learning is to predict any unobserved or hidden part or property of the input from any observed or unhidden part of the input. So say you have an image:
you hide some information about the image from the model, and the model has to predict the hidden part from the part it can see. The information can be hidden across time or space; we'll go over some examples soon, and we'll actually have an entire lecture dedicated to self-supervised learning for vision in a few weeks, where we'll dig deeper.

Again, some more terminology before the examples. During the transfer learning discussion we kept referring to "task one" and "task two," which are not very descriptive names. In the context of self-supervised learning these tasks have special names: the task on which you train the representations, what we've been calling task one, is called the pretext task, and the task the representations are transferred to is called the downstream task. Different domains have different downstream tasks: in computer vision you might learn representations from some pretext task and use them for image classification, object detection, or semantic segmentation; in NLP the downstream task could be text classification, machine translation, document summarization, question answering, and so on. It turns out this is also possible in RL, where you can pre-train a policy and fine-tune it later for different tasks; again, don't worry if you don't know what that means, I just wanted to show that pre-training is a very broad topic and self-supervised learning can be applied across domains.

So now we can get into examples. This was a paper published in 2017, called Jigsaw. The authors take an image, take a region of it, and divide that region into a 3-by-3 grid, giving nine patches. They shuffle the patches around and ask the model to predict the original order. The hope is that if the model can learn to predict the original arrangement, which patch goes in the top-left corner versus the top-right corner, it has to learn something meaningful about the image: it can't just focus on low-level features anymore, it has to understand what's going on in the image, that an image is made up of different parts and that those parts are related to each other. There are some more technical details on the slides that I won't go into, but one thing the authors did that I should mention: when they sample the patches and divide the region into a grid, they don't take the patches exactly adjacent to each other; they jitter each patch a bit so the patches don't line up perfectly. If they were adjacent, the model could just check whether the pixels along the edges match, and it wouldn't really learn anything, because matching edge pixels is an easy way to cheat the task. That's why the reconstructed image you see isn't perfectly stitched. A simplified sketch of the pretext task is below. Any questions about this task before I move on?
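A simplified sketch of the jigsaw pretext task (my own illustration; the actual paper uses a fixed set of maximally distinct permutations and jitters patch locations, which I omit here):

```python
import torch

def make_jigsaw_example(image, perms, grid=3):
    """image: (C, H, W) tensor. perms: list of fixed permutations of the 9 patch slots.
    Returns shuffled patches plus the index of the permutation used; the model is
    trained to classify that index from the shuffled patches."""
    C, H, W = image.shape
    ph, pw = H // grid, W // grid
    patches = torch.stack([image[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw]
                           for i in range(grid) for j in range(grid)])   # (9, C, ph, pw)
    idx = torch.randint(len(perms), (1,)).item()
    return patches[perms[idx]], idx            # "label" = which permutation was applied

# Example permutation set (random here; the paper picks a well-separated fixed set).
perms = [torch.randperm(9) for _ in range(100)]
```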
For evaluation, they took the representations from this pretext task and tested them on classification and detection downstream tasks; there are some numbers on the slide. It turned out this was about the best pretext task at the time, and it went some way toward bridging the gap between supervised and self-supervised learning on those classification and detection tasks. Yes, go ahead. Good question: for this task, if I remember correctly, they took most of the backbone from the original model and used it as a frozen feature extractor to get the representations.

Another task is something called RotNet. It's a similar idea, but instead of predicting a shuffled order, you take an image, rotate it by some angle chosen from 0, 90, 180, or 270 degrees, and ask the model to predict the rotation angle from the rotated image. If you look at the example here, even if you rotate this image of a bird, the fact that its beak and its eye are close together doesn't change; they're still close together in every rotated copy, and the fact that its claws are gripping the branch stays the same too. So the hope is that the model learns that a single image contains different kinds of objects and has to pay attention to things like an object's orientation, location, pose, and type, rather than only low-level details.

I think the next example makes this clearer. On the left-hand side you have a model trained in a fully supervised manner: when you look at what it focuses on in a given image, it's looking at a single part at a time. On the right-hand side you have a model trained with the rotation prediction task, and instead of focusing on a single part it looks at multiple parts at the same time. In the image on the bottom right it's looking at the eye of the cat and its nose at the same time, as if checking for a relationship between the two, and in the image of the dog it's looking at both the body and the face together. So maybe this is a way to show, qualitatively, that self-supervised learning can push a model to learn something beyond what plain supervised training gives it. A minimal sketch of the rotation pretext task is below.
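A minimal sketch of the rotation pretext task (again an illustration, not the paper's code): rotate each image by a random multiple of 90 degrees and train the model to classify which rotation was applied.

```python
import torch

def rotation_pretext_batch(images):
    """images: (B, C, H, W). Returns rotated images and labels in {0, 1, 2, 3}
    corresponding to 0/90/180/270 degree rotations."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

# Training step would then be: loss = torch.nn.functional.cross_entropy(model(rotated), labels)
```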
This isn't constrained to CV; you can also do self-supervised learning in NLP. One really common example is word2vec, whose goal is to learn embeddings for individual words. One way to do this is to predict a word from its surrounding context: if you have the sentence "the dog bit the man," you try to predict the word "bit" from "dog" and "the man," because the surrounding words imply that "bit" belongs there. This approach is called a continuous bag-of-words model. There are other ways to train word2vec models; one example is skip-gram, where instead of predicting a word from its context you predict the context from the word, so you sort of flip the model around. There are many other approaches; these are just two really common ones. There is also something called wav2vec, which is roughly a generalization of word2vec to audio. I won't go into the details; I just wanted to show that SSL also applies to audio, it's a very broad paradigm, and I believe the current state of the art in audio classification is wav2vec 2.0, which came out a few years ago.

Now, going back to word2vec: we're predicting a single word from a bit of surrounding context, only a few words at a time. What if we predicted a word from the entire sentence it's part of? As you might imagine, this could work better, because the sentence gives you more context than just two surrounding words. You can take this even further: instead of predicting a single word from a sentence, you can predict multiple words. You mask out several words, and the goal is to predict the masked words from the rest of the sentence: if the sentence is "a quick ___ fox jumps over the ___ dog," your goal is to predict "brown" and "lazy" from this input. It turns out this is a really effective approach for learning word embeddings, and there's a very famous model in NLP called BERT that takes it to the next level. BERT is a Transformer model (you don't need to know what Transformers are yet; we'll have a lecture on that later in the course) that takes in sentences with masked words and predicts what those masked words are. It actually goes a step further: it takes in two sentences instead of one, predicts the masked words in both, and at the same time predicts the order of the two sentences, since if you sample two sentences from a paragraph, one comes before the other. So it learns both word-level and sentence-level representations. BERT was a huge success in NLP, and I think it's what re-kindled interest in self-supervised learning in CV: the idea had died down a bit around 2015 to 2017, but people took more and more interest after they saw BERT work so well in NLP. Another reason I included this slide is that the current state of the art in CV is actually very similar to BERT, so consider that a teaser for the lecture where we discuss advanced self-supervised techniques for CV.

Any questions about any of these approaches? That's pretty much it for the lecture. To wrap up, we went on a whirlwind tour of representation learning, transfer learning, and self-supervised learning, and covered a bunch of concepts, terminology, and methodology. This specific lecture doesn't have its own homework, but there is a homework for this cluster, the homework one notebook, which is due next Tuesday. And even though this lecture has no homework, there will be a lecture on advanced SSL for CV that does, so if you ever need to review the topics from this lecture to work on that homework, this slide deck will be up on the website. That's it for today.
CS 198-126 (Modern Computer Vision, Fall 2022, UC Berkeley), Lecture 11: Advanced GANs
I think we can get started. Today we're going to be talking about GANs, specifically in the context of computer vision, and walk through a couple of interesting papers. GANs have fallen somewhat out of vogue over the last four years or so, but they're still interesting, and they can give us really striking images, like the ones on the right, when trained on massive datasets like ImageNet. We'll talk a little about how you would go about constructing an architecture for a computer vision GAN, and then cover three papers (they're a little out of order on the slide): StyleGAN, then CycleGAN, then BigGAN. You don't have to worry too much about StyleGAN or BigGAN as long as you feel comfortable with the idea of building a GAN architecture for vision, and I think CycleGAN is fairly easy to hang in there for; if that's all you walk away with, that's totally fine. Those are the larger ones I think are most interesting.

We're going to get pretty into the weeds with StyleGAN, just because one of my big wants for this course was to really dive deep into specific architectures on a couple of occasions, so we'll go pretty hard in the paint on that. If there are questions, comments, or concerns about any of it, feel free to stop me; I think we'll have no trouble getting at least through CycleGAN by the end. Hopefully that sets your expectations for what you should take away from this.

So, basic GAN architectures: how do we make something that produces an image like this? This was a particularly interesting-looking face from one of my past projects, and you can actually generate faces like it using homework three, which is out now. The basic idea is that your discriminator is just some kind of classification CNN, along the lines of what we've seen throughout this course. When the input is an image, a discriminator is doing classification at the end of the day, so it's nothing you need to worry about much: a simple classification CNN that slowly downsamples, increasing the number of channels and decreasing the height and width of the feature map over time. Use whatever you want: pooling, batch norm, any of the usual deep learning tricks; it's just a classifier. The generator is a little more difficult, because we need to get from a latent vector, just some random noise, all the way up to a height-by-width-by-3 image. We need to figure out how to upsample, and that's more challenging, so we'll focus on the upsampling side of things.

Downsampling, again, is just a vanilla CNN. This is a particularly nice graphic I found that illustrates a convolution: the gray window sliding around is our filter, the blue is our input feature map, and the little white cells around the edges of the input are
just padding, while the green is the output feature map. We're going to use more graphics in this style in a minute, which is why I brought it back up; I know you all know what convolutions are at this point. So: downsampling is just convolutions.

The most naive way to upsample is nearest-neighbor upsampling. You take each cell of the feature map on the left, cut it into quarters, and whatever value was there stays there: a one in the top-left corner becomes four ones in the top-left corner, a two in the top-right becomes four twos, and so on. You basically duplicate every cell of the existing feature map into its new neighbors. This is a pretty trivial way to go about it: you're not really getting higher resolution, the result still looks incredibly blurry, although you can follow up with a regular convolution to process that blurry feature map.

We can do a little better with bilinear upsampling. We blow the feature map up, placing the original values (our one, two, three, four) at spread-out positions in the new, larger grid, and instead of filling in the gaps with copies of whichever value we started with, we fill each gap with the average of its nearest neighbors: the cell between a one and a two becomes one and a half, and a cell surrounded by several values becomes the average of all of them. You can see the effect in the figure: the particularly bright yellow cell gets blurred a bit when you average it with its neighbors. It's a slightly more intelligent way to fill in the new, empty feature map.

Probably one of the most common approaches is the transposed convolution. On the left-hand side you can see a standard convolution. A transposed convolution with stride one ends up looking like a regular convolution where you've added so much padding that, by the time you slide the window over it, the output feature map is bigger than the input. And if you add stride, you get this interesting pattern where gaps are inserted between all the elements of the original feature map: the stride effectively gets applied to the input, dilating the input feature map that you then convolve over. This is
a slightly more intelligent way to upsample, because we're still using a kernel, we can still do some processing on the input, and we can get something more meaningful than just a blurry scaled-up feature map. These are the three most common upsampling methods I've seen in the literature. You can use them pretty much out of the box any time you need to make the height and width in your generator bigger, say by a factor of two; pick any one of these and you'll get okay results. GANs are fickle enough that just trying different options can eventually yield good results on one of them randomly. A rough sketch of all three is below. Are there questions on these mechanical details of how we upsample our feature maps?

Question about the stride: if the stride is one, I believe it still just looks like a regular convolution, but with stride two we take the input feature map and add gaps between every element in it, and then do a convolution over that. Does that make sense? In both of these examples, the input to the transposed convolution layer is just a little 2-by-2 feature map, and when we add stride, all the little grid squares of the feature map get split apart so there's a gap between neighbors, and then we run a convolution on that.

Question about where the name comes from: I don't know exactly. It's sometimes called a deconvolution, and although mathematically those are supposed to be different things, in deep learning people use the terms interchangeably. My understanding of "transposed," which might be imperfect, is this: if you have a particularly wide matrix and multiply it by a vector, the output is smaller than what you started with; if you transpose that matrix so it's tall instead, multiplying by a small vector gives you a large output. So it refers to the idea that, whereas a convolution usually takes in something bigger and spits out something smaller, the transposed convolution works the other way around: it takes in something fairly small with the intent of making it bigger, like multiplying by a non-square matrix that's been transposed. There are more rigorous mathematical definitions for all of these, but in deep learning the idea is just to find some interesting, differentiable operation and run with it, whether or not it's identical to something you'd find in a math textbook. This is just an operation we've found works okay for blowing up feature maps. Any more questions?
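Here is a rough PyTorch sketch of the three options, under the assumption of a 64-channel, 16x16 feature map; the exact layer choices are illustrative, not prescriptive.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 16, 16)   # (batch, channels, height, width) feature map

# 1. Nearest-neighbour upsampling: duplicate each cell into a 2x2 block.
nearest = nn.Upsample(scale_factor=2, mode="nearest")(x)                        # -> 32x32

# 2. Bilinear upsampling: fill the new cells with averages of their neighbours.
bilinear = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)(x) # -> 32x32

# Either is usually followed by a regular convolution to clean up the blur.
refined = nn.Conv2d(64, 64, kernel_size=3, padding=1)(nearest)

# 3. Transposed convolution: a learned upsampling layer (stride dilates the input).
up = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)(x)          # -> 32x32
```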
Now, if we have a conditional GAN setup, things are a little trickier, because the input to the generator is no longer just a latent vector; it's a latent vector plus a condition vector, the vector that tells the generator what exactly we want it to generate. The generator takes in a latent plus a one-hot or multi-hot vector saying, for example, "I want this to be a three," or, in the case of the homework where you generate faces, "I want this person to have a goatee" or "I want this person to have brown hair." You can select multiple attributes; the point is that this vector specifies what we want the output image to be. So the question is: what do we do now that we have two vectors? One lazy thing you can do is simply concatenate them, latent and label together. You can also process them separately, starting with some upsampling or deconvolutions on each independently for a while, and then concatenate them later. For a discriminator that takes in a label and an image, you can broadcast the label so it gets concatenated with every spatial position in the image, and then send that to the discriminator. These are somewhat lazy, not particularly intelligent ways of doing it, but they get the job done; a concrete sketch of passing both things into the generator and the discriminator is below. That was a lot of words; do people have questions on that idea?
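A small sketch of the lazy "just concatenate" approach (placeholder shapes, my own illustration): the generator gets the latent and the condition stacked into one vector, and the discriminator gets the condition broadcast over the image as extra channels.

```python
import torch

B, latent_dim, num_classes = 8, 128, 10
z = torch.randn(B, latent_dim)            # random latent
y = torch.zeros(B, num_classes)           # one-/multi-hot condition (placeholder)
images = torch.randn(B, 3, 64, 64)        # real or generated images (placeholder)

# Generator input: concatenate latent and condition into one vector.
gen_input = torch.cat([z, y], dim=1)                       # (B, latent_dim + num_classes)

# Discriminator input: broadcast the condition to every spatial position.
y_map = y.view(B, num_classes, 1, 1).expand(B, num_classes, 64, 64)
disc_input = torch.cat([images, y_map], dim=1)             # (B, 3 + num_classes, 64, 64)
```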
In that case, let's move on to StyleGAN. Again, this part is mostly for people who really care about architecture; I think architecture is really cool, and I like it when a paper gives a really nice architecture design and goes into the weeds about what it is and why they did it. That's what these slides are. If that's not your thing and it starts going over your head a little, that's totally fine, but also, if I'm confusing you, feel free to ask questions. Hopefully this gives you an in-depth look at one of the more popular GAN papers of the last few years, StyleGAN, and what it's doing under the hood. Fair warning given.

StyleGAN is basically just a new architecture for the generator, a generator with some artistic flair in its design, and it's motivated by a few desires. First: we feed latents into our generator, but before we start doing deconvolutions, blowing it up, and making feature maps, it would be nice to do a little preprocessing on the latent vector first. Maybe the raw latent isn't immediately interpretable by the generator, so we want a way to take this random noise and put it into a format that makes more sense for the generator, giving it a chance to preprocess the latent before any convolution operations happen. Second: we want the latents to be more closely connected to all the convolutions in the network. We have a whole stack of layers in the generator and we only pass the latent to the first one; after that, no layer sees it directly, and we just have to hope the information gets passed up through the layers. Since the latent really controls the style, what the output image looks like, it would be nice to feed it in so that every convolution has some access to it, to make sure the style encoded in the latent actually has an effect. Third: we want better textures. On the right-hand side of both of these images there isn't a whole lot of texture, everything is fairly smooth; we'd rather have what's on the left, where all the individual hairs stand out, along with the dimples and all the imperfections of a human face, all the texture that makes something look lifelike. At the end of the day, texture is really just random noise, a pattern repeated in small randomized variations, so we need to make sure the model has access to a ton of noise, lots of sources of randomness. Otherwise the model is basically responsible for pseudorandom number generation, which is a hard task for it to learn: making texture that looks genuinely random is not easy, and it's a lot easier if we just give the model a source of randomness it can access at all times. Those are the three things StyleGAN is trying to get at. Questions about these sorts of issues GANs can have?

OK, let's keep going. This is the architecture; we'll dissect it in a second. On the left-hand side is a normal generator: take a latent vector and repeatedly scale it up, normalize, convolve, normalize again, then repeat with more upscaling, convolutions, and normalization. On the right-hand side we do some more fun stuff, but the thing I want you to recognize first is that the pattern on the left, slowly upsampling, applying a normalization layer like batch norm, convolving, normalizing, is still very much repeated on the right. We start with some input (don't worry about the input too much), and then it's repeated convolutions, a special kind of normalization called AdaIN that we'll talk about in a minute, upsampling, more convolutions, more normalization, over and over. The core of the network is still very similar, so although it looks complicated, don't be too frightened.

So what is this thing on the left here? This addresses the first desire: preprocessing the latent. We take in some latent vector, and the intuition for why we might want to preprocess it is given in the paper. Suppose we have a two-dimensional latent vector and the model interprets the first entry as gender (most face datasets label male and female, so that's what we're dealing with here) and the second as long hair versus short hair. You'll notice that in the data it's statistically less likely for there to be men with long hair, so a whole chunk of the latent space is basically unusable.
image of like a male with long hair um our discriminator is much more likely to just notice that like oh this is less likely it's more likely to be fake because their data set doesn't have that many dudes with long hair um so we want to give it a chance to like somehow take these these inputs these latents that are otherwise unusable and figure out how to somehow create a uh or map them to some kind of code that is more meaningful um more uniformly likely so that's sort of that's sort of the idea the motivation behind wanting to pre-process our ladies um so that's just what it is we've taken our lady and we're just passing it through a whole bunch of dense layers um eight of them to be exact um until we have a new latent uh modified latent called W yes friend yeah and I mean this sort of breaks down to a little bit because if you imagine that it's like a conditional if our latent is like our random noise plus like a condition too um I mean it's additionally problematic if you're sampling labels that are like out of distribution um if that makes any sense like if you're asking your generator to try and generate men with long hair and then somehow pass that off that's a that's a fault of your own for doing that at training yeah yeah you could uh that's except I was just trying to to motivate that maybe this example that they gave isn't perfect but but yes you you understand the point of it uh which is yeah you want to somehow create like a new version of your latent that's like yeah just more meaningful um so so yes we have our our more processed version of our original latent now called W um and now like what's going on what's going on here what is this madness so a d a i n stands for adaptive instant formalization so we talked a little bit earlier about like bash Norm sort of helpful or you're basically going to take everything on every single activation map in your CNN and you're going to to normalize it so we have like an activation map we have say a batch of first activations um and we're basically just going to normalize it just take every single element divide by the standard subtract off the mean divide by the standard deviation we have a whole bunch of activations and we're just yeah normalizing them um what adain does is instead of with batch Norm we had taken our uh activation X subtracted off the MU uh the mean of x divided by the standard deviation of x we had learned some values gamma and beta to rescale and re-bias by we had like these were parameters for our model um a d a i n instead just takes these as inputs so we're not going to learn them now we're just going to take it as a second input to the layer so we're going to do is our style Vector W is going to dictate how much we rescale by and how much we rebuy a slide that's the that's the tldr it's it's sort of weird um it's a weird way of introducing style into the network um the only act the only access point our entire network has for taking in our style Vector W is in the way that we we renormalize all of our features and it's like super weird um but does the sort of idea of instead of learning gamma and learning beta does the idea of like taking it in as a second input uh like y s y v make uh some sense to people instead of instead of learning the parameters by which we rescale and re-bias um just take it as a secondary input um to that layer um if if not it's not the end of the world um it's a it's sort of an arch concept but this this concept that we're basically only ever introducing our style into the network by 
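To make the mapping network and AdaIN concrete, here is a minimal PyTorch sketch. It is my own illustration rather than the official StyleGAN code: the module names, the layer widths, and the use of InstanceNorm2d plus a single linear layer for the affine are assumptions that simply mirror the description above.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Maps a raw latent z to an intermediate latent w (hypothetical sizes)."""
    def __init__(self, z_dim=512, w_dim=512, n_layers=8):
        super().__init__()
        layers, dim = [], z_dim
        for _ in range(n_layers):
            layers += [nn.Linear(dim, w_dim), nn.LeakyReLU(0.2)]
            dim = w_dim
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

class AdaIN(nn.Module):
    """Instance-normalizes x, then rescales/re-biases each channel using w."""
    def __init__(self, w_dim, num_channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels)       # (x - mean) / std, per channel
        self.affine = nn.Linear(w_dim, 2 * num_channels)  # w -> (y_s, y_b)

    def forward(self, x, w):
        y_s, y_b = self.affine(w).chunk(2, dim=1)          # scale and bias from the style
        y_s = y_s[:, :, None, None]                        # broadcast over H and W
        y_b = y_b[:, :, None, None]
        return y_s * self.norm(x) + y_b
```

Usage would look something like `w = MappingNetwork()(torch.randn(4, 512))` followed by `AdaIN(512, 256)(features, w)`; the point is only that the scale and bias come from w instead of being free parameters of the layer.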
As the authors say in the paper, it is quite remarkable that this even works, that simply rescaling and re-biasing all of our feature maps by amounts dictated by the style produces meaningful results, but it certifiably does. You take your style vector, pass it through one fully connected layer, and use the output to rescale and re-bias all of your activations. If the idea of taking the scale and bias in as a second input, instead of learning them as parameters, doesn't click right away, that's totally fine; it's a weird operation, and it's even weirder that it works. The takeaway is: the style dictates how we rescale and re-bias all of our feature maps. We can talk more about it afterwards.
The last goal was increasing access to randomness. We generate feature maps that are purely random noise, and at every single layer we add that noise to our feature maps, letting the network scale the noise up (multiply it by a large constant) or down (by a small one) at any point. All of these B values that multiply the noise before it's added are learned, so the network can say "I really want a lot of noise here, I could generate better texture with it," or "my features are good here, turn the added noise off." At every stage, after every convolution and before we normalize, we add a controlled amount of noise that the network can dial up or down. This is what lets us generate better textures, imperfections, wrinkles, individual hairs, that sort of thing. Questions about adding this controlled random noise onto all of our feature maps?
A student asked how the noise is actually added, and this is one of the things I had to dig through their code to figure out. Say we're midway through the network with a whole stack of feature maps (ignore batching, just one image). We take in one feature map's worth of noise and broadcast it, duplicating it over every channel, and then each channel's copy gets scaled by its own learned scalar; call them B1, B2, B3, little scalars that decide how much each feature map's noise gets scaled up or down. Each layer gets its own fresh noise after every single convolution, and its own set of scalars controlling how much of that noise is added. I'm glad it was worth it to read their code.
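Here is a small sketch of that noise-injection idea in the same hypothetical PyTorch style; again this illustrates the mechanism described above rather than the reference implementation, and initializing the per-channel scales at zero is my own choice.

```python
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    """Adds a single-channel noise map, broadcast to all channels, with one
    learned scale per feature map (the 'B' scalars from the lecture)."""
    def __init__(self, num_channels):
        super().__init__()
        # one learned scalar per channel; starting at 0 means noise begins "off"
        self.scale = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, x, noise=None):
        b, c, h, w = x.shape
        if noise is None:
            # one map of fresh noise, broadcast over every channel
            noise = torch.randn(b, 1, h, w, device=x.device)
        return x + self.scale * noise
```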
Are there more questions? All right. The main takeaways from StyleGAN (I know the architecture looks insane; the next paper we cover will be more fun): the innovations are, first, that the latent vector we take in is put to much better use, since we can pre-process it. Second, adaptive instance normalization slowly introduces style throughout the network: after each of these lines denoting a convolution, the latent gets fed in and influences the model, so the style vector that tells the generator what the output should look like doesn't get lost; we have a guarantee that this information reaches, and can be used by, every single layer. Third, we have lots of sources of randomness: a ton of random noise, whose magnitude the network controls, added after every step, which gives far better texture. Some random factoids for people who care about architecture: we didn't get to WGAN last lecture, but StyleGAN is trained either with WGAN with a gradient penalty or with the vanilla GAN non-saturating loss plus R1 regularization, for anyone who happens to read last lecture's slides. As someone asked, the noise starts out as one channel and is broadcast to all the feature maps, and the discriminator is a standard one.
Then there's StyleGAN2: thispersondoesnotexist.com will generate random faces for you, and these are some of the faces it produces. It was already true of StyleGAN1, but you can see the individual hairs; texture is vastly improved by the random noise the network has access to. StyleGAN is a pretty big step up in the quality of image generation we're getting. More questions?
Someone asked why it's called a "style" GAN. I think the reason is that the latent really does correspond to the style of the output: facial features, hair color, the texture and curl of the hair. The main thing about this architecture is that it really uses this processed style vector w to dictate how the output looks. That was my takeaway, anyway; it might not be the official reason. Other questions?
All right, CycleGAN: much easier, much more fun. The basic idea is that we figure out how to take something in one style and transfer it to another. We take realistic photos and turn them into Monet paintings, or go the other way from a Monet painting to a real-life photo; we turn zebras into horses and back again, winter into summer. You're transforming images from one dataset to look like they came from another dataset of images, which in my opinion is significantly more fun. The setup is still somewhat standard: we're going to have two discriminators and two generators.
If we're doing zebras to horses and horses to zebras, one discriminator judges "does this look like a real zebra" and the other judges "does this look like a real horse," and the generator's job isn't to produce a horse or a zebra from random noise; it's to take a horse and turn it into a zebra, and then the zebra discriminator says whether that looks real or fake. Besides the standard GAN loss, where the zebra discriminator tells your horse-to-zebra generator it's doing a bad job and should do better, we add one other term, the cycle loss: take a picture of a zebra, turn it into a horse, then take that image and try to turn it back into a zebra. If our generators are doing their job, what we end up with after going to a horse and back should look like what we started with, and you can simply compare pixels to see how far off we are. So it's two loss terms: one for how realistic our zebras and horses look, and one for whether we get back to where we started when we cycle from a zebra to a horse and back to a zebra. Are there more questions about this?
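A minimal sketch of those two terms for one direction (horse to zebra and back) might look like this; the function and network names are hypothetical, and the particular choices of a least-squares adversarial loss, an L1 cycle loss, and a weight of 10 are common defaults rather than something stated in this lecture.

```python
import torch
import torch.nn.functional as F

def cyclegan_losses(G_h2z, G_z2h, D_zebra, real_horse, lam=10.0):
    """One direction of the CycleGAN objective (horse -> zebra -> horse).
    G_h2z and G_z2h are generators; D_zebra scores how real a zebra image looks.
    The full method adds the symmetric zebra -> horse -> zebra terms."""
    fake_zebra = G_h2z(real_horse)

    # adversarial term: the zebra discriminator should score the fake as real (1)
    score = D_zebra(fake_zebra)
    adv = F.mse_loss(score, torch.ones_like(score))

    # cycle-consistency term: translating back should recover the original pixels
    reconstructed_horse = G_z2h(fake_zebra)
    cycle = F.l1_loss(reconstructed_horse, real_horse)

    return adv + lam * cycle
```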
You can run it on individual frames of a video, but unfortunately there's no consistency between frames; if you watch the demo video you'll see the stripes start moving around, and I can't control when it restarts here. I still think it's interesting that it works at all frame by frame and does a half-decent job on video. It's a significantly sillier, more fun version of a GAN. More questions?
Someone asked why the grass changes color, which is a really good observation. Think about the distributions: where do zebras live? Often on savannas, where the grass is consistently much more yellow. If you're the discriminator and you look at an image with really green-looking grass, you can say "this did not come from a savanna, this is not a zebra photo, I don't care whether it has stripes." That gives the discriminator a way to correctly classify real versus fake zebras, so to account for it, the generator has to learn to change the background to look like a savanna too. The shift in distribution between horse images and zebra images includes the backgrounds.
Another question was about the video itself. If I'm not mistaken, they just took a video of a horse and changed it to a zebra frame by frame, which is exactly why there's no consistency between features like the stripes. Could you train the discriminator on video instead? Sure; the idea of a GAN is just that a second network judges whether the output looks real or fake, so if you had a bunch of videos of horses and zebras in the wild, there's no reason you couldn't. It's just a lot more compute: however much work it took to process one frame with the discriminator, you now have to do that across all the frames. People definitely do work on this, it's just less common, or maybe I'm just not tapped into it. Sequences are hard because of the compute; you're probably not going to process every single frame, maybe every fifth frame or so, and you need to figure out how to do it efficiently, because videos are just big. There are also clever tricks: there's a GAN called PatchGAN where the discriminator only looks at small patches instead of the whole image (I should have covered that too), and looking at patches of video would be cheaper. But there's no reason you couldn't have a discriminator that sees videos of real horses and our fake videos, and at that point you probably would start to see frame-to-frame consistency in the generated videos, because otherwise the discriminator could say "those stripes are moving, fake."
And monkey to horse? I think for the purposes of their demo and getting a conference paper published they selected very clean examples; I suspect it would not work nearly as well, if at all. I've never looked at a monkey and thought it looked like a horse, so I think it would have a lot of difficulty. It could probably take an image of a monkey and generate some horse from it; my guess is it would treat the monkey's pixels and the random variations in its texture as the source of noise it needs, the way we fed a noise vector into our generator. But getting back to the original monkey is not going to happen. You do need to be careful with this, and it's the same reason we can get away with it for paintings and winter-to-summer. More questions? All right, thank you all for the good questions; I think it's a cool and fun paper.
Then, to end it off, BigGAN, which is where the images at the very beginning of lecture came from. The question is: how do you train a conditional GAN on something like ImageNet, which has a thousand different classes? The vector we condition on, the one telling the model what we want in the output, is a thousand entries long, which is a pretty tall order. So this paper analyzes how GANs scale.
It looks at what happens when we throw more data, bigger models, and bigger batch sizes at GANs, and talks about trade-offs between stability, reliability, and quality, as well as the variety of images you can generate. They also use an interesting architecture that we won't cover, and like StyleGAN they care about making the latent accessible to the entire network. The big reveal of scaling: bigger models, more data, and bigger batch sizes do better, who'd have thought. The single best way to get rid of training instability is to use bigger batch sizes; that buys you better quality and greater stability on complex tasks. It's true pretty much everywhere, but GANs especially benefit from it, and the same holds for the diffusion models you'll start learning about next lecture; I believe Stable Diffusion used a batch size of something like 8,000, some insane number. Bigger batch sizes increase your stability, and at a certain point they're basically a must if you want really good results.
They also have a little trick for getting better results at test time. Sometimes at test time we sample latents that are way out on the tails and are super unlikely: if our latent vector is just gaussian noise, where every entry is drawn from a normal distribution, and we happen to draw a vector with a bunch of entries far out on the tail, the model has probably never seen anything like it and won't give very good results. What do we do about that? The trick is: at test time, when you sample entries that are way out on the tails, just resample, try again, until everything is much closer to the middle of the distribution. Noise closer to the mean is far more likely to resemble something the model saw during training, so the model will do an okay job on it. So if you want to hack your way to slightly better test-time results, give the model latents squished toward zero; effectively, random noise with smaller magnitude. However, it comes with a trade-off: if we only sample noise in a small range, we won't get as wide a variety of images, and that's exactly what we see. In their figure, with very little truncation on the left you get a really wide variety of images, and as you truncate more and more, mandating that all the noise stays in a small zone, the images stop being varied much at all.
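A toy version of that resampling idea is below; the threshold value and the resample-until-in-range loop are my own simplification of the trick, and the paper's exact procedure may differ.

```python
import torch

def truncated_latents(batch, dim, threshold=0.5):
    """Sample gaussian latents, resampling any entries whose magnitude exceeds
    `threshold` so everything stays near the center of the distribution."""
    z = torch.randn(batch, dim)
    mask = z.abs() > threshold
    while mask.any():
        z[mask] = torch.randn(int(mask.sum()))  # redraw only the out-of-range entries
        mask = z.abs() > threshold
    return z
```

A smaller threshold squeezes the latents harder toward zero, which (as in the figure) trades sample variety for fidelity.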
The right-hand image in the figure shows that the trick doesn't always work: if your model isn't well regularized and is generally not good on latents it hasn't really seen before, this still fails to some degree and you get these nightmare-fuel dogs. So it comes with the caveat that your model needs to be well regularized to begin with, but it can get you slightly better results at test time.
They also talk a little about training instabilities in the paper. The TL;DR is that if you constrain your discriminator so that it always has super well-behaved gradients and can never dominate the generator in the GAN game, if you try too hard to make your discriminator behave well and play fair, you're not going to end up with good results. To some degree you have to accept that at a certain point your GAN will collapse and one side or the other will just win the game outright; you have to accept some amount of training instability if you want good outputs. It's a trade-off and it's imperfect, but good results won't come from nerfing your discriminator too hard. That's the last takeaway from BigGAN. Questions on this?
Someone asked what would happen if you trained with truncated latents from the start. It would probably exacerbate the problem a little, where latents out on the tails do really poorly. The other thing is that if your model only ever sees really similar latents at the beginning of training, it will probably have a lot of difficulty generating samples with good variety, and the discriminator might just notice that the generator is producing essentially the same image with small modifications every time. It's an interesting idea; my guess is it would cause a few more training instabilities, but it might work, and it might also be interesting to go the other way around, starting with a really wide distribution so the model has to perform well on latents that are very large or very small and then slowly narrowing it down. I'd be curious to see whether that works. More questions?
Okay, I think we've hit everything. Takeaways: StyleGAN is about making sure your latents are accessible all throughout the network, and making sure you have access to lots of random noise for very nice textures and things like that. CycleGAN, where I think we had a lot of really good conversation, shows that this idea of using a separate network, a discriminator, to judge how real your outputs look can be applied in all kinds of wild ways beyond generating single images one-off from random noise. And BigGAN caps it off with what it takes to make GANs do well at very large scale. I think that's about it; if people have questions, feel free to come on up.
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_8_Semantic_Segmentation.txt
All right, today we're going to be covering a continuation of Tuesday's lecture: image segmentation. So what is image segmentation? A quick review of Tuesday: we started with a classification network, which takes an image and tells you what's in it. We moved from that to classification with localization, which tells you what's in the image and where it is. And finally we went to object detection: multiple objects in an image, and for each one, what it is and where it is, all individually.
The continuation of that is segmentation, and we break it into two types: semantic segmentation and instance segmentation. The basic idea behind semantic segmentation is that instead of just finding a bounding box for an object, we want to find the specific pixels where that object is present in the image. The continuation of that is instance segmentation, which answers the question "where, specifically, is each instance of each class in this image?" If we have a photo with a bunch of people in it, it will not only tell you exactly which pixels correspond to people, but also which of those pixels correspond to each individual person. And we can have multiple classes: an image with a bunch of people and a bunch of dogs, where we can tell apart each individual dog and each individual person, plus the background, and so on.
To give an example: we might have an image with a person, grass, trees, and sky, and the model picks out all of those components and segments them. Notice that we're starting with semantic segmentation, so it can't differentiate between each individual cow here; it just classifies those pixels as "cow" and marks them as such. That's where we'll begin. But the eventual goal is to detect everything: look at an image and break it down into all of its separate components, like this image where all the individual people are detected separately and each of the many boats is detected separately as well. That's the ideal case, what we're really trying to accomplish.
So why is this useful? It may seem like a lot of work, and it seems like quite a step up from picking a little box in an image to classifying the exact positions of everything.
So I hope I can convince you this is useful. First of all, there's the idea of the Gestalt principles, if you're familiar with them. The basic idea is that a lot of different factors go into how humans perceive objects. Look at this image of what I think are little balls of yarn: you could see each individual ball, each with its own colors, but what your brain actually processes is the overall gradient. The whole idea is that the whole image is more than just the collection of its individual components; we want to pull all these pieces together and come up with meaning from that, so being able to identify and classify the components of an image more precisely is really useful.
Another aside I find interesting: even determining what an object is, the way we perceive things, is a more challenging question than you might think. Look at this image: it's just a bunch of weird curved lines everywhere, but you probably see a circle, because the lines happen to change color along a circular boundary. How could a network come up with these perceptual groupings? Or the example on the right, a little set of shapes moving around that I'm sure you all perceive as a dog, even though it is objectively not a connected dog-shaped region. If we went back to R-CNN from Tuesday and drew bounding boxes around the components, the "dog" isn't actually connected; it's a bunch of separate sub-components that come together. The point of all this is to motivate segmentation and why it's a useful thing to look into: if we can identify all these components, and more generally connect classifications and do this kind of higher-level grouping, that becomes a really powerful ability to have.
So, after all that abstract talk, the basic idea is just this: segmentation tells you which pixels go together. Take an image and tell me these pixels belong to the same thing and those belong to something different.
How do we approach segmentation? There's actually a piece of this we brushed over last lecture. In the R-CNN section, when we talked about region proposal, we essentially said we would use some magical classical
region proposal method that would look at an image and propose boundaries that might be good regions to classify on. So now let's look at what that actually means and how it can be implemented. There are obviously lots of different methods, but the simplest possible way to approach it is as follows. The goal is to create connected segments by grouping pixels based on some similarity criterion. In this playing-card image, maybe we want to group together the cards, or the individual spades printed on a card; we want to group visually distinct objects in a maybe not-so-distinct image.
One way to do this is to define a function by which we decide whether to group pixels together. Take a point (x1, y1) in the image and a neighboring point (x2, y2), and some function that takes a point and returns a similarity value; if the difference between the two pixels' values is below some epsilon, we group them together. That's a very simple, classical approach to this problem. There are lots of different similarity metrics you could use; a classic one is intensity, which you can loosely think of as the brightness of a pixel (not the technical definition, but close enough). For example, this pixel and the one next to it are somewhat different in color, but their overall brightness is similar enough that the difference falls below the threshold, so we merge them into the same group, consider them part of the same segment, and flood-fill outward from there. We might end up with something like the thresholded image on the right, where the white region is all the pixels that have been grouped into one segment. There are lots of other ways to approach this, but that's the very basic classical algorithm. Now we'll look at some newer, deep-learning approaches.
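Here is a toy version of that threshold-and-flood-fill idea in Python; the function name, the 4-connected neighborhood, and the choice of epsilon are just assumptions for illustration, not a specific algorithm from the lecture.

```python
import numpy as np
from collections import deque

def region_grow(gray, seed, eps=10):
    """Grow a segment from `seed`: add any neighboring pixel whose intensity
    differs from its neighbor by less than `eps`. Returns a boolean mask."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                # compare as ints so uint8 images don't wrap around
                if abs(int(gray[ny, nx]) - int(gray[y, x])) < eps:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask
```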
One learned way to do segmentation is to go back to the sliding-window idea from Tuesday, except this time, instead of sliding windows to classify a bounding box, we slide a window across the image and the CNN's only goal for each window is to classify the middle pixel of that window. If the red square is sitting on some patch of the image, we run classification on that patch and output what we think the center pixel is. Then we slide the window across the whole image, and each time it outputs a class we write that value into the corresponding pixel of our output segmentation map: this pixel says sky, this one says tree, tree, tree, this one says cow, cow, and so on, until we've built up the whole segmentation map.
But as we saw last time, sliding-window approaches are just very inefficient: we're recomputing a bunch of shared features, running classification over the same areas many times. How do we fix this? We talked about this on Tuesday too, but here it's a slightly different flavor: we can make the whole thing convolutional end to end, one convolutional network run once over the image that ideally performs everything we want without the sliding window.
There's one caveat to this setup. Last lecture the output had a fixed size: for classification with localization it was one class, one (x, y) coordinate pair, and one width-height pair for the box. With segmentation, we want the output to be the same size as the input, with every single pixel telling us what object it belongs to. One nice property of the end-to-end convolutional approach is that the whole system is agnostic to the input size. That may be confusing: if we plug an image into the network, isn't it expecting a certain number of inputs? The way to think about it is that a convolutional layer is just a filter, a kernel, sliding along the image by its step size; if we just keep running that filter, there's no required size. As long as we're not shrinking the image with pooling layers or anything like that, we can run the filter over whatever we're given.
Looking at the actual numbers, the input image is three channels (red, green, and blue) by width by height; when we run a convolutional layer over it, we still get a width-by-height volume out. It might be deeper or shallower depending on how many channels we use internally, but we're just passing the filter over, so we don't have to worry about the size changing, and the predicted output maps one-to-one with the original image.
A student asked what happens at the edges. One thing we can do is use padding. Take a trivial example with a 2x2 image and a 3x3 filter: if we pad the missing pixels around the outside (say with zeros), then as the 3x3 filter slides with stride 1 it uses those padding values and the output doesn't shrink. A quick aside on padding, in case we haven't covered it yet: padding with zeros is a bit weird, because the network effectively sees a black wall at the edge of the image. There are a bunch of approaches to fix this; one is to mirror the edges, duplicating or reflecting the border pixels so the padding roughly matches the colors just inside the image. Good question.
Another question was about what the output actually is. Think of the final output as a volume of width by height by C, where C is the number of classes and each slot along the depth is the classification score for one class; you can think of the labels as a one-hot-style encoding, with a one in the sky slot if it's sky, a one in the cow slot if it's cow, and so on. To get the segmentation map, we take the argmax over the class dimension for each pixel.
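Here is a minimal sketch of that fully convolutional, per-pixel classification idea; the layer sizes and class count are made up and real models are much deeper, but it shows how padding keeps the spatial size and how the argmax over the C channels gives one class per pixel.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Toy fully convolutional classifier: padding keeps H and W fixed, and the
    last layer has one output channel per class."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),   # per-pixel class scores
        )

    def forward(self, x):          # x: [B, 3, H, W]
        return self.net(x)         # scores: [B, C, H, W]

scores = TinyFCN()(torch.randn(1, 3, 64, 64))
pred = scores.argmax(dim=1)        # [B, H, W]: a class id for every pixel
```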
Okay, so the problem with this is that it's pretty expensive to compute at the original image resolution. Say you have a 4K image and a couple dozen convolutional layers (I'm making numbers up): we're still sliding filters over and doing that computation on every single pixel at every layer, which is quite a bit of work as long as we maintain the original resolution.
One solution is to downsample the image gradually and then upsample at the end. What do I mean by that? When we run the first convolutional layer, instead of padding and using stride 1 so the output stays exactly the same size, we can increase the stride a bit and drop some padding so the output is smaller in width and height than the input (maybe a little deeper). Each layer shrinks it further. You can think of it as the network extracting information and removing redundancy as it passes over the image: each level gets more abstract, with early passes ideally picking up things like edges and later passes combining those features into a more high-level understanding. That's all fairly hand-wavy, but the point is we downsample until we're left with a much smaller volume at the end.
Of course, then we have the problem that we want the segmentation output to be the same size as the input: if we put in a 4K image we don't want a two-pixel-by-two-pixel segmentation map, because that's not very useful if the pixels don't correspond. So we somehow need a way to take this small volume and transform it back into something the size of the full image, which is a little weird. How do we do that? How do we upsample?
The classical approach is something you've seen if you've ever used Photoshop or MS Paint: when you scale up an image in MS Paint it gets all pixelated and gross, because (at least by default, or at least it used to) it uses nearest-neighbor interpolation, which is disgusting. When you scale an image up, you can think of it as stretching the existing pixels apart, which opens up space for new pixels in between, and an interpolation function decides what those new pixels should be, how to blend between the pixels that are now separated. Nearest neighbor, the one I was making fun of, literally just copies the value of the nearest original pixel, so a tiny image scaled up becomes giant blocks, since there's no blending at all. You can do a linear interpolation, taking the two edge pixels and filling in the color halfway between them. The default in most image software is, I believe, bilinear or bicubic interpolation, which blends color information from the neighboring pixels both horizontally and vertically, computing a weighted average to fill in the output color. So if we take this small image on the right and stretch it to four times the size, we place the original pixels at the corners and use bilinear interpolation to fill in the missing spots.
This theoretically works; it's not a wrong approach. But what we're really doing is scaling the image up and blurring it a little, as opposed to filling in detail that we're missing. The way we can improve on this is, once again, to move from a classical algorithm to a learned one: we want to upsample, but with some neural component that can learn and adapt to the problem we give it, rather than just blending pixel values.
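For reference, the fixed, non-learned kind of upsampling looks like this in PyTorch; the tensor sizes here are arbitrary.

```python
import torch
import torch.nn.functional as F

low_res = torch.randn(1, 21, 12, 12)   # a small internal feature volume (made-up sizes)

# Fixed, non-learned upsampling back toward the input resolution:
nearest  = F.interpolate(low_res, size=(96, 96), mode="nearest")     # blocky
bilinear = F.interpolate(low_res, size=(96, 96), mode="bilinear",
                         align_corners=False)                        # smooth but blurry
```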
Before introducing the learned version, a quick review of convolutions, because the "deconvolution" I'm about to introduce is pretty easy to understand if you understand convolutions. In a normal convolution you have a filter, in this case 3x3, that you pass over the input: at each position you multiply the input values by the filter weights, sum them up, write the result into the corresponding output pixel, and then shift over by the stride, repeating until you've covered the whole input. Now think about how we could reverse that process: we keep the idea of a kernel passing over an image, but instead of passing over the input and producing a smaller output, we want to pass over a small input and somehow project it through the filter onto a larger output. So, enter the deconvolution. I've literally just flipped the diagram, with the smaller grid on the left and the larger one on the right; instead of passing the filter over the input, you can think of it as passing the filter over the output, moving one input pixel at a time.
I have slides for this, but honestly it's easier if I just draw an example. Say we have a simple 2x2 input with values 1, 2, 3, 4, and we want to upscale it to 3x3 with a 3x3 kernel; to keep the kernel simple, say it's 1, 0, 1 / 0, 1, 0 / 1, 0, 1. We start with the first pixel of the input and place the filter over the output, centered on the location corresponding to that pixel. Then we take the input value, multiply it by each of the filter weights, and imprint those products onto the corresponding output positions (anything that falls outside the output range is just ignored): 1 times 1 goes there, 1 times 0 there, 1 times 1 there, and so on. Then we shift to the next input pixel, value 2, move the filter window over on the output, and repeat: 2 times 1 is 2, 2 times 0 is 0, 2 times 1 is 2, written into the new positions. You may see a problem with this, which is that we'll end up writing to output locations that already have values. When that happens, we simply add the new contribution to what's already there, so each output value becomes the sum of the input values projected through the filter onto that location.
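In deep-learning frameworks this operation is usually called a transposed convolution; here is a sketch using PyTorch's built-in layer with the same 2x2 input and kernel as the board example. The stride and padding values are chosen so the output happens to come out 3x3, but PyTorch's padding conventions won't line up exactly with the hand-drawn version; the point is only the project, overlap, and sum mechanism. In a real network these kernel weights would be learned rather than fixed.

```python
import torch
import torch.nn as nn

# Learned upsampling via transposed convolution: each input value is multiplied by
# the kernel and "imprinted" onto the output; overlapping contributions are summed.
up = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=1, bias=False)
with torch.no_grad():
    up.weight.copy_(torch.tensor([[[[1., 0., 1.],
                                    [0., 1., 0.],
                                    [1., 0., 1.]]]]))

x = torch.tensor([[[[1., 2.],
                    [3., 4.]]]])    # the 2x2 input from the board example
y = up(x)
print(y.shape)                       # torch.Size([1, 1, 3, 3])
```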
So that's a deconvolution. Using it, we can take an incremental approach to the whole problem: start with the large image, run convolutions to downsample it into some low-resolution internal representation (small spatially, maybe with some depth), and then use deconvolutions to slowly bring the size back up. There are two benefits compared with the earlier approach. First, the upsampling is learned, not a fixed classical algorithm, so it can adapt to the problem we give it: instead of just blending pixels into an average color, it could in principle learn to fill in certain edges, or recognize a shape and refine it a little as it upsamples. Second, the reason we upsample slowly over multiple passes instead of in one giant jump is that doing it gradually gives more overlap between the projections, more opportunity to refine things at each stage.
A student asked why the downsampling happens at all: right, in the example we just did, the output would only shrink if you use a stride greater than one and don't pad, and that's exactly what the encoder layers do. Another question was whether there's a separate upsampling function on top of the deconvolutions; no, in this architecture the upsampling is entirely convolutional.
If we take this approach and add one little bell and whistle, we get something called U-Net. The basic idea of U-Net is that it combines the previous improvements with one addition: skip connections. The idea of a skip connection here is that it can pull information from the less-processed parts of the network all the way forward to the end. As the image passes through, we slowly downsample it down to the internal state and then start upsampling, but at each upsampling stage we also pull in the features from the corresponding level of the downsampling path. For example, the very last section of the network can pull in fine detail from the first encoder stage. That's why skip connections are really powerful; they work very well for this type of task.
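Here is a minimal U-Net-flavored sketch with a single encoder stage, a single decoder stage, and one skip connection; real U-Nets have several of each, all the layer sizes are invented, and it assumes the input height and width are even.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Down once, up once, and concatenate encoder features into the decoder."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(16, 32, 3, stride=2, padding=1)          # H/2 x W/2
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)  # back to H x W
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, num_classes, 1))

    def forward(self, x):
        skip = self.enc(x)                    # high-resolution features
        bottleneck = self.down(skip)          # low-resolution internal representation
        up = self.up(bottleneck)              # learned upsampling
        up = torch.cat([up, skip], dim=1)     # skip connection: 16 + 16 = 32 channels
        return self.dec(up)                   # [B, num_classes, H, W]
```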
Modern segmentation models are largely continuations of U-Net and similar approaches. The DeepLab family of models was state of the art for a while, then transformers took over, and more recently transformer-based models like SegFormer have led the results. The main contributions are different methods for better capturing image information at multiple scales; if you want to look at the architecture diagrams later, it's essentially different sets of connections, different rates of downsampling, and approaches along those lines.
Everything we've covered so far has been semantic segmentation: classifying pixels as "cow," as opposed to "this is one cow and that's a different cow." So how do we expand this to instance segmentation? One approach is to go back to our old friend R-CNN and extend it a little. Remember, R-CNN uses a classical method to propose potential segments, crops the image to each proposal, runs classification, outputs the bounding box if it matches, then merges segments and repeats. Mask R-CNN is a continuation of that: once we've cropped a region down, we classify it, but we also add an output mask prediction, running a segmentation method within that region to identify the pixels of that particular object. The idea, then, is that if one bounding box outputs this shape as a person and a different bounding box outputs that shape as a person, we can tell that even though they're both classified as people, they're different people, because they came from different proposals.
You can see some examples of this running: it detects the different airplanes, it detects all the different people, and you can see the mask predictions coming out, the correspondences for each person. Note that the masks are predicted at the scale of the bounding box (because it's R-CNN) and then scaled back up to match the corresponding region of the image. And here's an example of full instance segmentation using Mask R-CNN for a self-driving kind of purpose: it's detecting the outlines of cars, the outlines of people and pedestrians, even the backpack on a person, while identifying each of these people and each of these cars as separate objects. I believe that's about it for the slides; any questions?
One question was why the mask head can be simpler than the fully convolutional setup over the whole image. You don't have to remember all the details, but in R-CNN, when we pick a potential bounding box and try to run on it, we
Question (inaudible): Answer: you can think of the mask head as a little simpler than you might expect. Not that you have to remember all the details, but in R-CNN, when we pick a potential bounding box and try to classify it, we essentially squish it down into a fixed-size box and then run the classification on it. So we're guaranteed a fixed-size input and don't necessarily have to be fully convolutional; there are different ways to make the same prediction, but that's pretty much it.
Question: what are the labels? That's a good question; I'm not actually sure how the data is collected. [TA:] One option is you just have people annotate it by hand, creating the training data for you. Sometimes there are assistive methods, and nowadays models help bootstrap the labels, but manual annotation is still a big part of it. Thanks.
Question: how do you choose the deconvolution matrix? That's learned the same way you learn a filter in a convolutional network; it's just weights.
Question: how does the deconvolution actually work? It's really the exact process I described before: it's literally a projection. You take the filter, multiply each value in the filter by the corresponding input value, and as you iterate the filter over the image you add that scaled filter into the corresponding spot in the output, then keep going.
Question: how is this optimized as a vector/matrix computation? I honestly can't remember off the top of my head; I'll get back to you on that one, sorry.
Question: is it actually inverting a convolution? Not really. I presented it like that because it's easy to think about it that way, but it's not literally the inverse of a convolution operation. And pointwise, between those values, it's a bit hard to define; it's more of a process. I'm sure there are formal definitions and efficient ways to optimize this, but we didn't look into that for this class. The point is for you to understand conceptually what it's doing and trust that the people who wrote the libraries implemented it efficiently.
Any other questions? No? Okay, great. Rohan, do you want to show the demo? [Demo:] This is a quick demo of a model trained on a cloud GPU, an instance segmentation model. All the details are abstracted away; this is just a visual representation of how instance segmentation looks once it's implemented.
We don't even need to train anything here. The COCO dataset has a bunch of class IDs, and this is just a map of those IDs back to their ground-truth label names, so we can reassociate whatever the segmentation model says with a readable label. This is the image we pass in; I think it's somewhere in Japan. All of this code is literally just calling the model, which has been pre-trained. And this is what I wanted to show: an example of instance segmentation. As you can see, it's detecting multiple people in various capacities, and it can also detect things like umbrellas, here with around a 50% confidence score; it's more confident about other things. It looks like there are two people behind this umbrella and it was not able to get the second one.
What's interesting about this model is that you can give it a score threshold to predict on. You can also visit the demo yourself at tinyurl.com/semantic-seg and play around with the parameters. If you increase the threshold to 70%, you're only accepting detections where the model has a very high likelihood of being accurate, so the handbag goes away, most of these people go away, and the only things left are objects the model can very confidently draw a bounding box around above that threshold; this umbrella boundary is almost exact. The beauty of this setup is that you can pre-train a lot of things in the cloud, on AWS or other offloaded services, and bring them together in a notebook like this.
Your homework is also going to be a Colab notebook. I believe the ResNet assignment is mandatory and the U-Net one is optional, but I would highly recommend doing the U-Net one; a lot of it is done for you, and the TAs can talk more about it, so feel free to come to office hours if you have any questions about it. I think it's being released tomorrow night and will be due about a week from now, and everything, including due dates and assignment links, is on the course website. That's pretty much it. You can ask me or any of the TAs if you have questions. Thank you.
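The score-threshold idea from the demo is easy to replicate: the detector returns a confidence score per instance, and you simply keep the instances above the threshold. A minimal sketch, assuming a torchvision-style prediction dict like the one in the earlier Mask R-CNN example (names are illustrative):

```python
def filter_by_score(pred, threshold=0.7):
    """Keep only the instances whose confidence exceeds the threshold."""
    keep = pred["scores"] > threshold
    return {k: v[keep] for k, v in pred.items()}

# e.g. confident = filter_by_score(pred, threshold=0.7)
# Raising the threshold drops low-confidence detections (the handbag, the
# far-away people), leaving only boxes and masks the model is very sure about.
```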
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_14_Transformers_and_Attention.txt
Alright, we can just about get started now. The goal of today is to ease into the motivation behind attention and how that relates to Transformer models, which are very much the up-and-coming, state-of-the-art thing. We're going to spend a couple of classes going over modern Vision Transformers, and Vision Transformers as a whole, and attention is the cornerstone of that, so it's very necessary background.
To give you a quick idea of what attention is before we jump in: if I tell you to look for a certain thing in a book, a certain chapter or a certain phrase, one option is to read the whole book, hope you recall everything, and then point to the specific place where you read it. A more informed way would be to first go to a chapter, then a certain subheading, and then find what you're looking for. That's what attention is doing: it focuses your effort, and your eyes, on a certain location and blurs out the rest, so you can figure out what specifically to focus on during the lookup. Something we'll also talk about is how this uses a query-key-value system to make more informed guesses as to which things to pay attention to. Modern attention is loosely inspired by how humans read: when we read a sentence, everything else is a blur and we focus on the sequential words we're currently looking at.
So attention allows models to focus on the important parts of an input and make an inference using a specific subset of the data instead of taking in everything as a whole. Not all features are equally important, and we'll talk about building a probability distribution over which features are likely to be more important than others. A big example of this is NLP and RNNs, recurrent neural networks, which are often used in NLP contexts. For example, sequence-to-sequence models encode a sequence as a fixed vector and then decode it into another sequence. Translation is a classic example: if I want to translate French to English, I start with a French sequence, encode it, then decode it into English. There are other kinds of models, like sequence-to-vector models, but sequence-to-sequence models have both pros and cons when approached through the lens of an RNN, and we'll discuss them through this lecture. There's a lot of content, so this will go decently fast; feel free to stop and ask any questions you have.
On this alignment plot we have French words on the left and English words on the top, and we're creating a map of correspondences, where the bright diagonal maps certain French words to English words. This is the byproduct of attention: it says these words are more closely related to each other based on their ordering, structure, and position, so these input tokens have certain correspondences to output tokens. The shading differs because not all of these are one-to-one;
but these are our best guesses as to which inputs correspond to which outputs.
There are a couple of different types of attention; the two we're going to talk about are hard attention and soft attention. Hard attention predicts which indices are relevant at a certain time step, so you're looking at all the indices and trying to figure out exactly which ones matter. This is hard to backpropagate through, and it requires a lot of computation, because you're searching the entire input representation for the most relevant tokens. With soft attention, instead of picking specific inputs to focus on, we create a set of soft weights with the softmax function, a probability distribution over the input tokens that indicates their importance, and then do a weighted sum, constantly tuning these weights during training. This gives an expected value of which tokens are more important and blends the representations of the different features. Soft attention is what's generally used and what we'll focus on.
So, translating French to English: when translating "destruction" we want to reference the corresponding French words, and the attention weights get tuned accordingly to the right outputs. That's the meat of the motivation for soft attention. We have a database of features we're interested in, the French reference, and we want to convert it to an English sentence output to make our prediction. We have a sequence of French words as our first sequence and a sequence of English words as the output sequence. So there are two problems: relating French words to English words, and making sure the ordering makes sense, since the order in which we read the French words informs the order in which we decode the English words.
There are two things we need for this. First, a metric of importance: what are we looking for in the input that corresponds to the output? That's matching a query to a set of given keys, represented as Q and K. Second, a weighted sum of features according to those importance weights: how do we weight the different similarities to get an expected value of which words are most correlated? That's getting the value from the key. If your match is very good, the weight on the right key is close to one and you essentially read out its value; if it isn't, you keep tuning these weights as you go through training to figure out which values are more representative than others.
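As a minimal sketch of the soft-attention idea just described (the scores and values are made up; in a real model the scores come from a learned scoring function), the weights are a softmax over similarity scores and the output is the corresponding weighted sum of values:

```python
import numpy as np

def soft_attention(scores, values):
    """Turn similarity scores into a probability distribution, then blend values."""
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()        # softmax over the input tokens
    return weights, weights @ values         # expected value over the values

scores = np.array([2.0, 0.1, -1.0])          # how relevant each input token is
values = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
w, context = soft_attention(scores, values)
print(w, context)                            # most of the weight lands on the first token
```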
This diagram shows how the system works in a traditional encoder-decoder network. Say I have "I am Iron Man" and I want to translate it into French, roughly "je suis Iron Man", so we're translating English to French in this case. We encode each word, and as we build an embedding, a latent representation of these words, each step picks up from the step before it, so this is a sequential model. Then we have a context vector relating h4, which summarizes the entire input sentence, to the decoder, and we decode sequentially: s1, s2, s3, s4, where the h's are the encoder's representations, the context vector maps the full representation over to the decoder, and the decoder turns those representations back into a form we can consume. That's how a traditional encoder-decoder works: the inputs are processed sequentially and the context vector only takes in the last hidden state.
The difference when using soft attention is that the context vector now takes in information from every latent representation; we don't just pass along the last one, we pool all of the encoder states into the context used for decoding. So given s2, which has only the first two tokens decoded, to come up with s3 we take s2, our previous decoder state, and compare it against all of the encoder states. That's why the two inputs to the attention mechanism are an S vector and an H vector, where H is the encoder representation and S is the previous decoder state.
The next step is a score function for the similarity between them, and it can be defined in a variety of ways: a "general" score with a learned attention weight matrix tuned in the attention layer, a plain dot product of s-transpose with the encoder state h, a concatenation followed by a small network, and so on. There's also a location-based variant that applies a softmax to a weight matrix times the previous decoder state only; it doesn't use the encoder states at all, just the previously decoded position information. This slide summarizes a bunch of other scoring metrics and the papers they come from. What all of these give us is a relative importance, a score of relatedness: given our current decoder state, where the only information we have is what's already been decoded, what in the input should we be looking at? So again, s_i is the decoder hidden state and h_i is the encoder hidden state; the s_i are the queries we're looking up and the h_i are the keys, and for each s_i we want its relevance to each h_i. If you think of it like a Python dictionary, you have a bunch of keys with values attached; we introduce a query that we compare to each of those keys, with a dot product or some other scoring function, which gives a weighting that we then apply to the values, and that tells us the relative importance of each entry. So here the query is the decoder
representation, and the h_i encodings are the keys we compare against. The scores are then passed through the softmax function to get the weights, a probabilistic representation of how related the query is to each of the keys. In short: we take our features, calculate attention weights, weight each feature accordingly, and produce a linear weighted sum of all the features.
The example so far was global attention, where every input token is used to generate each output token. That's computationally inefficient, because we do this operation against every key we have. Instead of summing and querying over all inputs, you can query over only a window of inputs. Going back to the book example: if we're searching for a certain phrase, do we need to compare it against every other phrase in the book, or is it enough to look at, say, the five pages before and the five pages after? That can save a lot of computation. So, much like a convolution slides a kernel over the most relevant local region, local attention only queries over a certain window of inputs. Heuristics can pick that window: if I have an input of some length and an output of some length, the i-th output token probably corresponds roughly to the i-th input token. For example, when translating a five-word sentence from English to French, the fifth output token will more or less be in the vicinity of the fifth input token, depending on the language. So you can center a fixed-size window around the decoder step, five before and five after, and during learning a truncated Gaussian function shapes the attention weights within that window; a linear layer and sigmoid can be used to predict where the center should be. Hopefully the general motivation behind global versus local attention came through.
Now, attention for vision. We'll delve much deeper into this next class and the class after, when we go over Vision Transformers and advanced architectures for Vision Transformers, but attention is simply a means to look up features, like looking up a phrase in a book, while staying computationally efficient. We've actually seen one instance of this already: squeeze-and-excite networks, from the advanced-architectures lecture about two and a half weeks ago, apply attention channel-wise in an image and select which channel is more important. We can also apply attention spatially, asking which part of the image is most important, and redirect the features based on that importance.
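A quick sketch of two of the scoring functions mentioned above, the plain dot product and the "general" score with a learned matrix, followed by the softmax weighting and the context vector. Shapes and values are illustrative only:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d = 4
s = np.random.randn(d)                 # decoder state (the query)
H = np.random.randn(5, d)              # 5 encoder states (keys, also used as values here)
W_a = np.random.randn(d, d)            # learned matrix for the "general" score

dot_scores = H @ s                     # score(s, h_i) = s . h_i
gen_scores = H @ (W_a.T @ s)           # score(s, h_i) = s^T W_a h_i

weights = softmax(dot_scores)          # attention distribution over the encoder states
context = weights @ H                  # context vector fed to the decoder
print(weights, context)
```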
The core idea there is channel-to-channel interaction. Squeeze-and-excite is what I was describing: in the squeeze step you globally pool to summarize each channel and figure out importance weights, and in the excite step you pass those weights through an MLP and rescale the feature maps, giving a weighted view of which channels or representations are more important than others. So instead of taking an uninformed linear combination of all the channels, you have a more informed weighting of which features matter, which both saves computation and makes the final representation more accurate.
Spatial attention is similar in spirit. Here we have an image-to-sentence example: we take an input image, extract features using convolutions, then run a recurrent network with attention over the image and use the attention weights to do word-by-word generation of a sentence describing the image. This is a fairly high-level explanation; we'll get into more of it in the future. We extract a feature map of the image, and we've talked extensively about using convolutions for that. LSTMs, long short-term memory networks, let us keep track of connections between certain words over a long span of data; essentially there's an encoder LSTM and a decoder LSTM, like the encoders and decoders from the beginning of this lecture, and the weighted image features are then correlated with the words being generated.
The soft/hard distinction applies to spatial attention too. Soft attention is differentiable, so you can train it with backprop, because you're using a weighted, probabilistic interpretation, very similar to the French-to-English translation example; hard attention is trained stochastically with sampling, since you're picking specific features rather than weighting all of them. ViTs we'll discuss in depth later, so we won't spend much time on them now. Attention can also be used inside other architectures: we've talked about Inception-style networks and how they work, and when you do that combination of features you can pay close attention, no pun intended, to which branches are weighted higher than others, creating an intelligent way of selecting a kernel, combining those features, and again running a softmax to figure out which combinations of channels are more important.
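A minimal sketch of a squeeze-and-excite block as just described (the reduction ratio of 16 is a common choice, but all hyperparameters here are illustrative):

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel attention: squeeze (global pool) -> excite (MLP) -> rescale channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (N, C, H, W)
        n, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))                 # squeeze: per-channel summary, (N, C)
        w = self.fc(w).view(n, c, 1, 1)        # excite: per-channel weights in (0, 1)
        return x * w                           # rescale the feature maps

x = torch.randn(2, 64, 32, 32)
print(SqueezeExcite(64)(x).shape)              # torch.Size([2, 64, 32, 32])
```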
Okay, now for something very interesting. Regular attention, colloquially, combines one sequence with another, for example the words of the French language with the words of the English language to figure out a translation. But say you're doing a language-modeling task, where you want to predict the next word based on previous words. The most representative structure you have at that point is the previous words in the sentence, the words you've already produced. The idea is that words in a sentence have relationships among themselves, and that's where the notion of self-attention comes into play; Vaswani and co-authors talk about this in the "Attention Is All You Need" paper. When decoding with an RNN we want to use the additional information from tokens we've already decoded: previously we looked up tokens from the encoder, whereas self-attention looks up tokens from the same input.
In this example the red word is the one currently being processed as we go through the sentence one by one, and the blue words are what we're paying attention to. Notice they're different shades of blue; those shades are the weights generated by the scoring function, our cross-analysis of which things to weight more than others. So for "The FBI is chasing a criminal", to predict "chasing" the most important words are "FBI" and "is", because they carry the relationships we need to keep in mind: we'd rather continue with "The FBI is chasing" than something like "The FBI is run". By the time we get to "criminal", we attend to "is chasing a", so instead of choosing something like "criminals" from our universe of words, we know "a" calls for a singular noun and come up with "criminal". That's the motivation behind self-attention: weighting the words before your prediction to make the best guess at what your next word could be.
Alright, on to Transformers. Let's take a quick breather here; are there any questions? [Jake:] Those are basically the building blocks we're going to use: the key-value-query lookup idea, weighting values by how similar your query is to any given key in your dictionary, plus self-attention and soft attention in particular. Feel free to ask, because that's really the main thing we'll use going forward; the other stuff is context. [To reiterate:] soft attention is one of our main focuses; it's what's used in the Transformer model described in "Attention Is All You Need". The idea is that you create a probability distribution you can pull from, so you can focus on certain features and not others; applying a softmax pushes things either closer to one or closer to zero. We're also going to talk about how Transformer models solve problems RNNs have, specifically vanishing gradients and handling a lot of data.
Question: how exactly are the weights computed? Answer: you take, essentially, the dot product between your query and all of the keys. For example, with a sentence like "I like pie", your query
in the first step could be "I", and your keys would be "like" and "pie", and you'd also consider "I" itself. We'll talk about this more, but in the context of Transformers you're actually going to use matrices instead of one-to-one comparisons. You compare the similarities, and the numbers you get are the weights, which are then passed through a softmax distribution. So in reality you're comparing whatever token you're currently on to the keys of everything else to figure out which lookup is most representative. To be really specific: w1 is obtained by passing your query and your key through the similarity function, which could be a dot product, and the dot product measures how closely those vectors line up. So w1 is the similarity of q with k1, w2 is the similarity of q with k2, and so on; then all of the values v1, v2, v3, ... get multiplied by their corresponding weights, v1 by w1, v2 by w2, and the total value at the very end is the sum of all the weighted values: w1*v1 + w2*v2 + ... Does that make sense? Follow-up: yes, that's a really good point, the q and k are being passed into the scoring function, of which there are many, as we saw before; and these relate back to the H and S, the encoder and decoder states, from earlier.
Moving forward into complaints about RNNs: they are not very parallelizable, since we have to take in each input as we go and can't parallelize the operations, and they're very bad at capturing long sequences. If I have a very long sequence of hidden states, I need to go through h1, h2, h3, h4 sequentially, all the way to however long my sequence is, then apply the context function and generate an output. There are a lot of compute steps needed to relate a certain t_i to t_{i+D}: the amount of computation scales linearly with D, none of it can be parallelized, I need to run the RNN D more times, and in that time the network might forget the representation it previously built for t_i. The solution to this is adding attention.
Which brings us back to the meat of today: "Attention Is All You Need" is a very cool paper that talks about getting rid of recurrence altogether and using self-attention to model sequences, or groups of tokens. Going into the Transformer architecture, we have similar inputs as before: a sequence of token embeddings plus positional embeddings, which we're going to talk about. As a quick preface: if we have a sentence like "I like pie", there are two things I want to know about each word.
First, I want to embed each word, because the model ultimately can't take in raw words; it needs embeddings, numbers, vectors. So I embed it into something we'll call e1, the embedding vector, and that's one piece of information. The second thing I want to know, especially if I'm doing something like predicting the next word or an intermediate word, is the position of these words, because that matters a lot. So I also want p1, a positional embedding, and by combining the two I get a representation I can pass into my Transformer model. We'll get a little more into how to come up with both of these, but there are many ways to build positional embeddings, encoding "this is position one, two, three", and they can be computed with sine and cosine functions to figure out where in the sentence you are.
Alright, moving into the actual Transformer model described in this paper: you have an input like "the big red cat", an attention block, and a bunch of outputs you want to come up with. Going through the process, you have a multi-head attention block, then an addition-and-normalization layer. Multi-head attention essentially runs attention from a bunch of different viewpoints on everything: instead of one representation with scalar values you're multiplying, you compute attention across a span of H different projections of the embeddings and combine them all together at the end, and you do layer norm on this, which we'll also talk about. Then we run all of this through a feed-forward network, and the difference now is that because multi-head attention computes all queries against all keys as matrices, the feed-forward step can be done in parallel; that parallelization is really what brings out the merits of a Transformer over other representations.
Each token has a key, query, and value, like we talked about, and for each token representation we use its query to find the most relevant tokens in the sequence based on their keys. Focusing in on the multi-head attention block: we calculate importance as the dot product of the query and the key, push it through a softmax to move values closer to zero or closer to one, apply those probabilities to the values, and take the sum based on the importance determined by the dot product; that sum becomes the new representation of the token. As we go through this attention process we're refining our weights, thereby refining how we weight our values, thereby refining our final prediction as to which words come next. This block combines residual connections and layer norms to produce a representation that alleviates vanishing gradients. We did talk about vanishing gradients before: if gradients are slightly smaller than one and you multiply them across a very deep network, they get pushed towards zero and become infinitesimally small. At the end of this we have a linear layer.
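A minimal sketch of the sinusoidal positional embeddings mentioned above, following the standard sin/cos formulation from the "Attention Is All You Need" paper; the model dimension and sequence length are arbitrary here. Each position gets a fixed vector that is added to the token embedding:

```python
import numpy as np

def sinusoidal_positions(seq_len, d_model):
    """pe[pos, 2i] = sin(pos / 10000^(2i/d)), pe[pos, 2i+1] = cos(pos / 10000^(2i/d))."""
    pos = np.arange(seq_len)[:, None]             # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]          # (1, d_model / 2)
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

tokens = np.random.randn(3, 8)                    # embeddings for "I like pie", d_model = 8
x = tokens + sinusoidal_positions(3, 8)           # this sum is what enters the Transformer
print(x.shape)
```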
Concretely, you're calculating all of this because you have three weight matrices, WQ, WK, and WV, so it's essentially a linear matrix multiplication: you apply a linear layer to get K, Q, and V from the input embeddings. You have E, the embeddings for each word, and WK, WQ, and WV holding the weights that produce the keys, queries, and values; it's one big matrix multiplication, which is very easily parallelizable, so that also saves a lot of computation. Then, as we talked about, we take the dot product between the queries and the keys. Each token directly attends to every other token, so the connection between arbitrarily distant positions is O(1): because this is a matrix we're multiplying, no matter how long the input is, relating any two tokens is still that one operation.
After the Q-K multiplication, we apply a softmax over the dot-product values; some get pushed toward one, others toward zero, and then we re-weight our V's by those softmaxed weights. Multiplying the weights by the values essentially gives a weighted set of V's, and adding them together gives the final value, on a token-by-token basis. Scaled attention is where we additionally scale the dot products by one over the square root of d. This helps because dot-product values grow very large in high dimensions; the dimensions of your matrices are big, so the raw values grow too, and some components of the softmax then get extremely small gradients, because once the values are huge, anything an order of magnitude smaller gets pushed toward zero extremely fast. Dividing by the square root of d_k, where d_k is the key dimension (think of it as the size or depth of the vectors), standardizes things and ensures that some components aren't automatically crushed to zero just because you multiplied more features together and got a bigger number.
Like I said, in multi-head attention we project each token's key, query, and value H separate times, so you have an H-deep set of attention outputs that is then re-concatenated after the attention operations and multiplied by another weight matrix. This models different representations of the same tokens, as well as different functions of them: with that H-deep stack of matrix multiplications, you get to consider a lot of different views of your data. Dropout is used after the multi-head attention block; I don't know if we've specifically covered dropout before, but it's a way to alleviate overfitting and make sure we're actually learning a robust representation of the data.
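Putting the last few paragraphs together, here is a minimal single-head scaled dot-product attention in matrix form, Z = softmax(Q K^T / sqrt(d_k)) V, written in NumPy with made-up shapes:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # every query against every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V                                   # weighted sum of values per token

n, d_model, d_k = 5, 16, 8                               # 5 tokens
E = np.random.randn(n, d_model)                          # token + positional embeddings
WQ, WK, WV = (np.random.randn(d_model, d_k) for _ in range(3))
Z = scaled_dot_product_attention(E @ WQ, E @ WK, E @ WV)
print(Z.shape)                                           # (5, 8): one new representation per token
```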
I really love this diagram: it shows the WV, WK, and WQ projections producing your Q, K, and V; you use scaled dot-product attention to get an initial attention estimate, and you do this across H heads; you can see the depth is H for each of these, like H channels or H parallel representations you're using to get intuition. You concatenate those together and pass them through a linear layer, because the feed-forward network that follows can only take in one vector per token, not the matrix of vectors you end up with after concatenating the results from your matrices of queries, keys, and values. So we apply a transformation, call it W_Z, the output projection, onto those vectors to get a single vector per token to pass into the feed-forward network. And the resultant Z is exactly what we were talking about before: Z is the softmax of Q times K, divided by the square root of the dimension of the keys and queries (they'll ideally be the same dimension), then multiplied by the values. Dividing by the square root of that dimension normalizes the Q-K multiplication so matrices of very different scales don't blow up; the softmax gives the scaled weights, multiplying by the value gives you a matrix, and passing that through the other weight matrix gives the single vector per token that goes into the feed-forward network.
Something else to notice is that we're adding residual connections, as you can see. We talked about this in the advanced-architectures lecture on residual networks; this remedies vanishing gradients in models with a lot of layers, very deep networks. The next thing to talk about is layer normalization; these two together contribute to the "Add & Norm" step. Batch norm, which we covered before, normalizes across the batch, so you can think of it as normalizing horizontal batches, making sure the mean and standard deviation are standardized across training samples. With layer norm, statistics are calculated for each sample with respect to the features in each layer, so if batch norm is a horizontal norm, layer norm is a feature norm done vertically; I believe there's also a slide detailing this specifically. So: batch norm normalizes your data horizontally across all training samples, layer norm normalizes each feature dimension within a single sample, and layer norm is what's used specifically in attention. Hopefully the Add & Norm step makes sense now: you're adding a residual and normalizing over the features.
Before we move on to the feed-forward part, let me pause for a quick second. This is quite a bit of content and we've been charging through it, so are there any questions? Jake, anything you want to add? [Jake:] If people have questions on the diagram with all the different scoring options, or any of this stuff, ask now.
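A rough sketch of the multi-head version: project Q, K, and V separately per head, run the same scaled dot-product attention inside each head, concatenate the heads, and apply the output projection (written here as W_Z). All sizes and the random weights are illustrative:

```python
import numpy as np

def attention(Q, K, V):
    s = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

def multi_head_attention(E, heads=4, d_k=8, rng=np.random.default_rng(0)):
    n, d_model = E.shape
    outs = []
    for _ in range(heads):
        WQ, WK, WV = (rng.standard_normal((d_model, d_k)) for _ in range(3))
        outs.append(attention(E @ WQ, E @ WK, E @ WV))   # one head's output, (n, d_k)
    concat = np.concatenate(outs, axis=-1)               # (n, heads * d_k)
    W_Z = rng.standard_normal((heads * d_k, d_model))    # output projection back to d_model
    return concat @ W_Z

E = np.random.randn(5, 16)                               # 5 tokens, d_model = 16
print(multi_head_attention(E).shape)                     # (5, 16)
```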
[Jake:] I think the most intuitive way to think about it is that you have a query, which changes depending on which token you're on, and you're trying to find, in your universe of information, what's most related to it. You're trying to look something up without knowing exactly where it lives, like we talked about before. You multiply the query and key, or pass them into some scoring function, in this case the dot product, as described in "Attention Is All You Need"; you pass that through a softmax, which scales each value relative to the others, since obviously certain Q-K comparison scores matter more than other scores; then you multiply by the values, which determines which values are most important, and take the sum. That implicitly accounts for the similarities and differences. Question: and the extra matrix at the end? Answer: more or less, yes, that's what we're doing. Once you have the weighted sum of V's, the multi-head attention model produces multiple vectors, one per head, and we can only pass one representation per token into the feed-forward network, so we introduce a new matrix, tuned as part of the training process, that does one more matrix multiplication to get a single vector. Up until that point it's exactly as you described.
Alright, so the feed-forward network is a two-layer neural network whose weights are shared across positions and applied individually to each token embedding; this represents the combination stage we talked about before. After it there's another residual connection and a layer norm, and dropout is used after the feed-forward block to ensure you're not overfitting to a certain representation. So the re-weighted values are put through a residual and layer norm, combined with the input embedding; that's where the residual comes from, your initial embedding. Now you have, for each token, a representation that has been through multi-head attention plus residual and layer norm, so this is after the meat of the Transformer; all the attention operations are done. The next thing we want is a position-wise feed-forward layer, or equivalently a one-by-one convolution, which framing you use depends on the application, whether it's a ViT or a predictive model over sentences. We pass each token's representation through this multi-layer perceptron to give us the final value, a representation we can comprehend and use, and we do this for each and every value. Again, these values have already been generated; they're the weighted sums that have gone through a layer norm and a residual. The beauty of this is that although we run an MLP for each of these tokens, it can be parallelized, which is really awesome, because previously with recurrent networks a big problem was not being able to parallelize anything, but now each of these MLP steps can run in parallel. You put these through one more residual and layer norm to get the attention block's output, translating the new values into output embeddings, again combining with residuals and passing through
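A minimal sketch of one full encoder block as just described: multi-head attention plus Add & Norm, then a position-wise feed-forward network plus Add & Norm, with dropout after each. It uses PyTorch's built-in multi-head attention; the hyperparameters roughly follow the paper's base configuration but are illustrative:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One Transformer encoder block: MHA + Add&Norm, then FFN + Add&Norm."""
    def __init__(self, d_model=512, heads=8, d_ff=2048, p=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.drop = nn.Dropout(p)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        a, _ = self.attn(x, x, x)              # self-attention: Q = K = V = x
        x = self.norm1(x + self.drop(a))       # residual + layer norm
        x = self.norm2(x + self.drop(self.ff(x)))
        return x

x = torch.randn(2, 5, 512)                     # 2 sentences, 5 tokens each
print(EncoderBlock()(x).shape)                 # torch.Size([2, 5, 512])
```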
another layer norm. All of this is just getting more in depth into the actual implementation described in the paper, but essentially it's a combination of residuals and layer norms that your data passes through, and that wraps up the Transformer architecture described in Vaswani's paper.
Now, position embeddings; I introduced these before. Because the architecture is not inherently sequential, if we just pass in token embeddings, the Q-K-V computation produces a representation that's very uninformed, because there's no position information. You can change this by changing the embeddings you pass in to include position, so that instead of everything attending to everything else indiscriminately, the model can take position into account. There are a couple of solutions: sinusoidal embeddings, where a fixed, very simple equation generates the position vector p1 from the position index, or learned embeddings, where the positional vectors for certain positions in a sentence are themselves learned and used to make informed decisions.
Another thing that gets introduced is a dummy token attached to the beginning of the input. Say you take a sentence like "I loved this movie" and want to do sentiment analysis, a very common NLP task; there are a lot of datasets, like the commonly used IMDb movie-sentiment database. Instead of a sequence-to-sequence model (an example of which, as we discussed, is translating French to English), you now want sequence-to-vector: take "I loved this movie" and give it a score, where the first value of the vector is "bad" and the second is "good". That's an example of sequence-to-vector, and it can use CLS tokens, where we use the output representation of that dummy token for downstream tasks.
The math here is what we've already talked about: you have a Q input, K input, and V input; you multiply Q by K-transpose, a big matrix-matrix multiplication of the queries against the keys, to generate the W's, where each entry is a weight before being passed into the softmax. You pass those into the softmax to get a weights matrix, multiply it with the values (again a big matrix-matrix multiplication), and the weighted sum is your attention output. That's exactly what we calculated before and exactly what's described in the paper. Notice that attention is really just large matrix multiplications between your queries, keys, and ultimately your values, with the scaling term on the bottom adjusting for the size of the multiplication you're doing. There are a ton of operations because you do have to multiply these matrices, but these are huge matrix multiplies, which are super parallelizable on GPUs.
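The dummy-token idea is just a learned vector prepended to the token sequence before the encoder; after the Transformer runs, you read a classifier off that position. A small hypothetical sketch, where the classifier head, dimensions, and the use of PyTorch's stock encoder layer are all illustrative choices:

```python
import torch
import torch.nn as nn

d_model = 512
encoder = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
cls_token = nn.Parameter(torch.zeros(1, 1, d_model))      # learned [CLS] embedding
classifier = nn.Linear(d_model, 2)                        # [bad, good] sentiment scores

tokens = torch.randn(1, 4, d_model)                       # embeddings for "I loved this movie"
x = torch.cat([cls_token.expand(1, 1, d_model), tokens], dim=1)   # prepend CLS -> length 5
logits = classifier(encoder(x)[:, 0])                     # classify from the CLS position
print(logits.shape)                                       # torch.Size([1, 2])
```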
So this is very computationally efficient and scales very well on distributed systems, even though there are a lot of operations, which brings us to a better place than we previously had with RNNs: we can parallelize this no matter how big our input is. And that's basically the meat of it. There are a lot of other things happening in this space: encoder models like BERT, which generate embeddings and are used in a bunch of different NLP tasks (BERT is a Transformer model as well), and decoder models like GPT, which are a little more complex. But I wouldn't stress about those; the basic Transformer idea, that every single token compares itself to every other token in the sequence, is really the meat of it, and the other stuff is context you'll see all over the place as the field develops.
Revisiting the problems with recurrent networks: we had trouble with long sequences, but now, because everything attends to everything, we have a constant number of operations even if there's an arbitrary distance between our last embedding and the new token we want to consider. Before, we had a sequential model that wasn't parallelizable; now we have matrix operations, which are super parallelizable. So we've solved the problems we had. The model also isn't inherently sequential: we have position embeddings that carry some position information, but we don't have the inductive bias of telling it "first process this, then this, then this", which would bake an assumption about ordering into the data. Unlike RNNs and CNNs, which impose inductive biases such as sequential processing or translational invariance, we're not doing that in Transformer models. And that's pretty much everything. Thank you all for coming; we're right on time, and if you have any questions feel free to stick around.
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_21_Generative_Audio.txt
Alright, today we're going to be talking about an intro to generative audio. This deviates a bit from what we've been talking about before, but we're going to tie it into the different kinds of models we've covered and how they can be used for generative audio. This lesson first dives into some signal theory and then moves into things we're more familiar with, like deconvolutions and using Transformers for next-note prediction. Hope you enjoy it. We're going to start with a soft introduction to digital signals, then some geometric signal theory, then Transformers, and finally how we can generate sounds using these.
The first thing we want to talk about is how to sample and quantize a continuous-time signal. Something unique about music, and audio in general, is how continuous it is: when you listen to a violin or a piano play, that's a continuous signal you're taking in and processing, which is not easy for a computer to do, since every operation would need to act on a continuous-time signal. The way we fix that is the process of discretization: turning an analog signal into a digital signal we can process. Sampling essentially takes a continuous-time signal and discretizes it in time at a certain sampling rate. The two main ways we make the signal easier to process are, one, taking samples at fixed time intervals and, two, quantizing the level, so that instead of dealing with a continuous scale of values we only allow certain discrete levels, for example just the levels 2, 4, 6, 8. This lets us come up with discrete points: as you can see here, the analog signal is quantized, broken up into certain regions on what you could call the y-axis, and on the other hand we sample at a certain rate along the time axis, which gives a time-discrete signal. Combining the time-discrete signal and the quantized signal gives us our digital signal: we have a fixed sampling period, so we know how far apart our measurements are along the time axis, and the measurements are something the computer can understand because they're quantized. This ensures we have a numeric representation of the signal and greatly limits the amount of information we need to process, which is incredibly useful for any type of audio-processing application.
The next thing we're going to talk about is changing forms: going from analog to digital, and from digital back to analog. The analog-to-digital converter uses something called a sample-and-hold circuit, the details of which aren't incredibly important; if you want to learn more about DACs and ADCs you can take 16B and the signals classes at Berkeley, which I'd highly recommend. An ADC converts an analog input to a digital output, whereas a digital-to-analog converter converts a digital signal to an analog output. We can start with the ADC circuit, which uses the SAR (successive-approximation) ADC algorithm; essentially this is a binary search to figure out the best digital approximation of the analog signal.
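A minimal sketch of sampling plus uniform quantization; the test signal, sample rate, and bit depth are arbitrary choices for illustration:

```python
import numpy as np

def sample_and_quantize(signal, t_end, fs, bits):
    """Sample a continuous-time signal at rate fs, then quantize to 2**bits levels."""
    t = np.arange(0, t_end, 1.0 / fs)            # discrete sample times (sampling)
    x = signal(t)
    levels = 2 ** bits
    step = (x.max() - x.min()) / (levels - 1)
    q = np.round((x - x.min()) / step) * step + x.min()   # uniform quantization
    return t, x, q

sig = lambda t: np.sin(2 * np.pi * 440 * t)      # a 440 Hz tone
t, x, q = sample_and_quantize(sig, t_end=0.01, fs=8000, bits=3)
print(np.abs(x - q).max())                       # quantization error shrinks as bits grow
```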
You take the continuous signal and pass it through the sample-and-hold circuit, an actual circuit you can Google. Essentially, after that step the signal has been held steady in some form, so we're not feeding fully continuous values into the SAR ADC algorithm; from there we take the held input and produce a discretized output with however many bits of precision we want. Whether you want a two-bit or a three-bit approximation depends on the level of precision you need, your compute constraints, and the kind of signal you're trying to process. The digital-to-analog converter is roughly the opposite, except here we use a low-pass filter, which is also covered in courses like 16B. The idea is that we keep the passband, where every frequency is allowed through, and once we hit a certain cutoff frequency the signal is attenuated by a factor that dictates the slope of the roll-off, and it keeps attenuating after the cutoff. On the right is a picture of the approximation, where you can approximate at different quantization levels, different bit depths, based on the SAR ADC algorithm.
The next thing to talk about is quantization levels: how many bits do we want to use to approximate a certain signal? The level of quantization correlates very directly with the dynamic range of the signal you're quantizing. Sampling rate is measured in hertz, where frequency is one over the sampling period, and the quantization level is measured in bits. With two bits, you only have a few bins to put your signal into; with three bits you have a lot more precision and more information. You can see it here: the sample value is the initial wave, the quantization value is the quantized wave, and the error value, in green, becomes smaller and smaller as the bit depth increases. With a bit depth of two the approximation isn't very good; with a bit depth of three it's definitely getting better, the error is reduced significantly, and the quantization is much more indicative of the sample; at a bit depth of five we're really adhering to the curve and the error is almost zero; and at a bit depth of sixteen you have an almost perfect approximation.
There is a trade-off, though, and it's compute power: how much time do you have to process this? For example, a lot of musicians want to sample their voice and pitch it up very fast. What quantization level do you want there: a lossless pitch-up that takes an hour to compute, or something that takes five to ten minutes and is a little lossy, but not enough for the human ear to really notice? An interesting proposition is whether you could do a lossy pitch-up while filling in the blanks in some intelligent way, through prediction or note fitting, which is an interesting consideration given that audio is a continuous-time signal. The digitization process and the choices you make matter a lot, and that's why this field is so interesting.
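To make the "binary search" description of the SAR ADC concrete, here is a small sketch that successively approximates one held sample, one bit at a time. The voltage range and bit count are made up:

```python
def sar_adc(sample, v_min=0.0, v_max=1.0, bits=8):
    """Successive approximation: binary-search for the code whose DAC value is closest."""
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)                      # tentatively set this bit
        dac = v_min + (v_max - v_min) * trial / (2 ** bits - 1)
        if dac <= sample:                              # keep the bit if we are still below
            code = trial
    return code

print(sar_adc(0.731))   # 8-bit code approximating 0.731 of full scale
```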
There's a lot of really deep work around how we can take these continuous signals, discretize them, run some algorithm on them, and then re-continuize them so the user can listen to the result. This brings us to a very important result, the Shannon–Nyquist sampling theorem, which is involved in a whole bunch of digital signal processing and really anything where you're sampling a signal at a certain rate — which in industry is almost every signal you're going to encounter. The theorem says a signal can be reconstructed from its samples without loss of information if and only if the original signal has no frequencies above one half of the sampling frequency. In other words, the sampling frequency has to be greater than or equal to two times whatever frequency we're trying to capture; if the sampling frequency is less than double the highest frequency present, aliasing will happen. Equivalently, you need at least two samples per period. The aliasing phenomenon is incredibly interesting — it happens both visually and auditorily, and it's an entire topic in its own right. As you can see in the figure, your approximation changes a lot as the sampling rate increases or decreases. At a sampling rate of 0.3 Hz (the x-axis here is the pointer angle in degrees) the fit is poor; as the sampling rate increases, the approximation to the underlying curve improves, and at 1 Hz you're almost perfectly fitting it. Something interesting to consider is that at a sampling rate of 0.3 you are still fitting a polynomial through the points you have, but that polynomial is not at all indicative of the actual signal you're trying to capture. It does capture the broad trends — it falls when the data falls and rises when the data rises — but by all accounts it's a bad approximation. The next figure shows the motivation behind the Nyquist rate for a continuous sinusoidal signal. Sampled above the Nyquist rate, you capture the zero crossings as well as the peaks and dips, so you get very fine-grained information because you're oversampling. In the undersampled case, samples are taken at a rate less than double the highest frequency, so this is an example of where aliasing would be present. At exactly the Nyquist rate — exactly double the highest frequency — you get an approximation that captures the trend: if you fit all the points you get a wave that rises and falls in the right places. So above the Nyquist rate you get a very tight approximation, at the Nyquist rate you can still sample without losing too much information, and in the undersampled case you cannot. Aliasing, then, is the byproduct of poor sampling: a lower sampling resolution gives you a modified output signal compared to the original input, and different sampling rates approximate the same input wave very differently.
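As a quick illustration of the theorem, here is a small NumPy sketch (a toy of my own, not from the lecture) showing that a 7 Hz tone sampled at 10 Hz — below its Nyquist rate of 14 Hz — produces exactly the same samples as a 3 Hz alias.

```python
import numpy as np

f_signal = 7.0   # Hz: tone we are trying to capture
fs = 10.0        # Hz: sampling rate, below the Nyquist rate of 14 Hz
n = np.arange(20)

samples = np.cos(2 * np.pi * f_signal * n / fs)
# Frequencies above fs/2 fold down; 7 Hz is indistinguishable from fs - 7 = 3 Hz.
alias = np.cos(2 * np.pi * (fs - f_signal) * n / fs)

print(np.allclose(samples, alias))   # True: the 7 Hz tone aliases to 3 Hz
```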
In the figure, the first undersampled wave isn't something you'd really call indicative of the actual wave we're trying to capture; the second is arguably even less indicative, since we're skipping a lot of points in our interpolation; and the last is the byproduct of heavy aliasing — a curve that isn't representative of the sample at all, because we're sampling at a rate that doesn't satisfy the sampling theorem. Frequencies higher than one half of the sampling rate get automatically transformed into lower frequencies, and that's where the information loss comes from: a component above half the sampling rate folds back down and shows up at a lower frequency (for example, a tone between half the sampling rate and the sampling rate appears at the sampling rate minus the original frequency). You can think of it as downsampling contributing to information loss by turning higher frequencies into lower ones. There's a lot of literature about aliasing effects, including spatial aliasing. Starting from an original image, if we do point sampling, where we only keep certain points, you can see that the image changes — we get the gist of it in a very compressed format that contains similar information and lets us see what's happening. Supersampling takes, say, a 4x4 neighborhood and strides over the image; we lose some information in the background, which blends together, but in the foreground, where the image has more detail, more information survives into the final result. It's a very interesting phenomenon, it's prevalent in basically any signals problem, and a lot of academic work covers it, so I'd highly recommend checking out some of these links. There's also a fun demo you could try out: generating 8-bit music in C. It's an add-on to this presentation and doesn't contribute directly to what we're talking about, but I thought it was incredibly cool — with one line of C you're able to generate melodies — so please check it out if you have a chance. The next thing we're going to talk about briefly is geometric signal theory: specifically, what a projection is, how it can be used to reconstruct signals, and how that ties into reconstructing audio. We're also going to talk about using deconvolutions — we covered the U-Net architecture, used heavily for image segmentation, in our survey of computer vision techniques, and it's going to come back in terms of how we can reconstruct signals. So, the inner product and projections. The inner product measures similarity between two vectors: for a vector x and a vector y, a value of zero means they're orthogonal — very dissimilar — and a high value means they're nearly collinear, very similar to each other. Projections are an application of inner products, where one vector can be projected onto another vector.
You can see in the figure what the projection looks like. The idea behind reconstruction in general is that if I have a set of basis vectors, I can reproduce a target vector exactly as a linear combination of those vectors, provided they're orthogonal vectors forming a basis for the space I'm working in. Walking through an example of projection: the projection formula takes the dot product between x and y — you can think of it as a similarity measure between the two vectors — divides by the norm squared, and multiplies that scalar back onto the vector you're projecting onto. Through that process you're applying a transformation that rescales one vector along the direction of the other. So, projections for reconstruction: a vector can be reconstructed as a linear combination of its projections onto another set of vectors if and only if that set is a basis. For those of you who have taken Math 54, EECS 16A/B, or EECS 127: the span of a set of vectors s_0 through s_n is the set of all their linear combinations, and a basis is a set of vectors whose linear combinations can form any vector in the vector space we're interested in. For this simple projection-based reconstruction to work, the basis vectors we use should also be orthogonal to one another, and that's very important because it ensures we're maximizing the information gained with each projection: two orthogonal vectors share no similarity, their dot product is zero. For example, here are two vectors whose dot product is zero, and together they cover any vector in R². Define e0 = (1, 0), a horizontal vector, and e1 = (0, 1), a vertical vector. If we project some vector x onto e0, we can see how the math works out: plugging into the projection formula and simplifying, we get x0·e0, which is just (x0, 0) — the x1 component cancels when we do the multiplication. So the projection did what we wanted: projecting x onto e0 leaves us with the component of x in the direction of e0, namely x0 along the x-axis. This can be used to ultimately reconstruct a signal. With the same definitions of e0 and e1 forming our basis, we first take the projection of x onto e0 and then the projection of x onto e1: we get the component of x in the direction of e0 and the component in the direction of e1, and because these two directions are orthogonal, they share no information.
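Here is a minimal NumPy sketch of the projection formula and the reconstruction-by-projections idea (my own illustration; `project` is a hypothetical helper name).

```python
import numpy as np

def project(x, b):
    """Project x onto the vector b: (x . b / ||b||^2) * b."""
    return (x @ b) / (b @ b) * b

x = np.array([3.0, -2.0])
e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])

# Each projection recovers one component; summing them reconstructs x exactly
# because e0 and e1 form an orthogonal basis of R^2.
reconstruction = project(x, e0) + project(x, e1)
print(np.allclose(reconstruction, x))   # True
```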
So if, for example, we had a basis spanning multiple dimensions and we took the component of x along each of these mutually orthogonal basis vectors, then every time we take a projection we gain information about the vector we're projecting — and by gaining information along every axis, we can ultimately reconstruct the original signal. Concretely, x equals the projection of x onto e0 plus the projection of x onto e1, which is (x0, x1): both components of the vector, recovered simultaneously by projecting onto our basis. The result is that whatever basis we use, as long as the vectors are orthogonal and span the space, we can reconstruct the vector. The idea behind the last two sections was to give you motivation for how signals work and how classical reconstruction can be done using math we're all familiar with; it primes the water for what's coming next, which is a more machine-learning-based approach to reconstructing signals and ultimately to generating audio — predicting the best next note. So, the next step: we want to use deep learning for reconstruction, taking low-quality audio and reconstructing high-resolution audio. The model framework we can use for this closely resembles a U-Net, which we talked about for image segmentation earlier in the class. Here the U-Net uses a 1D sub-pixel convolution, a special type of layer that performs the same upsampling operation as a deconvolution and then rearranges values along a dimension; this increases the information we recover from the operation, which is exactly what we want when trying to raise the resolution of a low-quality waveform. As you can see, if this is our initial wave, the final wave is much more densely populated — we're able to gain information through this U-Net process. The way it works is that the downsampled waveform is first sent through eight downsampling blocks made of convolutional layers with a stride of two; batch norm is applied, a ReLU activation function is used, and at each layer the number of filter banks is doubled, so while the dimension along the waveform is halved, the filter-bank dimension doubles. You can think of this like the Swin Transformer: as we intelligently combine shifted windows we reduce the spatial size but increase the channel dimensionality — doubling the dimensionality as we halve the size — so we're not losing information, we're developing a new representation. The same thing happens here with these audio signals.
We then pass through the bottleneck layer, which is constructed identically to a downsampling block, and it connects to eight upsampling blocks that have residual connections back to the downsampling blocks. You can see the residual connections here: what they allow you to do is preserve and share features learned from the low-resolution representation into the higher-resolution output. So we have the eight downsampling blocks on one side, mirrored by eight upsampling blocks on the other. The upsampling blocks, though, use the sub-pixel convolution we talked about earlier, reordering information along a dimension to expand it. The final convolutional layer has a restacking operation that does this reordering after the sub-pixel deconvolution, and the upsampled waveform is generated after that restacking step. The loss function used throughout this process was a mean squared error loss: you take the mean squared error between the upsampled output waveform and the original high-resolution waveform that you have as training data. By playing around with different loss functions you might be able to yield better performance, but that's what the authors of this specific methodology did. I thought this was incredibly cool because it parallels a lot of concepts we covered earlier — U-Nets, the Swin Transformer, residual connections. We talked very in-depth about how residuals keep information flowing across the network and solve problems like vanishing gradients as the network gets bigger, and you can use the same idea in reconstruction for audio, which is incredibly cool. Something else worth mentioning: these kinds of techniques are used in a variety of ways to restore music from bands in the mid-to-late 1900s whose recordings may not have been preserved at full quality, so through the remastering process the listening experience for the end user improves. Here are some of the results from this audio model. You have the true spectrum, with the waveform shown as amplitude over time; the downsampled spectrum, which you can see is capped at a certain frequency; and the reconstructed spectrum after passing through the U-Net, which has a lot more depth — it reaches higher frequencies, keeps the downsampled part of the spectrum that we know is accurate, and fills in the gaps at the higher frequencies. Comparing the signal-to-noise ratio of the downsampled waveform and the reconstructed waveform, the reconstructed waveform matches the peaks of the true waveform far more closely: we're adding clarity and color to the downsampled waveform to ultimately get something that resembles the true waveform much better.
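For concreteness, here is a rough PyTorch sketch of the kind of one-dimensional sub-pixel (pixel-shuffle-style) upsampling block described above. This is my own simplified illustration under assumed shapes and layer sizes, not the authors' actual implementation; `SubPixel1d` and `UpBlock` are hypothetical names.

```python
import torch
import torch.nn as nn

class SubPixel1d(nn.Module):
    """Rearranges (B, C*r, T) -> (B, C, T*r): a 1-D analogue of pixel shuffle."""
    def __init__(self, r):
        super().__init__()
        self.r = r

    def forward(self, x):
        b, c, t = x.shape
        x = x.view(b, c // self.r, self.r, t)                 # split channels into groups of r
        return x.permute(0, 1, 3, 2).reshape(b, c // self.r, t * self.r)

class UpBlock(nn.Module):
    """Conv -> ReLU -> sub-pixel shuffle, doubling the time dimension."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch * 2, kernel_size=9, padding=4)
        self.act = nn.ReLU()
        self.shuffle = SubPixel1d(2)

    def forward(self, x, skip=None):
        x = self.shuffle(self.act(self.conv(x)))
        return x if skip is None else x + skip                 # residual connection from the encoder side

x = torch.randn(1, 64, 128)          # (batch, filters, time)
print(UpBlock(64, 32)(x).shape)      # torch.Size([1, 32, 256])
```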
All right, we're now going to transition into Transformers for audio generation: how can we use Transformers to predict, say, the next note? We'll go through this by revisiting considerations we've seen with image Transformers, NLP Transformers, and so on. Notes to a melody: we want to take a sequence of notes and, for example, produce the next note in the melody, and we can use the Transformer architecture to predict music notes. Our goal is to build a sequence model for music, where we take an input sequence and predict a given target sequence. The two steps for building this model are the same two steps we use for any Transformer model: convert the data into usable tokens — which in the music realm presents a different challenge than it did for our other two modalities, images and text — and then build the model and train it to predict the next token. The first step takes the form of converting the data, which is music files, into a token sequence of individual notes. Eventually we want to start with this sheet-music notes format and tokenize it into something our model can understand. This presents a unique problem that we're going to talk about extensively. With an image, we talked about how you can flatten your pixels; the pixel values are something a computer can understand. With text, using a model like BERT, you can create a 768-dimensional feature vector that can be passed into any model you're training, capturing information across a whole bunch of dimensions. Music is a little different, and the reason is that there's a time dimension you can't perfectly capture with a flat sequence: what beat is my note on, is it an eighth note, a quarter note, a half note, what time signature am I in, what key am I in — there are a lot of considerations that need to be made. The Transformer music model does sequence generation through next-token prediction: we have a beginning-of-sequence token and an end-of-sequence token, so given a certain input we want to be able to predict what comes next and eventually emit the end-of-sequence token as well. Now, the actual tokenization. If we're tokenizing a sentence, we have a vocabulary we can easily map with a dictionary, where the keys are our actual vocabulary items and the values are the token ids associated with them. Treating a music model like a language model, we can tokenize music into a series of tokens that all correspond to our vocabulary, which is very intuitive. What we can do for music is approximate it with a piano roll, a plot of pitches against time with their onsets and offsets. The pitches span a certain pitch set, and we want our information captured in multiple dimensions: the pitch of the note as well as the length of the note. For example, the two notes at the bottom correspond to E and A2; we want those to be half notes, lasting twice as long as our C4. We have another note in the middle that intersects with the duration of the bottom two notes, and then we're going to start new notes.
G and C3 start there, and we have an E note on top that also starts at the same time and lasts two beats, a half note. These are all important considerations we want to capture, and they take the form of a piano roll: a plot of pitch across time. We know that a single music note is really a collection of values — you can think of these as the features that make up a note. The two naive ones you can immediately think of are pitch and duration, but this can be expanded: other attributes such as instrument type, dynamics, and tempo can be used for a more complex representation, taking something described by two features and expanding it into as many features as you think can positively contribute to the representation you're trying to form. Multiple notes can also be played at a single point in time — that's polyphony, where notes intersect each other — so we need to figure out how to tokenize this two-dimensional data into a single dimension to be fed into our Transformer model. There is a simple representation of music notes with values and durations: you just have a value and then a duration, which is the naive approach to representing a note. So there are a couple of approaches. One is note-level, one-to-many, where you encode each individual note as a sequence of tokens, or you combine the values into a single token: "C quarter note", "D quarter note", "E half note". As you can see, that matches this example of two quarter notes and a half note, and the tokenized form is C-quarter, D-quarter, E-half. The problem is that you end up with a large vocabulary, because you have to keep track of all of these combined sub-tokens — C mapped to a quarter note in one place, C mapped to a half note somewhere else — and you have less control over predictions, since the vocabulary grows as you add note-duration combinations. The other approach handles polyphony, mapping many notes to one tokenized sequence: notes are played sequentially if separated by a SEP separator token; if not, all the notes in a group are played together. This keeps track of where notes start and end — a group of notes to play together, then a separator token, then the next note or group, then another separator, and so on. In the example you can see our start token, then note and duration tokens: D4 with a d4 duration token for a quarter note, d8 for a half note (double the length of a quarter note), then our separator token for notes we want to start together, then another separator, and then n62, which is the actual note number on the piano. So we'll only have as many note tokens as there are piano keys: we're reducing our vocabulary so that instead of spanning every specific note-and-duration combination, it just spans the notes themselves, plus the duration and separator tokens.
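To make the separator-token scheme concrete, here is a toy Python sketch (my own illustration, not the exact vocabulary of any real model) that tokenizes groups of simultaneous notes, using MIDI-style pitch numbers and hypothetical `n<pitch>` / `d<duration>` token names.

```python
def tokenize(chords):
    """chords: list of lists of (pitch, duration) tuples; notes in a list start together."""
    tokens = ["<bos>"]
    for chord in chords:
        for pitch, dur in chord:
            tokens += [f"n{pitch}", f"d{dur}"]
        tokens.append("SEP")          # separator between groups that start at different times
    tokens[-1] = "<eos>"              # replace the trailing separator with end-of-sequence
    return tokens

# Two half notes (E2 = 40, A2 = 45) starting together, then a C4 (60) quarter note.
print(tokenize([[(40, 8), (45, 8)], [(60, 4)]]))
# ['<bos>', 'n40', 'd8', 'n45', 'd8', 'SEP', 'n60', 'd4', '<eos>']
```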
We're tokenizing in a way that uses separator tokens plus a start token and an end token, so we're able to capture the information in a very sequential, very structured way. Putting this all together, you get the initial translation from this piano score to this tokenized form, which is very cool. The next thing to touch on is data augmentation, which provides an amazing data multiplier — a way to simply get more data. A single song can be transformed into 12 songs in different keys, which helps increase our sample of training data and generalizes key scales and beats throughout a dataset. The more data you have, the better and more generalized your model will be. We've talked about this thoroughly with images as well: for small images where you have a lack of information, convolutions outperform Transformers, but when you have a lot of information — a whole book you're trying to process as opposed to a single sentence — Transformers will far outperform the classical methods of both computer vision and natural language processing. The more information you have and the more generalizability you have in your Transformer, the better it will perform; we talked about Occam's razor and how a generalized Transformer, a generalized solution, can hit the Goldilocks zone of what we want in a model. It's easier for machines to predict keys without flats and sharps — similar to how it's easier for humans to focus on the regular white keys on a piano — so this specific example was trained on those, though flats and sharps can also be added to the vocabulary with specific augmentations and specific training processes. This example, I believe, uses the music21 framework: you tokenize a certain item and then transpose it by a certain number of notes into a certain key, giving the transposed representation. Just by transposing, you increase your training sample size, which is very cool and can improve model performance by a lot, especially given that we're using Transformers. The next thing to consider is positional beat encoding. We want to include some metadata with the model's input to give it a better sense of musical timing, because the position of a token in our tokenized representation doesn't correspond to its actual position in time. In the same example, this is the position of each token, but in reality the token at index 7 is being played on beat one, and we want a way to capture that information. Converting notes to tokens isn't a one-to-one mapping in time, so if we want to tell the model which beat the music is on, we can include that as metadata along with the actual tokens to get more contextualized information. Now the model no longer has to figure out musical timing entirely on its own — it has some semblance of which notes are on which beat. This is very cool, because it also parallels other concepts we've seen about positional structure.
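Here is a minimal sketch of the transposition-based augmentation described above, using raw MIDI-style pitch numbers. Libraries such as music21 provide a real transpose operation on parsed scores; this toy (with a hypothetical `transpose` helper) just shifts pitch values to illustrate the idea.

```python
def transpose(notes, semitones):
    """Shift every (pitch, duration) pair up by the given number of semitones."""
    return [(pitch + semitones, dur) for pitch, dur in notes]

melody = [(60, 4), (62, 4), (64, 2)]                      # C4, D4, E4
augmented = [transpose(melody, k) for k in range(12)]     # one copy per key
print(len(augmented), augmented[2])                       # 12 [(62, 4), (64, 4), (66, 2)]
```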
When we were talking about Transformers for natural language, we used sine and cosine functions to pass in information about the position of the text, so that the Transformer model had some sense of ordering. This is very similar, except for music, which I thought was very interesting. All right, the next thing we're going to talk about is teacher forcing. When training Transformers, you want a way to mask information: you don't want to give the Transformer all the information, because then it won't learn a very good representation of the data you're providing. We want to be able to mask information it has previously seen as well as information it will see in the future, so we apply an attention mask to keep the model from peeking and essentially leaking the next token it's supposed to predict. You can see this in the diagram. At each step the model can only see up to itself: at the first step, where 0 marks a token you can see and 1 marks a token you can't (which is a little backwards, I know), you're masking every token except the one you're on; at the next step you can see the information you previously had plus the current token, with everything after it masked; and by the last step you're able to see everything. By applying another mask, a window of size two: at the first step you only see yourself; at the second step you don't even see the token you're on, only the previous token; at the third step you again only see the previous tokens, and likewise at the fourth. You're essentially enforcing a window of size two, where you only get to use the information from the last couple of time steps. At the very end you can't see the final two pieces of information, and that lack of information is very important to a Transformer: it forces it to predict several steps ahead, which will ideally produce a more generalized model. So this is a kind of reverse teacher forcing, where we're masking future tokens and potentially past tokens depending on what window masks we apply. It's another way to improve model performance, and I think it's a very interesting concept.
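Here is a small NumPy sketch of the masking idea: a standard causal mask plus an optional sliding window. It's a simplified stand-in for the lecture's variant (which also hides the current and future tokens to force multi-step prediction), and `attention_mask` is just a hypothetical helper name.

```python
import numpy as np

def attention_mask(seq_len, window=None):
    """1 = allowed to attend, 0 = masked. Causal, with an optional sliding window."""
    i = np.arange(seq_len)[:, None]   # query position
    j = np.arange(seq_len)[None, :]   # key position
    mask = j <= i                     # only attend to the current and past tokens
    if window is not None:
        mask &= j > i - window        # and only within the last `window` positions
    return mask.astype(int)

print(attention_mask(4))              # full causal (lower-triangular) mask
print(attention_mask(4, window=2))    # windowed causal mask of size 2
```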
Going into the actual Transformer architecture: this specific implementation used a Transformer XL, a particular flavor of the Transformer model. It features relative position encoding as well as hidden-state memory — Transformer memory — and specifically for this model it enables very fast inference for music prediction. We've done a lot of things to optimize for our prediction task: we're including a beat embedding, so timing isn't something the model has to learn on its own, and our tokenization limits the vocabulary while capturing the information we want — really only two things, the pitch of each note and its duration. And instead of having to re-evaluate the whole sequence on every prediction, you only need to evaluate the last predicted token, because the previous tokens are already stored in memory. With Transformer XL we also get a sense of relative position, whereas vanilla Transformers use absolute positions only. It's important for music models to know the position of each token relative to the others, because positionality matters — the order in which you play the notes is really what matters most — and this is in addition to the positional beat encoding we're feeding the model. I'd like to end with a little demo generated by somebody who used this model to predict the end of Canon in D Major by Pachelbel, kind of in the spirit of Christmas coming up. So here's Pachelbel's Canon. As you can see, this is the original Canon and this is what's predicted, where the little white and green notes are the original. It does deviate a bit, but honestly it sounds pretty good — the Transformer model is able to do this next-note, next-sequence prediction pretty well, which I thought was very cool. So yeah, there's a lot to do in this field and a lot of really cool things happening. I hope you guys learned something about generative audio today and are inspired to give some of these things a try yourself. Thank you for tuning in, have a good one.
Thank you. All right, hey guys — you've probably not seen me yet. My name is Soham; I'm another person involved with Machine Learning at Berkeley. I created some of the content for this course, but this is my first lecture, so if you have any questions throughout, if you want me to slow down, or you want me to clarify anything, please stop and just let me know. So, in the past few lectures we were talking about CNN-related topics — tasks in computer vision. You've seen image classification, semantic segmentation, all these different ideas in the broad field of computer vision about finding and classifying different objects within images. We're going to start a module for this DeCal that goes in a slightly different direction: a module on generative modeling. What is generative modeling? The basic idea is that we have a ton of data — so many different images of cats, so many different images of faces — and wouldn't it be nice if we could use machine learning to generate new images of cats, new images of faces? When you think about it, this seems like a really tricky problem, because we're giving the machine all these concrete examples of cats; how do we expect it to understand the characteristics that make a cat a cat? Maybe it ends up learning that it's the particular value of some pixel in the corner that makes a cat a cat. Maybe it ends up learning that anything with a bunch of legs is a cat, so whenever you ask it to produce a new cat it draws an image with a thousand legs, which is not realistic. It's very easy for a machine learning system to just regurgitate outputs it has seen before, but the goal of the methods we'll show you in this lecture and the next one is to generate genuinely novel images given some dataset. The technique we're going to use is called variational autoencoders, but before we discuss that, let's discuss what an autoencoder is. And I apologize — I don't have my speaker notes in front of me, so these explanations might be a little rough; please bear with me. Let's first talk about the JPEG algorithm. Quick poll: how many people in this room have heard of JPEG before? Okay, I was really hoping it was everybody. Out of curiosity, does anybody have some knowledge about how JPEG compression works — maybe some things they've heard about it? That's perfectly fine if you really don't know, so I'm just going to explain briefly. It turns out there are a few different ways you can store images, and one of the more popular formats is PNG. Essentially, what a PNG does is store all the different colors at all the different locations in the image — intuitively (and this isn't exactly what's going on under the hood), it just records, pixel by pixel, the color at each location.
It's saying: at pixel (0, 0) I have this value for red, green, and blue; at pixel (0, 1) I have this value; and it does that for every pixel in the image. But that's not a very efficient way of storing images, because intuitively, two nearby pixels are likely to have very similar colors. Look at this image of a cat: if I told you that one pixel on the cheek is orange and you had to guess the nearby pixels on the cheek, you'd guess that they're also orange. So there are patterns you can use to compress the amount of data you're storing. Here I'm showing a source image and then the JPEG algorithm applied at two different levels of compression. The middle image is a compressed version of the original, but it's actually very hard to tell the difference — almost none of the pertinent information has been lost, even though we've cut down on the number of bytes. The third image is a more extreme version of the compression: it very clearly has artifacts, little corruptions of the image, but it's still recognizable as a cat. I don't have the actual file sizes off the top of my head, but you can expect the third image to be much, much smaller than the original while still retaining the semantic information that it looks like a cat. So, taking a step back: why are we doing compression in the first place? There are a few reasons. One is that you might want to save space — suppose you have a database of 100,000 images of cats; you don't want to completely blow through your computer's memory budget, so you compress each image to reduce the overall space you're using. Another application is sending images over a network: you're usually limited by the number of bits you can send per second, so you want to reduce the number of bits per image so you can send more of them in a reasonable time frame. The final thing I want to mention is that, because of these other goals — saving space, reducing network bandwidth — your compression algorithm doesn't necessarily have to be perfect. By perfect I mean it's okay if you lose a little bit of information: if the top-right pixel of the image has the value of its red channel changed by a couple of percent, it's very unlikely a human could perceive the difference, but it might help whatever algorithm we're using compress the data into a smaller amount of space. So there are a few practical reasons for compression, but I think, more fundamentally, the reason we as machine learners care about compression is that compression relies on having an understanding of the data. That's something I'm going to go into in more depth shortly, but if you only remember one thing from this lecture, I want you to remember that compression means you must have understood the data at some level.
So let's go back to the concrete example of how JPEG works. The reason JPEG works is that nearby pixels are related — you have some value at one pixel, and the pixels around it are likely to have very similar values — and also that very fine features in an image tend not to be so important: the human eye cannot distinguish a couple of percentage points of color difference across two adjacent pixels. I'm not going to go too deep into the full signal processing involved here, but JPEG uses a Fourier-style transform, which represents the image as a sum of sines and cosines, and it treats the components with very high frequency as not very important to the actual image. It's basically saying that tiny pixel-to-pixel differences don't matter much; what matters is the general structure of the image — how these ten pixels compare to the ten pixels next to them. Those are the kinds of things humans really perceive, and those are the things we want to preserve. So in essence, what the JPEG algorithm does is take that transform and simply throw out some of the higher-frequency information, the pixel-to-pixel relationships that don't really matter for human perception. Let's take a moment to appreciate what we did here: because we understood that human perception has these characteristics — that it doesn't care about pixel-to-pixel detail — we were able to reduce the size of the image. Because we had some understanding of the patterns in the data, we got a reduction in our storage cost. Now I want to take this one step further. The JPEG algorithm is great because it generalizes to pretty much any realistic image you might take with your camera, but what if, just as an intellectual curiosity, you wanted to make a compression algorithm specifically for cat images? For general images, the only information we could rely on was that nearby pixels are correlated. But if we know the images are of cats, maybe some more advanced compression scheme would work: maybe you just specify the locations of the legs, the location of the tail, the location of the head, and then store a few additional bits for the color of the cat and its pose — and with all that information, you can reconstruct a very good estimate of what the cat looks like. What are we doing here? We're relying on much deeper knowledge of how a cat is structured than the JPEG algorithm did. JPEG only looked at pixel values and used the assumption that nearby pixels are close to each other; here we're making much stronger assumptions, saying that if we know the object in the image is a cat, then we know generally how cat legs are shaped and how cat bodies are shaped, so you only have to tell me the particular aspects of this cat and I'll be able to deduce the rest. At a high level, that's how it would work.
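Going back for a second to the frequency-domain idea behind JPEG: here is a toy NumPy sketch of "keep the low frequencies, drop the high ones" on a one-dimensional signal. It is not the real JPEG pipeline (which uses block-wise DCTs, quantization tables, and entropy coding), just the core intuition that the coarse structure survives aggressive truncation.

```python
import numpy as np

signal = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.05 * np.random.randn(64)

spectrum = np.fft.rfft(signal)
spectrum[8:] = 0                           # throw away everything above a low cutoff
compressed = np.fft.irfft(spectrum, n=64)  # reconstruct from the few kept coefficients

# The error is small relative to the signal's amplitude: coarse structure is preserved.
print(np.abs(signal - compressed).max())
```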
That being said, it turns out to be very difficult to engineer this by hand: what does it mean to look like a cat leg, what does it mean to look like a cat body? Your intuition from the previous few weeks of lecture should be kicking in now: whenever we face the challenge of representing features of data that are really hard to hand-engineer, really hard to hand-code, that's where you should be thinking about how to apply machine learning to learn those patterns automatically. Here I'm just showing a cute example from a few years ago called eigenfaces. This is from way back in the early days of computer vision, before we had all these fancy convolutional neural networks, and it uses a technique called PCA — if you've taken EECS 16B you might be familiar with it. Essentially, they took a bunch of images of faces that were all centered and cropped the same way and applied dimensionality reduction to extract the typical features of faces. In some of these eigenfaces — look at eigenface number 11, for example — you can see a distinct smiling mouth with teeth, two patches that correspond to eyes, and something like a nose. It turns out that by combining characteristics of these different eigenfaces, you can represent a large class of different faces. So this machine learning algorithm was just told to represent the face data in a sensible way, and it ends up learning the pertinent characteristics of faces. This finally brings me to the main topic of today's lecture, which is the autoencoder. Earlier I mentioned that eigenfaces used PCA to compress faces and understand their pertinent features; here we're going to use a neural network to accomplish the same task, and on the screen you'll see the structure of this neural network. Let's take a moment to study it. There's an input layer, which might be your input image — your picture of your favorite cat. Then it goes through a series of layers and comes to something labeled here as the code; it's called the bottleneck layer (I apologize for not writing this on the slide, but this smallest layer in the center is called the bottleneck layer). And then from the bottleneck layer it goes back out to the output. One thing you'll see visually on this slide is that the input is a very tall vector while the bottleneck is a very short vector. What actually occurs in a real implementation of an autoencoder is that you might start with a 128-by-128 input image, and then convolutional layers decrease the size of the representation until it reaches a point where it's only maybe 30 by 30.
Then you'll have a second convolutional network that takes that 30-by-30 representation and tries to blow it back up to output something that looks like the original image. Let's go over the structure again: the first half of this network is called the encoder — it takes your input and produces a representation, a vector that's going to be a lot smaller than the original input — and the second half is called the decoder, which takes that representation and produces an image back. The goal of this encoder-decoder structure is that your original image and your output image should be very close to each other. On the next slide you'll see we apply something called a reconstruction loss to this, basically saying we want x and x-hat — the original image and the decoded result of passing it through the network — to be very similar to each other. Now, one very natural question to ask is: why would we design a network that takes an input and produces the same output? Can somebody propose a very simple way of designing a network whose output looks very similar to the input, without any additional intelligence? Exactly: if you just took the function f(x) = x, the output would be exactly the input, and that's not a very useful function. So why do we care about it here? The key fact is that the code — the output of the bottleneck layer — is much smaller than the input, and in doing so we're forcing the network to compress; by forcing compression, we're forcing it to learn some features of the data. Yes, a question? The question from the audience was: could you somehow take the output of the encoder and add new features to it? That's a very good question, and it's actually something we're going to address in the later part of this lecture once we get into variational autoencoders; it turns out that with just this network structure, it's not possible right now. So what do we want from a code? A few things. One is that we want it to be small, because the smaller we can compress it, the better the understanding of the data must have been. But there are other things we might want from this kind of encoding. One of them is that if you take two similar images, you might want them to produce two similar codes. The notation I'll use throughout the lecture: x is the input, c is the code — in the machine learning literature it's usually called the latent, so z is the latent — and x-hat is the output. If two different x's are similar, we want the corresponding z's to also be similar, and that's not actually enforced by this autoencoder structure. In fact, there's no structure imposed on the z vectors at all; they can just be arbitrary points in whatever dimensional space they live in. But it's very typical to want to go back in and do operations on the latents — say we have an image of a poodle and an image of a golden retriever, and we might wish we could interpolate between those images and get some new breed of dog.
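Here is a minimal PyTorch sketch of this encoder–bottleneck–decoder structure with a reconstruction loss, assuming flattened 28×28 inputs and a 32-dimensional code; the sizes and the `AutoEncoder` name are just illustrative, not a prescribed architecture.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, code_dim),            # bottleneck layer: the code z
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)                      # compress
        return self.decoder(z), z                # reconstruct x_hat from z

model = AutoEncoder()
x = torch.rand(16, 784)                          # a batch of fake flattened images
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)          # reconstruction loss ||x - x_hat||^2
loss.backward()
```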
But it's very likely that if you just took the two encoded representations — the two z vectors for these two different images — and added them together (sorry, averaged them together), you'd get some garbage once you pass the result through the decoder. It turns out that the autoencoder structure as shown right now isn't imposing any kind of sensible structure on the codes being generated, and therefore it's possible for that space to have some very undesirable properties. Yes? The question was, first, whether our main use of this network is the encoder — yes — and second, whether the decoder would involve deconvolutional layers. That's definitely one way to implement it; autoencoders are a very general structure, so you could replace the encoder and decoder with Transformers or something else depending on your application, but yes, deconvolutional layers are a way of doing this. All right, so now I'm going to talk about variational autoencoders. Variational autoencoders build upon some of the ideas we covered for autoencoders, but they give us some really magical properties — properties that will let us do things like take two images and average them in a sensible way. Going back to the example I was describing earlier: if you have two dogs and you somehow average their codes, then when you decode the result it should give you something that looks halfway between the two input dogs, and in a sensible way — if you have a really short dog and a really tall dog, the output should be a medium dog, not something with, say, two heads or some other artifact. So how do we get those kinds of sensible results when we apply different operations to these latent vectors? Before I go into that, I'm going to describe more generally our generative framework, because it's something we'll return to a few times over the course of this module. In general, whether we're doing image generation or text generation, we're going to sample something called a latent vector that carries the information of the image or text, and then use that to produce the final output. The intuition I'm using is that the latent vector acts like genes: every human has some set of genes, and that set of genes largely determines the structure of your face. Even though you can't observe the genes directly, maybe if you had a dataset of a large number of people you could start looking at the relationships between the different faces and deduce, say, that this person has some gene that codes for red hair, and similar properties. And if you wanted to generate a new face, your general approach would be:
first, choose some set of genes for this hypothetical human, and then, given those genes, choose a face that corresponds to them. Actually, quick poll: how many people have seen the word "prior" before, and how many understand what I mean by p(x | z)? A few people — okay, so some of you are a little familiar with this, but just as a quick refresher on notation: when I write p(x | z), this means "x given z" — once you know z, there is some probability distribution over x. And when we say a prior: we first generate our set of genes, and there is some distribution over possible sets of genes — when you're born you're randomly given genes that code for long hair, or genes that code for shorter hair, and so on — so we model this by saying your set of genes is drawn from some probability distribution. Then, once we know which particular genes you got, your facial structure is some function of those genes. I'm not saying it's necessarily a deterministic mapping, because there are things that affect your facial structure aside from your genes — a real-world example is your diet, the amount of exercise you did as a kid, or the environment you grew up in, which could all slightly affect aspects of your appearance. But the point of abstracting these notions away is that, even when a process in the real world is deterministic, as a machine learner you might want to model it as random if you don't understand all the underlying factors. For this example, you might not be able to capture all the other things affecting people's facial structure, so you just model it as a probability distribution — maybe 50% of people ate this growing up and 50% ate something else. Okay, that was a bit of a detour introducing notation, and I don't think all of it is super important, but there's one thing from the previous slide that I do want to highlight: the encoder's job is to take some input x and produce a latent vector z. I'm using these terms interchangeably — latent, code; the word "code" comes from a field called coding theory, which actually predates machine learning, but in the machine learning literature it's more common to call it a latent. So the encoder takes an input image and produces a latent, and the decoder takes that latent and produces its estimate of the original image back. All right — first of all, can everybody see what I'm writing on the board? Okay. The approach we're going to use to ensure we get a sensible structure on the latents — to get the desirable properties that (I apologize, I don't know your name, but) one of the students asked about earlier, like being able to take the latent vector and add things to it, add more features — is to impose some structure on this latent space.
The technique we're going to end up using is probabilistic encodings. The idea is this: instead of mapping an input to one particular point, what if it maps to a distribution over possible Z's? That's what the probability notation is saying: Z is drawn from a distribution that depends on X. So X might map to this entire region. The reason that going from a single point to an entire region is useful is that it forces nearby points in latent space to decode to similar outputs. That's a really important point, so please stop me if it doesn't make sense. Originally the encoder gave one point, and the decoder took that one point and produced the original image. Now the encoder produces a probability distribution; for now, just think of it as some big circle. Every point in that circle must decode to the original image, and because every point in the circle must decode to the original image, nearby vectors in this latent space have to decode to similar images.

So this brings us to variational autoencoders. The basic idea behind the variational autoencoder is that we have the same encoder-decoder structure we saw earlier in the lecture: an input image is passed through a neural network encoder, which gives some code or latent vector, and using that latent vector you decode and get something you want to be close to the original image. But now there's a caveat. Before, the input X would produce some Z and you would feed that directly into the decoder to get X hat. Now we do something different: we use X to output the parameters of some Gaussian, and then we sample Z from that Gaussian. First of all, how many people here are familiar with the concept of a Gaussian distribution? Okay, for those who aren't: in one dimension you might know it as the bell curve; you've probably seen that kind of curve before. The Gaussian is a very important distribution in probability and statistics, and it shows up all the time in real-world statistics. But the reason we're having these encoders output Gaussians instead of individual points is entirely the picture I drew on the board: instead of forcing a single point to get decoded, we force what is essentially a circle somewhere in latent space, and that entire circle has to get decoded to the same thing.

So the structure of the variational autoencoder loss is that we include a reconstruction term, the first thing you see here, which is the squared norm of x minus x hat. It says that the input to the network and the output of the network should be close to each other. But now we're also adding a second term.
I can't for the life of me remember what this second term is called right now, but it forces the output of the encoder to look like a normal distribution, and it gets very mad if the output does not look like a normal distribution. So it prevents the encoder from just producing one point and learning only one point to output; instead it encourages the encoder to output large sets, where the entire set has to be decoded to the same point. You can see there are contrasting objectives in this loss. What the autoencoder would like to do is map every input to a single point, really far away from the others, so that whenever something gets decoded it can say "oh, you're from that point, I know exactly which image you came from." But what this other term in the loss is doing is forcing all the points to be close to the center of the plane, and also forcing them to have some variance, so they're actually open discs instead of individual points. If you don't know too much probability, that's okay; the main intuition to remember is that this forces the network to learn that nearby latents should decode to nearby outputs.

And the reason that having this kind of structured latent space is so useful is that it lets you do some pretty incredible things. One application I mentioned earlier that I want to come back to, just because it's so surprising, is that once you switch from autoencoders to variational autoencoders, you can do things like interpolate between two images. You might have seen demos online, and I wish I'd included one in the slide deck, where you input two celebrities' faces and get a slider that slides between the two. If the demo is any good, it behaves in a sensible way: if one celebrity has a very pointy chin and the other has a round chin, the slider makes the chin become less and less pointed as you move it along. The reason that works is that the variational autoencoder forces all these latent vectors to live together in the same space with these nice properties, like nearby latents decoding to nearby points. So if you take two latent vectors and draw a path between them, and you try decoding points at each step along that path, each of those decodings progressively gets further from X1 and closer to X2.

Yeah, so it turns out that what you do is take a neural network and have it output the parameters of that Gaussian distribution: the network outputs some vector mu and some vector sigma.
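As a rough sketch of how these pieces fit together, here is some illustrative numpy. The encoder and decoder below are trivial stand-ins for real neural networks, and the sampling line uses the standard trick of writing z as mu plus sigma times unit Gaussian noise (the "hack" that makes training possible, which the lecture only alludes to); none of this is the actual training code from the paper.

import numpy as np

rng = np.random.default_rng(0)

def encoder(x):
    # stand-in for a neural network that outputs the Gaussian's parameters
    mu = 0.5 * x[:2]
    log_var = -np.ones(2)            # log of sigma^2, a common parameterization
    return mu, log_var

def decoder(z):
    # stand-in for a neural network mapping a latent back to an image
    return np.concatenate([z, z])

x = rng.normal(size=4)               # pretend this is a flattened input image
mu, log_var = encoder(x)
sigma = np.exp(0.5 * log_var)

eps = rng.normal(size=mu.shape)      # sample unit Gaussian noise...
z = mu + sigma * eps                 # ...so z ~ N(mu, sigma^2), written differentiably

x_hat = decoder(z)

recon = np.sum((x - x_hat) ** 2)                            # reconstruction term
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))   # KL(N(mu, sigma^2) || N(0, I))
loss = recon + kl
print(recon, kl, loss)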
Those parameters get fed into a Gaussian distribution, you sample Z from that Gaussian, and that sample gets fed through the decoder. Are you asking how mu and sigma are computed? Sorry, could you repeat that? Yeah: you know how, for the different machine learning tasks we've discussed so far, we take some input vector, apply some layers to it, and out comes an output vector? That's exactly what's happening here. Yeah, exactly. And yes, I'm leaving out the details of how to train this, because those details are really tricky, and your intuition is spot on: this sampling step looks very different from what you learned at the start of the course, where we said each layer of a neural network has to be differentiable in order to do backpropagation. It turns out you cannot backpropagate through the sampling, but there are some hacks that get around it. I didn't go into much detail about exactly what the KL term in the loss was, but the paper has a bunch of really good ideas that somehow made this work, and now that those ideas exist in the world you can think about how to extend them and modify them to fit your own applications. Also, the choice of the Gaussian here is really pretty arbitrary; one reason it's used is that it makes the final expression for the loss a bit nicer to compute.

All right, this is actually a pretty natural stopping point. Do people have additional questions on autoencoders or on VAEs? Is anyone lost? Does everything so far make sense? All right, so now we'll discuss the final part of the lecture, which is vector quantized VAEs. So far, whenever we've discussed VAEs we've been working in a continuous setting: there's a latent vector Z that lives in a continuous space, and we sample it from a Gaussian. But you can see pretty quickly that Z landing in this disc forces it to take continuous values; the Gaussian is not a discrete distribution, so you should not expect your probabilistic encoder to give you discrete outputs. Can anybody give an example of a setting where discrete outputs make sense, where you only want one of N things as the possible output of the network? Yeah, classification is a good response; classification is a correct answer to the question I asked. The question in my head was slightly different from the one I asked, but that's a nice observation. I was going more toward generation: if you're generating visual art, it makes sense for things to be continuous, because if you change a pixel value by one percent it's essentially the same pixel and essentially the same artwork. But if you're trying to generate poetry, for example, what does it mean to change a word by one percent? There's no good definition of changing a word by one percent: either it's a valid English word or it's not.
So you can only change text in discrete increments. I don't think you've been exposed to much NLP yet, but essentially, whenever you work with textual data it turns out you represent it with discrete tokens. You might have a dictionary of, say, the five thousand most common words in the English language, and whenever you have some text you say "this is word one, this is word ten, this is word five thousand." Everything is one of those discrete values from one to five thousand, encoding the words in the text. This means that architectures meant to work on natural language, or other data involving words, tend to prefer discrete representations; if you happen to have heard of Transformers, Transformers tend to be applied to this kind of discrete data. So if you want to generate text instead of images, it makes sense to find some way of extending VAEs to work in these cases where you're using discrete tokens.

The hack people came up with is something called vector quantization. Actually, another quick poll: have people heard of the one-nearest-neighbors algorithm? Okay, it looks like a pretty mixed bag. The basic idea (I'll just point to the image on the slide) is that you have a bunch of vectors, the ones drawn as X's. We say we have a codebook of valid codewords, and whenever we see a vector, we round it to the closest codeword. In this example, focus on the top-left corner of the image: there are four or five different vectors in that region, and the closest element of the codebook, the closest red dot, is the one labeled y10, which I don't imagine you can read from there. Because that's the closest codeword, they all get rounded to that value. So essentially, a VQ-VAE does the same steps as a normal VAE, but after you sample from the Gaussian there is a separate step that quantizes those values, meaning it rounds them to elements of a codebook.

But that leaves the question of how this codebook is chosen. We have all these vectors being output by our VAE and we want to round them to a discrete set of values; how do we know which discrete set of values makes the most sense for the network? We could just hard-code it, say by making the codewords evenly spaced. But maybe most of the images in your data are closely related, and then there are some other images that are really different from those but related to each other; in that case evenly spaced codewords might not make sense, and it might make more sense for the codewords to correspond to the different clusters within your distribution. So as a result, we also want a step where we learn the locations of these codewords.
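Here is what that rounding step looks like in isolation: it's literally a one-nearest-neighbor lookup against the codebook. The codebook values below are random placeholders rather than learned ones, and the latent dimension is made up just to keep the example tiny.

import numpy as np

rng = np.random.default_rng(0)

codebook = rng.normal(size=(512, 2))   # 512 codewords (random here; learned in practice)
z = rng.normal(size=2)                 # a vector coming out of the encoder

dists = np.sum((codebook - z) ** 2, axis=1)
k = int(np.argmin(dists))              # index of the closest codeword
z_q = codebook[k]                      # the quantized latent that actually goes to the decoder

print(k, z, z_q)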
This ends up being the loss for a VQ-VAE. It's really complicated and I don't recommend you study it; this is probably one of the most difficult topics in machine learning. But the basic pieces are these. You have a reconstruction loss, similar to what we saw earlier: you want the output of your decoder to be similar to the input you passed in. That's one element of the loss. The next element is that you want the vectors being output by the network to be close to the codewords, because if the vector coming out of your encoder is really far from any codeword, it's a really strange operation to round it to something a large distance away. You want your codewords to more or less represent the centers of the clusters of your distribution. So there's a term in the loss, called the codebook alignment term, that essentially says the results of your encoding should be close to the codewords. And then it turns out we need another term in the loss, called the commitment term. I'm not going to go into much detail there, but notice that in this problem we're learning two things simultaneously: we're learning where the vectors output by the encoder should go, and we're also learning the codeword elements that represent the centers of the clusters of those encoder outputs. Because we're learning two different sets of things, the latent vectors as well as the things they're getting rounded to, we need two different terms in the loss to force those two things to stay close to each other. The exact mechanics, again, are not so important; I don't recommend stressing about this, and if you're very curious you can read the paper.

Okay, taking a step back: what we're doing is using an encoder to get a Gaussian, taking samples from that Gaussian, and rounding them to different values, which makes the whole thing discrete. But what's stopping this network structure from just learning to hard-code the input data? One of my issues with the first architecture I proposed, the autoencoder, was that it could just take your input data, map it onto some discrete set, and learn an inverse map for that, memorizing your data and not using the full latent space. It seems like maybe your codewords are going to end up hard-coding the input data here too. It also seems like, if you only have, say, 512 codewords, then when you actually do generation you'll only be able to generate 512 different outputs. Maybe that's tolerable in some settings, but once you're doing image generation, being able to generate only 512 distinct images seems really terrible; you want to generate a huge diversity of images. How do you achieve that? One naive solution is to increase the number of codewords you learn, but then you only have a linear relationship between the number of codewords your network learns and the number of distinct images it can output. So instead of doing that, we have each input produce many codewords.
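Going back to the loss for a second, here is a rough numpy sketch of the three terms, just to show their shape. The beta value is a made-up placeholder, and the real method also relies on a stop-gradient trick to decide which of the two alignment-style terms trains the encoder and which trains the codebook; I'm glossing over that entirely here, so treat this as an outline rather than the actual VQ-VAE objective.

import numpy as np

def vq_vae_loss(x, x_hat, z_e, z_q, beta=0.25):
    # z_e: encoder output before quantization; z_q: its nearest codeword
    recon = np.sum((x - x_hat) ** 2)               # decoder output should match the input
    codebook_term = np.sum((z_q - z_e) ** 2)       # pulls codewords toward encoder outputs
    commitment = beta * np.sum((z_e - z_q) ** 2)   # keeps the encoder committed to its codeword
    return recon + codebook_term + commitment

x = np.ones(4)
x_hat = 0.9 * np.ones(4)
z_e = np.array([1.0, 2.0])
z_q = np.array([1.1, 1.8])
print(vq_vae_loss(x, x_hat, z_e, z_q))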
This is going to be a little complicated, so let me illustrate with the example they gave in the paper. Suppose you had a 128 by 128 by 3 image; by that I mean the height and width of the image are 128 pixels, and each pixel has three color channels. What the encoder does is eventually bring this down to something like a 32 by 32 by 1 tensor: instead of 128 by 128 it's only 32 by 32, and instead of three color channels there's only one. Then each element of that 32 by 32 grid gets separately quantized to one of around 512 values. So it's still 32 by 32, but instead of holding a real value, each entry holds a quantized value between 0 and 511 or so. Using that, you can learn something that will output an image. The reason this works is that each element of the 32 by 32 grid is a different quantized codeword, and we said the number of codewords is around 512, so the total number of distinct things you can generate is 512 to the power of 32 times 32, and that's huge. Now we've gotten away from the problem where we needed as many codewords as potential outputs: by forcing the input to map to many codewords, the collection of codewords has exponentially more possibilities.

But so far I've not been very clear about how we actually sample. When we sample from a VAE, we just sample some Z from a normal distribution, pass it through the decoder, and that gives us a novel image, an image nobody has seen before. How do we generate from a VQ-VAE? We could try uniformly sampling the codebook elements, but maybe it turns out that in the true data distribution you have many more images of dogs than of cats, so you'd want to learn that some codewords are used more often than others. It turns out you can also have a step where you learn a prior over the codewords. I realize we're at time, so I won't go into detail about learning these priors, and you also don't know what an autoregressive model is yet, so I'm going to skip that part. Just to recap: we started with autoencoders, which take an image, compress it down, and reproduce the same image, which forces the model to get some understanding of the data. But we want to be able to generate new data, or take combinations of data, so we moved to the variational autoencoder, which forces the latent space to be shaped like a Gaussian distribution; whenever we want to sample a new image that nobody has seen before, we just take an element of a Gaussian distribution and decode it. And finally we discussed vector quantized variational autoencoders, which generalize these ideas to the discrete setting.
That's all I have to talk to you about today. If you have any further questions, come find me at the front of the room.
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_2_Intro_to_Deep_Learning_Part_1.txt
Is this on? All right, I've got everyone's attention for a minute here. Does anyone have questions from the last lecture? We brushed through it pretty quickly to try to get into deep learning as fast as possible. Does anyone have residual questions, or maybe you went back and looked at the slides and realized something wasn't quite answered? Any questions, comments, or concerns about the previous lecture that we can talk about before we get into things today? I think there was a little bit of confusion around the bias-variance trade-off, that kind of jazz, and one-hot encodings. Anyone? Yes, friend.

We're not going to talk about that very much. There are all different kinds of it. By exploratory data analysis, do you mean hyperparameter tuning, or how we choose the form the model takes and all that, or are you talking about just looking at your data and poking around? Yeah, so there are all kinds of things we're not really going to cover much here, like k-fold cross-validation; that's the stuff you'll learn in 189 or Data 100. For the purposes of this class you're probably not going to be doing a whole lot of it. But if you want to know how it works and how you'd go about doing it on your own: you partition your data into the chunk you're going to train on, a chunk you're going to tune on, and a chunk you're going to test on. You'll have a specific segment of your data that you use just for tuning; you try a whole bunch of different models, loss functions, and so on, see what works best on that data, and then use that choice when you move on to your training data and the rest. Does that kind of make sense? Okay, if not, I'm happy to elaborate as well. We're about at time to start, so if anyone has more questions, we can take a little time at the start of this lecture to address them. Yes, friend.

So, about why the splits need to be separate: the whole point is that they're fully independent of each other. If you find a model that works really well on the segment of data you're using just to tune, odds are it's going to look better than average when you actually go to report your final results. They're not entirely independent, because the model you chose depended on that data. So you want to make sure that whatever data you're reporting your results with is fully independent of the data you trained on and selected your model with. Is that fair? Okay. Are there more questions? If not, we can jump on in.

All right, party on. Welcome, everyone, to lecture two. This is again going to be a bit of a quick lecture, so please feel free to stop me if there are questions you have or things you feel I haven't fully explained. The first three lectures of this course, and technically the fourth as well, are just to get you into deep learning as fast as possible, so that we can actually start learning how to use the tools for deep learning and playing with stuff.
That is the short explainer for why this is going to be very quick. The hope is that after today you understand what deep learning models are, at least the most basic form of deep learning model, and how we go about training them. We've talked a little bit about different kinds of models, but we haven't really touched on how we actually go about selecting the best parameters, so we're going to make sure you know what the function actually looks like in the case of deep learning, as well as how you select good values of the parameters. We'll move fast, so please feel free to stop me at any point and let me take your questions; I just wanted to put that out there before we get started. Let me fix my speaker notes.

First we're going to start with a little bit of math review. We linked some things we wanted you all to know before starting this class, but just in case, we're going to go over some of them, a little bit of the linear algebra we're expecting, which will be useful later in this lecture. Again, this is a safe space, so please feel free to ask questions if you have them. Then we'll talk about the motivation for neural networks, about neural networks themselves, about how we train them with gradient descent, a little bit about some other building blocks you might want, bells and whistles for your basic neural network, and then a little about why deep learning is used and why it is so prevalent.

So, math review: vector dot products. This is something we want to make sure all of you are comfortable with. If you have two vectors, a vector of w1, w2, up to wn (we frequently use w's for our parameters) multiplied by another vector x1 up to xn, then this multiplication, which is referred to as a dot product, equals x1 times w1, plus x2 times w2, and so on, all the way down to xn times wn. As for how we notate it: we refer to the vector of w's as a row vector and the vector of x's as a column vector, and the whole thing is sometimes shortened to w transpose multiplied by x. Does anyone have questions about vector multiplication, and more specifically about this notation, how we get from writing out all the elements to just w transpose x, where w and x are vectors?
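If it helps to see that same conversion in code, here is the dot product in numpy; the specific numbers are arbitrary, just something small you can check by hand.

import numpy as np

w = np.array([3.0, -1.0, -1.0])
x = np.array([3.0, 2.0, 1.0])

manual = w[0]*x[0] + w[1]*x[1] + w[2]*x[2]   # x1*w1 + x2*w2 + ... + xn*wn
compact = w @ x                              # the same thing, written as w transpose x
print(manual, compact)                       # both print 6.0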
Okay, in that case we're going to roll on to the next one. The reason we're going through all of these is that we will use them in just a minute, and I want to make sure you're fully comfortable with this conversion from written-out elements to the shortened vector notation. And if I'm writing too small for the back, let me know and I can try to write a little bigger. So, matrix-vector multiplication. You can think of a matrix (and this probably isn't immediately apparent if you've only taken high school algebra) as just a list of row vectors: this whole thing right here is a vector, just like the vector of written-out w's we had on the board, then another vector, and so on. When you multiply that by a column vector, you can imagine distributing the column vector to each of the rows and doing a whole bunch of dot products. At the top you'll have w1 transpose x, the dot product of the first row w1 with x, then w2 transpose x, all the way down to wn transpose x, where all the w's and the x here are vectors. Does anyone have questions about how we can make matrix-vector multiplication compact by representing it, as you see on the bottom right, as a series of dot products between the rows and our original column vector? Okay, if there are no questions I'll move on to the next one.

Don't be scared by the thing at the bottom of the slide; it smells your fear. The important thing I want to talk about here is notation, how we index into our vectors. With neural networks we're going to be talking about black-box functions: some magical black box that takes in a vector x and spits out another vector y. As we start to talk about what's inside this black box, we'll be referring to elements of vectors, so I want to get you comfortable with the notation first. x usually refers to the input vector of our function. y usually refers to a label vector: we talked a little about one-hot encodings, and how, if you're classifying digits with 10 possible output classes, it's convenient to represent the label for the digit three as zeros everywhere except a one in the position for three; that is an example of a label y, and it is a vector. W is often a matrix of parameters; every element in the matrix is a parameter, something we're trying to learn. b, if we have one, is a vector, usually also of parameters. And theta is a sort of shapeless thing that refers to all of our parameters together; on this slide it is a vector. When we subscript things, like b subscript i, that refers to a single scalar: we have a vector, we index into it, and we pick out a scalar. The individual elements of W are all just scalars too, and each of those scalars is a parameter we're trying to optimize. Are there questions about this idea that we can have matrices and vectors of parameters, but all they really do is contain our scalar parameters, just values we're trying to optimize? Questions, comments, concerns? Okay. So again, don't worry about the thing at the bottom; I just wanted to emphasize that you can have vectors of parameters and matrices of parameters, but all the parameters, individually, on their own, are just scalars, the same as you've seen before.
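Here is a quick sketch of the matrix-vector version, plus the shapes of the objects we just named. Again, the specific numbers are arbitrary; the point is that the matrix product is exactly a stack of row-by-column dot products.

import numpy as np

W = np.array([[1.0, 0.0, 2.0],    # row 1 = weight vector w1
              [0.0, 1.0, -1.0]])  # row 2 = weight vector w2
x = np.array([3.0, 2.0, 1.0])     # input vector
b = np.array([0.5, -0.5])         # bias vector

stacked = np.array([W[0] @ x, W[1] @ x])  # matrix-vector product as stacked dot products
print(stacked, W @ x)                     # identical: [5. 1.] [5. 1.]
print((W @ x + b).shape)                  # one scalar per row of W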
Now, derivatives of vector functions. Suppose our function spits out a scalar, say it's just the dot product between a vector of constants a and the input vector x. There's no reason we can't take a derivative of the output with respect to one of the inputs to the function, because at the end of the day everything inside a and x is just a scalar. a transpose x boils down to scalar multiplication and addition, even though it all came packaged in vectors and matrices, so there's no reason we can't take boring old partial derivatives with respect to the inputs. Are there questions about this? It's something I really want to make sure we're all clear on, because it makes life really easy going forward if everyone's on the same page. Other questions? All right.

So with that in mind, we can start motivating and talking about neural networks. Where does the motivation for a neural network come from? There are all kinds of problems we can solve in machine learning, regression, classification, reinforcement learning, and frequently they revolve around super non-linear functions. If you're trying to regress onto a polynomial that looks super funky, you're trying to learn something very non-linear; it doesn't look like a line at all, it's a really complex function you're trying to model. We want a universal model that's really good at doing all these different kinds of tasks. The motivation we're going to take for that is the human brain: the brain is really, really good at doing all kinds of tasks. It can classify objects, it can figure out where objects are, it can do all kinds of crazy stuff, and it's really just one model that can do all these different things. We're going to take some inspiration from that and ask whether there's some kind of model we can reuse for all different kinds of tasks down the line, something that's just good out of the box. Your brain has neurons, little dudes in your brain that take inputs from surrounding neurons and, depending on the sum of all the inputs into that neuron, spit out a value. And just a fair warning: this is not the same as Cog Sci. If you want to take Cog Sci, that's fine, but don't be baited into thinking you'll get some special insight into deep learning by taking it. Maybe some people do, but I've yet to hear about it; people sometimes get into it because of neural networks and then feel like they got cheated. We're just going to take some inspiration from the brain.

So let's draw out what we want our black box to be. We're basically going to try to make a sort of mini brain, and the simplest form of that is the perceptron. It's a mathematical formulation that's pretty similar to a neuron: we have a whole bunch of inputs, our vector x1, x2, all the way to xn, in blue, and just like the incoming connections from one neuron to another get weighted and then summed up, we're going to weight them all, add them up, and then activate on them with some kind of step function.
We're going to hijack this, though, and replace the step function with a ReLU, which, as you can see on the graphic, spits out zero for any value less than zero, and for any value greater than zero spits out the value it took in. So, to back up: we take in a bunch of inputs, we scale each of them by a different value, and our w0 ends up just being added on; that's the b on this slide, so we'll refer to w0 as b. That's really all the perceptron is, but the whole point is that it sort of looks like a neuron. It's pretty close: it has a whole bunch of incoming connections and one outgoing connection whose value depends on what all those inputs were. A neuron takes in inputs, weights them, adds them, and spits the result out through the ReLU function.

Here's a little demo, basically a toy problem. Say our inputs are 3, 2, and 1, and we have weights 3, negative 1, and negative 1, with a w0, our bias term, of negative 2. We take our first weight and our first value and multiply them together: 9. Next weight, next value: negative 2. Next weight, next value: negative 1. We add them together with our bias of negative 2, which gives us 4, and then we pass that through the ReLU function; because the value is greater than zero, the ReLU spits out the exact same value it took in. On the right-hand side it's the same thing, only now all of our weights multiplied by all of the inputs, plus our bias, sum to a value that's less than zero, and when you pass a value less than zero through the ReLU, it just outputs zero. Are there questions on this example and how the math works? Yes: the graph in the top left is just the ReLU function. It takes in a scalar, and what it spits out is the value it took in if x is greater than zero, and otherwise it spits out zero; the output graph just looks like that. Fair enough. Are there more questions? Yes: are the weights arbitrarily chosen, so every perceptron could have different weights and a different bias? They're learned values. Depending on what you're trying to do, for a different data set you might learn different values; they're learned, and they can be whatever optimizes our loss function. Are there more questions?
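Here is the toy example from the board as code, with the same inputs 3, 2, 1, weights 3, negative 1, negative 1, and bias negative 2.

import numpy as np

def relu(v):
    return np.maximum(v, 0)

x = np.array([3.0, 2.0, 1.0])
w = np.array([3.0, -1.0, -1.0])
b = -2.0

pre_activation = w @ x + b        # 9 - 2 - 1 - 2 = 4
out = relu(pre_activation)        # 4 is positive, so the ReLU passes it through
print(pre_activation, out)        # 4.0 4.0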
If not, let's figure out how to write this a little more compactly, because writing out all of that addition is really gross and takes a lot of space. An easy way to write it is as a dot product between all of our inputs and all of our weights, plus a little bias term at the end. You can see that down at the bottom: what we had before was a whole bunch of xi times wi terms, and then we added b; on the right we just represent that as a dot product between our vector of weights and our input vector, plus our scalar bias. Does anyone have questions, comments, or concerns about the way I've written this? Because this is going to get used a bunch more; we're about to go hog wild here. Okay. So this is the effective formula after you've added the ReLU: this is the perceptron, our simplified model of what a neuron is doing.

But your brain has a lot of neurons, and they're all stacked all over the place, so we're going to go absolutely nuts: we're going to stack and cascade arbitrary numbers of perceptrons. What we've basically done is, for the top blue circle, say, you can think of that as a perceptron: that top blue circle is a function of all the inputs before it, weighted and then output again. We've taken a whole bunch of the perceptrons we just saw and stacked them on top of each other, and then we took the outputs of all of those perceptrons and fed them into another stack of perceptrons. It sort of resembles a brain: the top blue guy takes in a whole bunch of things, you can think of it as a neuron taking the outputs of a bunch of other neurons, and it spits out the ReLU activation on that; and on the right, the green guys take the outputs of a whole bunch of neurons that came before them, weight all of the incoming connections, and spit out a value.

Let's look at just a single layer, because all of that at once is kind of overwhelming. A single layer is just a stack of perceptrons, so we can think about doing the perceptron operation a whole bunch of times. We now have a function of the weights of all of our different perceptrons; in this case wi is the vector of weights of the i-th perceptron. We take all of the independent weighted sums we just did and stack the outputs into a vector, and we can shorten that using our previous perceptron notation: if you look on the far right, it's literally just the perceptron formula applied a whole bunch of times to the same input. Does anyone have questions? Again, it's just that formula, stacked a whole bunch of times, and the ReLU in this case applies to each of the elements independently. Yes, friend: yeah, you've done that a whole bunch of times and you're just stacking it; they're all taking in the same input, but each perceptron has its own unique weights. Can I provide any more clarification? Yes: what we're doing just boils down to the dot products. You can see on the far right that each element in our output vector is just a perceptron output, and the perceptron output is just the dot product with the scalar added; each of the biases is different, and each vector of weights is totally different and independent. Are there more questions I can answer? Yes: yeah, eventually, when we bring a whole bunch of layers together, you'll see how this works a little more, but we're going to try to figure out a good value for every single element in every single W and for every single b. And the W again just has scalars in it, right? Scalars aren't scary, they're our friends, so it is eventually going to boil down to a whole bunch of scalars being added and multiplied together. Are there more questions?
Okay, so here's a sort of toy example. Turn to your neighbor and talk about this for a minute: we've notated all the weights and the input values, so turn to your neighbor, talk through doing the math, and figure out what you think the output will be. Make some friends. Oh, and I also forgot to note: the output layer will not have a ReLU on it. Can I get a show of thumbs to see how people are feeling about this, thumbs up, or thumbs to the side if you feel like you still want more time? Okay, then we'll keep going. If you end up doing this, you get negative three. The first unit in the middle layer, when you actually calculate its value, is less than zero, so after the ReLU you end up getting zero. The other two, because they are greater than zero, pass through the ReLU unchanged, and you get the original values, which are five and one respectively. We did not apply a ReLU to the output, and when you weight the values we obtained from the middle layer, you get negative three. Can I get a show of thumbs to see how many people got that? It was kind of a lot of basic addition, and doing this in my head is impossible too. Okay, it looks like people felt okay with that.

So this is a very small, very basic neural network; that is what it is, and that is what it looks like to compute the outputs of our neural network. If we want to make our notation even more compact: previously we listed each element in our output as the result of a dot product and a single addition. We can compactify this even more and collect everything into one vector. The thing on the left-hand side is literally just matrix multiplication, a bunch of w dot products with our input x; it's the same thing, just matrix-vector multiplication, and again the ReLU applies to every element individually. So this is the most compact form of a forward pass for a single layer of our neural network; that is the way to write it. We have achieved what we wanted: a very compact way to write one little layer of a neural network. Yes, friend, the ordering? Yeah, sorry, it wasn't very clear from this, but I was hoping the dude on top corresponded to the first weight vector, the second output in this middle layer corresponded to the second weight vector, and so on. That's just me making a crummy little diagram; normally you'll just represent your weights as row vectors and notate it as we're doing here, and there will be no ambiguity. Good question, and I apologize. I'll see if I can make better graphics as we go, but I hope the point of this exercise got across.
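To tie the compact notation together, here is a two-layer forward pass in numpy. The weights and biases below are made up rather than copied from the slide (I don't have those), but I've chosen them so the run lands on the same hidden values (0, 5, 1) and output (negative 3) mentioned above; the real point is just the ReLU(Wx + b) structure, with no ReLU on the output layer.

import numpy as np

def relu(v):
    return np.maximum(v, 0)

x = np.array([1.0, 2.0])

W1 = np.array([[ 1.0, -2.0],     # first layer: 3 perceptrons, one per row
               [ 2.0,  1.0],
               [-1.0,  1.0]])
b1 = np.array([0.0, 1.0, 0.0])

W2 = np.array([[1.0, -1.0, 2.0]])   # output layer: a single perceptron
b2 = np.array([0.0])

h = relu(W1 @ x + b1)   # hidden layer: ReLU applied to each element independently
y = W2 @ h + b2         # output layer: no ReLU here
print(h, y)             # [0. 5. 1.] [-3.]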
Are there more questions? All right. So this is a little bit of a side track, but let's talk about classification. We have a neural network, and the example we just did can be scaled out so we have many layers, even bigger ones, and we end up with millions and millions of weights. With that, I want to take a short digression and talk about how we would go about interpreting the output of our model. In this example, the output could be anything from negative infinity to infinity, but that's not super useful if we want to try to classify a digit. Previously we talked about what happens if we have pictures of the digits zero through nine; say this one is a seven, but maybe it could kind of be interpreted as a two. We've talked about representing the label for this as a one-hot vector, with zeros in all the places that don't correspond to a seven and a one in the seventh place. And we talked about how we might want to do something like mean squared error: we want our black box to spit out something that looks like that label. But how do we constrain the output? How can we make sure we can interpret the model's outputs as the probability that the model thinks this is a one, a two, a three, and so on? How can we make sure, with math, that the output can be interpreted as our confidence that this is a seven? If the model isn't sure, maybe that's a seven, maybe it's a two, it might output a 50 percent chance that it's a two and a 50 percent chance that it's a seven. How do we make sure the model's output can be interpreted as the odds of it being each class?

We're going to use one last operation at the very end of our neural network, called softmax. What do we want? We want two things in order to make sure the output can be interpreted as probabilities: we need to make sure all the elements sum to one, and we need to make sure no value is less than zero or greater than one, because having a negative-one probability is absurd. So we need some kind of function that always outputs a vector where all the elements are between zero and one and they add up to one. What we're going to use is the softmax. Don't worry about the formula on the right; it's just there if you want to go back and look at it later. What we're basically going to do is this: maybe our network outputs crazy values like 3, 20, negative 50, insane values. The first thing we do is take the exponential of all of them. We'll use a base of e, just because it happens to make the math really nice, but you could use other values: we take e to the 3, e to the 20, e to the negative 50, and replace the vector with those.
What has this done? All of our elements are now positive, and that's a start: e to the power of any number is always greater than zero. The next thing we do with the formula is normalize. We sum all of these values up and divide every element by that sum (not by the length of the vector, I apologize, by the sum), so that when you add all the elements up, they sum to one. So it's very simply the act of taking all the outputs of the neural network, taking e to the power of each individual element, and then dividing by the sum of all of them. First, that guarantees everything is greater than zero; second, after we've normalized, the sum of all the elements is one. And because the sum is one and everything is greater than zero, no individual element can be greater than one. Does anyone have questions about that? It effectively means the model will never have perfect confidence unless it output negative infinity for almost every value; hence the meme. All of our probabilities will be positive; they might be small, but we're never going to output something that looks like an exact one-hot vector. It's going to be 0.001, 0.0001, very small values, but not zero, after we've done this operation. Not possible: softmax means it's not probable. The largest values stay the largest after you take the softmax; whatever value is largest going in will be the largest probability coming out, by a good margin. That was a little bit math-y, but I want to know: are people comfortable with the idea that we have a function here that takes in a vector of values and spits out a vector of values that sum to one and are all greater than zero? Okay. It's sort of a math-y aside; you'll never have to implement it or worry too much about it, it's just something you're going to throw on the end of your neural network.

So, to recap: we took a whole bunch of perceptrons, which are just weighted inputs plus a bias, we stacked a bunch of them up to create a layer, and we cascaded a ton of those layers, which gives us this neural network. And if you want to interpret the outputs of your neural network as probabilities of different classes, you just stick a softmax operation on the end of it; that's what you'll do for classification, which we'll talk about a good bit in this class. So that is the forward pass of a neural network. All of this crazy stuff that's been happening in machine learning boils down to this, and here's how we got here: taking the perceptron, notating it as a dot product plus a little scalar value on the end, stacking a whole bunch of them up to form a matrix-vector multiplication so you can represent a single layer of a neural network, and then cascading a bunch of layers, so you get something like this for a simple two-layer neural network. That is what it all boils down to. Again, if anyone has questions or comments, feel free to stop me, or I can keep rolling on.
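Even though you'll rarely write it yourself, here is the softmax operation we just walked through, applied to the crazy example values 3, 20, negative 50. (In practice you'd usually subtract the maximum before exponentiating for numerical stability; this is the plain version described above.)

import numpy as np

def softmax(v):
    exps = np.exp(v)          # step 1: everything becomes positive
    return exps / exps.sum()  # step 2: normalize so the elements sum to 1

logits = np.array([3.0, 20.0, -50.0])   # raw network outputs
probs = softmax(logits)
print(probs, probs.sum())               # small, nearly 1, essentially 0; they sum to 1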
So, a question was asked while y'all were working on that example: why do we even bother having this ReLU? If we took the ReLU out of this two-layer neural network, we would get something that looks like this, and at the end of the day, if you multiply a bunch of matrices together you just get another matrix. So why did we even bother with all of these layers, if, when you multiply everything out, it could have been summed up by a single matrix multiplication in the first place? The reason we add the ReLU is to add some kind of complexity, to make the function much more complex, because, again, a weight matrix from one of our layers, W2, multiplied by another matrix, W1, could just be represented by a single matrix in the first place. The ReLU adds the necessary complexity that allows us to model all kinds of crazy, complex things. Are people feeling okay about why we need these ReLUs? Okay.

So, gradient descent: how do we actually optimize our models? We talked a little bit about loss functions, some kind of metric for how good or bad the output of our model is, which ideally is high when our model is doing really poorly and low when it's doing really well. Our model output may be a vector, so we need to make sure we can handle that, that our labels and our outputs can be vectors, and ideally the loss should also be differentiable. We're going to be using a little bit of calculus here, and we'll talk more in a minute about why that matters and what it looks like. This is recapping what we talked about Tuesday: we discussed mean squared error. If this is the true output and this is what our model outputs, we take the difference between every element, square all the differences, and sum them up. That's not a bad metric for how well the model is doing, because if everything's really close, that means we've output something close to a one in the right spot and probably classified the seven correctly. It's a loss function we can use.

Now I want to make a metaphor. If you're on a hill and you want to get to the bottom, but you can only see about one foot around you, what do you do? The best move is probably to take a step in the steepest downward direction. Since you can't see everything around you, you take a couple of small steps in the direction of steepest descent, stop, re-evaluate whether the direction of steepest descent has changed, and then keep on trucking. This is how we're going to optimize our neural network. Our loss is a complex function of many variables, and the steepest direction at any given point is given by the gradient; this is something you learn in multivariable calculus. If you have a function of multiple variables and you want to figure out, at a given point, what the steepest direction is, you take the gradient of that function with respect to its inputs, and if you want to follow the steepest descent, you take the negative of that gradient. This is just the idea of hill climbing applied to multivariable calculus.
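As a tiny concrete version of the hill metaphor, take f(x, y) = x squared plus y squared, whose gradient is (2x, 2y), and repeatedly step a little in the negative-gradient direction; the value keeps dropping toward the bottom of the bowl. The function and step size are toy choices, but the loop is the same one we'll run on a neural network's loss.

import numpy as np

def f(p):
    return p[0]**2 + p[1]**2

def grad_f(p):
    return np.array([2*p[0], 2*p[1]])

p = np.array([3.0, -2.0])     # an arbitrary starting point on the "hill"
step_size = 0.1

for i in range(5):
    p = p - step_size * grad_f(p)   # step in the direction of steepest descent
    print(i, p, f(p))               # f(p) shrinks every iteration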
Again: in calculus, the steepest ascent, the direction you can step in that causes the greatest upward change, is the gradient, and likewise the opposite direction, the negative of the gradient, is the direction of steepest descent. This is hopefully something you've seen in math before; if you're not comfortable with why this is the case, feel free to check out Khan Academy, which has good explanations of why the gradient is the steepest direction you can step in.

So we have a complicated, admittedly, function of what are effectively scalars. Our loss function outputs a scalar, and all the things inside it, all of our W's and all of our b's, are just scalars, right? It's not really substantially different from a normal function written out in terms of scalars x1, x2, and so on; it's not substantially different from a multivariable calculus problem, in that all the elements inside our W's and b's end up just being scalars, our output is a scalar, and we treat our data as constants that you don't take derivatives of. So what we have here is just a complicated-looking calculus problem. Figuring out at face value what values of these weights would minimize the loss is a really hard problem, so frequently we're lazy, and what we do instead is initialize all of our weights and all of our biases completely randomly and then say: okay, we can't see everything, we don't know where the best place is, but we can kind of see which direction is a good one to step in. It ends up being the same thing as the hill: we take the gradient of our loss function with respect to all of these scalar parameters, and we step them in the direction of steepest descent, with the idea that if we take steps that are small enough and measured enough, and keep re-evaluating where the new direction of steepest descent is, then if we keep doing this over time we will hopefully come to a good selection of weights and biases. This is sort of unintuitive, because if we have millions of weights and biases, this is a million-dimensional gradient and a million-dimensional hill, and I encourage you not to try to visualize it like that. But I hope you take from multivariable calculus that when you have the gradient of a much higher-dimensional function, it is still the direction of steepest ascent, and if you move in that direction, it gives the steepest increase in the function's output. Here we've just written it out explicitly: the first weight matrix and its first element, and so on, then the second weight matrix and its last element, all of our bias vectors, all those individual scalars. That's what our loss function is a function of, and there's no reason we can't just turn this into a Math 53 problem.

So let's explicitly write out how we take these little steps. I should note again that the gradient vector ends up being just the partial derivative of our output, the output of our function, which is L here, with respect to all the inputs, and our inputs here are our parameters. It ends up just being the partial derivative of our loss function with respect to each of our individual parameters.
You can think of those partial derivatives as stacked up into one long vector; we'll worry later about reshaping it to match the shapes of the weights and biases. This is an unintuitive concept, so I imagine there are questions about this idea of stepping down the hill, figuring out the direction we can move all of the weights and biases that maximally decreases the loss.

A question came up about how we actually compute this. The loss is a continuous function, so you can take partial derivatives of the output with respect to the inputs. It is a very complex, very high-dimensional function, but it is continuous end to end. We'll talk in the next lecture about how you actually compute those partial derivatives; for now, the bottom line is that since everything in here is differentiable, there is no reason you can't measure a small change in the output when you make a small change in an input.

Another question: why does the loss depend on the parameters at all? Because if we choose different weights and biases, the function produces different outputs, so the loss changes. In that sense the loss depends on all of the parameters, which is exactly why we can treat this as a standard multivariable calculus problem, taking the derivative of some f(x, y, z, w) with respect to its inputs x, y, z, w.

So here is what we do. We take the derivative of the loss with respect to the parameters, evaluated at their current values; we don't care about the symbolic derivative, just its value at the current weights. Then we update: we take a small step in the opposite direction, with the size of the step scaled by the learning rate, which I'll call lambda. Each parameter gets nudged by the corresponding component of the gradient, scaled by that learning rate.

I'm running over time, but are there questions about this gradient update? We have all our parameters theta, the gradient is the vector of partial derivatives of L with respect to each component that went into the function, and we make a little increment in the direction opposite to the direction of greatest increase. It boils down to taking the gradient of the loss L with respect to all of the scalar parameters, which is hopefully not substantially different from what you have seen before in vector calculus; there are just a lot more inputs.
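To make that update rule concrete, here is a minimal numpy sketch of a single gradient-descent step. The toy loss and the grad_fn argument are illustrative stand-ins, not anything from the course code:

    import numpy as np

    def gradient_step(params, grad_fn, lr=0.01):
        """One gradient-descent step: move each parameter a small amount in the
        direction of steepest descent (the negative gradient)."""
        grads = grad_fn(params)          # vector of dL/d(theta_i), same shape as params
        return params - lr * grads       # theta <- theta - lambda * grad L(theta)

    # toy example: minimize L(theta) = ||theta||^2, whose gradient is 2 * theta
    params = np.random.randn(5)
    for _ in range(100):
        params = gradient_step(params, lambda p: 2 * p, lr=0.1)
    print(params)  # close to zero after many small steps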
So that is how we make a gradient update when we are looking at just one little piece of data and one output: we step so that we get the biggest decrease in the loss on that example. If we saw that same example again after making this little update to the weights, we would see a noticeably lower loss on it; that is the hope. We do need the learning rate lambda to be small enough, because if we take steps that are too big we will overshoot and suddenly find ourselves going back up the hill. By taking small, measured steps we should see the loss on this one example decrease.

The only other thing we really care about is that the loss decreases across all of our data. Now that we know how to step the weights to decrease the loss on one example, we need to figure out how to do it for all of the examples. If you do the math, it turns out that if you take the gradient on every single training example and average them, that average is the step that results in the largest decrease in the loss over your entire dataset. That is the direction we actually want to step in, so that when we run the whole dataset through again and measure the average loss, it is hopefully a lot smaller than before we updated the weights.

To recap: for every parameter, you take the partial derivative of the loss evaluated on every training example, average those partial derivatives across the data, and subtract that average off. You take the gradient on every example, step by a consistent amount, update the weights, and try again. It is a somewhat lazy method, because there is no good way to directly find the best values of all the weights and biases of a neural network.

I'm going to move a bit quicker here; hopefully you can review the slides or ask questions on edstem, because I don't want to keep you too long. If you want to do this in a more computationally efficient way, then instead of computing the gradient over the entire dataset, you chunk the data up and only look at little chunks, hoping each chunk is a good approximation of the whole dataset. This is called mini-batch gradient descent: rather than taking gradient steps computed over your entire dataset, you take them over a small batch and hope it is about as accurate. And the idea is that as we get closer to the bottom of the hill we slow down and take smaller and smaller steps, because the magnitude of the gradient shrinks as we approach a local minimum.
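Here is a similarly minimal sketch of the averaged, mini-batch version. The one-parameter toy model and the per_example_grad function are made-up stand-ins, and for simplicity it reuses the same chunk of data each epoch, where in practice you would cycle through different chunks:

    import numpy as np

    def minibatch_step(params, per_example_grad, batch, lr=0.01):
        """Average the per-example gradients over a small batch and take one step.
        Using a batch instead of the full dataset is the mini-batch shortcut."""
        grads = np.mean([per_example_grad(params, x, y) for x, y in batch], axis=0)
        return params - lr * grads

    # toy example: fit a scalar w so that w * x ~ y, with loss (w*x - y)^2
    per_example_grad = lambda w, x, y: 2 * (w * x - y) * x
    data = [(x, 3.0 * x) for x in np.linspace(-1, 1, 100)]   # the true w is 3
    w = np.array(0.0)
    for epoch in range(50):
        w = minibatch_step(w, per_example_grad, data[:10], lr=0.1)
    print(w)  # approaches 3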
I'm going to skip the part on network building blocks. You are allowed to pick different loss functions, and different activation functions other than the ReLU, because at the end of the day all we care about is that the loss function quantifies how well or badly we are doing, and there are many ways to do that, and that something nonlinear keeps all of our weight matrices from collapsing into a single matrix multiplication. You can add a ReLU, or all kinds of other things, to make sure the function stays complex and expressive.

I apologize for being over time, but I want to get to this: why does this work at all? Empirically, we have found that the bigger you make your network, past a certain point you get better accuracy instead of overfitting. Normally you would expect to start overfitting, to simply memorize your training data and end up with really bad loss on data you haven't seen before. But it has been observed that once networks get big enough, you keep seeing low loss on new data even though the model has the capacity to memorize its training data if it wanted to. It is a strange phenomenon that we don't fully understand; "big networks go brrr" is what has been found empirically.

So, to wrap up: you now know what a neural network looks like. All of these crazy things you have been hearing about boil down to matrix multiplication with some nonlinearity like a ReLU, and you optimize it by repeatedly stepping in the direction that we think will maximally decrease the loss across the dataset. Your job as an ML engineer is to pick the number and type of layers (we'll talk about different types later), how big they are, and whether you use something like a ReLU or a different activation, as long as it prevents the matrices from collapsing into a trivial linear model. A lot of it comes down to trial and error: you tune how many layers, how large, and which activations work best for your dataset.

I hope you now have a grasp of how the forward pass of a neural network looks and how we optimize it. Sorry I ran out of time and had to rush the end; feel free to come up and ask questions, otherwise you can head home. We'll try to get the quiz out as soon as possible. It won't be insane, I promise, and it will be on Gradescope once I find time to upload it.
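Since the whole forward pass really is just matrix multiplies with a nonlinearity in between, here is a tiny illustrative sketch of a two-layer network; the layer sizes are arbitrary:

    import numpy as np

    def relu(x):
        return np.maximum(0, x)          # the nonlinearity between layers

    def forward(x, W1, b1, W2, b2):
        """A tiny two-layer network: matrix multiply, ReLU, matrix multiply.
        Without the ReLU the two matrices would collapse into one linear map."""
        h = relu(W1 @ x + b1)
        return W2 @ h + b2

    x = np.random.randn(4)
    W1, b1 = np.random.randn(8, 4), np.zeros(8)
    W2, b2 = np.random.randn(3, 8), np.zeros(3)
    print(forward(x, W1, b1, W2, b2))    # three output scores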
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_15_Vision_Transformers.txt
Okay, so this lecture is going to be about Vision Transformers. Someone asked about Transformers, so let me first clear up some of the specifics with a slightly more concrete example.

At the highest level, you have some set of tokens, say the sentence "the cat". With the autoregressive objective, for example, the goal is: given "the cat", figure out what the next token should be. What happens is you encode each of these words into embedding vectors, feed them into the big Transformer, and the Transformer outputs a set of embedding vectors. Usually you also have a CLS token, and based on the output representation of the CLS token you make the classification of which token comes next after those first two tokens.

Here is how the attention mechanism works. The Transformer is made of some number of Transformer blocks, and each block consists of first the attention mechanism and then an MLP, the multi-layer perceptron. Each input embedding passes through these, and there is a residual connection after each sublayer: the input to the attention gets added back onto its output, and likewise for the MLP.

The attention mechanism itself converts each embedding to K, Q, and V: key, query, value. It works intuitively like a database lookup. You take the query of each embedding and compare it against the keys of all the other embeddings (including its own key), trying to find the most relevant keys among all the tokens, and the corresponding value becomes the new representation. Of course, this is soft attention, so instead of a hard lookup where you copy one value, you take dot products against all the other embeddings, turn those into a probability distribution over which embeddings are most relevant, and take a weighted average of their values, the expected value under that distribution, and that becomes the resulting representation. The multi-layer perceptron then applies the same linear layers to each of the resulting embedding vectors, which adds a bit more representational power to the Transformer.
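As a rough sketch of that soft-attention computation (single head, no masking, made-up dimensions), assuming we already have the learned projection matrices:

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Soft attention: every token's query is compared against every token's key,
        the scores become a probability distribution, and the output is the
        weighted average of the value vectors."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv            # (seq_len, d) each
        scores = Q @ K.T / np.sqrt(K.shape[-1])     # (seq_len, seq_len) similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ V                          # new representation per token

    X = np.random.randn(3, 16)      # 3 token embeddings, dimension 16
    Wq, Wk, Wv = (np.random.randn(16, 16) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)   # (3, 16)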
For ViT specifically, we essentially want to take this architecture from text, where it was originally used and very successful, and apply it to images. The motivation is hypothetically very intuitive: it works well in one domain, so we want to use it in another. Transformers also have some really nice properties that would work well for computer vision. One is scalability: as we talked about last time, they are computationally efficient and work really well with GPUs and TPUs. Another is global receptive fields, which is something we'll come back to a bit later.

The other thing to recall is how we pre-process text. We start with a string and tokenize it; one way is to take each word, so "the" becomes a token and "cat" becomes a token. We have a vocabulary, a finite set of possible tokens, that maps every word to an index: "the" might be 0, "cat" might be 1, and so on. After tokenization, "the" becomes 0 and "cat" becomes 1. Then, to convert these to embedding vectors, we look them up in an embedding table, where index 0 corresponds to some dense vector and index 1 corresponds to another dense vector. That is how we get from tokens to the input embedding representation.
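A minimal sketch of that word-to-index-to-embedding pipeline, with a made-up three-word vocabulary and an 8-dimensional embedding table:

    import numpy as np

    vocab = {"the": 0, "cat": 1, "sat": 2}          # finite vocabulary: word -> index
    emb_table = np.random.randn(len(vocab), 8)      # one dense vector per vocabulary entry

    def tokenize_and_embed(sentence):
        ids = [vocab[w] for w in sentence.split()]  # tokenize: words -> integer ids
        return emb_table[ids]                       # look each id up in the embedding table

    print(tokenize_and_embed("the cat").shape)      # (2, 8): two tokens, 8-dim embeddings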
So how do we transfer this architecture to vision, to images? One way to start is to think about the simplest, dumbest way to do it. That would be to flatten the image: take a 16-by-16 image and flatten it into a 256-length vector where each entry is a pixel value. The issue is that we can't just feed that into the Transformer the way we did with words, because color values are continuous: we can't look them up in a table, since the table would have to be infinitely long.

One way around this is to pretend the values are discrete, and in practice they are: images stored on a computer don't have infinite precision, they are quantized with a limited number of bits per color value. So in that sense you do have a discrete set of tokens. Each pixel is a token, and each pixel has 256 possible values per color channel, which means a vocabulary of 256 times 256 times 256, about 16 million possible values per token. That is too much; we can't store an embedding table with 16 million entries.

So one fix is to just use fewer colors: instead of a 24-bit representation, use a 9-bit representation with eight possible values per color channel (eight possible R values, eight G, eight B), which gives a vocabulary size of 512. That is a lot more reasonable and definitely doable. Each quantized color value gets looked up in the embedding table, so instead of mapping each word to an embedding vector you map each possible color to one, and you feed the whole sequence in.

Another problem is time complexity. Recall that Transformers are O(N^2) with respect to input length, and for an image the input length itself grows quadratically with the side length. Even a small 256-by-256 image gives an input length of about 65,000 tokens, which is far too much. For perspective, the maximum input length of BERT, another Transformer-based language model, is only 512, and the longest I've seen is around 2,000. The workaround is simply to use smaller images, say 64 by 64 instead of 256 by 256.

The model is then trained pretty much like a language model, with the self-supervised objective of predicting the t-th token given the first t-1 tokens; here, predicting the t-th pixel value given the first t-1 pixel values. Hopefully that builds good image representations.

The good thing is that you do get really good image representations. One way to measure this is semi-supervised classification: classification with only a limited set of labeled examples. Remember, the model is pre-trained on unlabeled images, random images you can scrape off the internet. Given the pre-trained model, we pass all of our images through it, take the output representations (you can pool them, for instance), and feed those as input features to a linear classifier. We train only that linear classifier on the small labeled set; we can't train the whole Transformer with so few labels, it's far too large. For that tiny, weak linear classifier to do well, the Transformer's output representations have to be good enough that different objects are already close to linearly separable: the embeddings already have to encode what is inside the image, because the linear classifier has nothing else to work with. And the result is really good: something like 96 percent accuracy, competitive results, using a very limited set of labels and no labels at all during pre-training.
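A rough sketch of that linear-probe evaluation. The pretrained_features function here is just a placeholder standing in for the frozen, pre-trained Transformer, and the data is random, so this only shows the mechanics:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # stand-in for the frozen, pre-trained model: maps images to fixed-length
    # output embeddings and is never updated during the probe
    def pretrained_features(images):
        return images.reshape(len(images), -1)      # placeholder: just flatten pixels

    # a small labeled set; the heavy lifting must already be done by the features
    X_small = np.random.rand(200, 8, 8)
    y_small = np.random.randint(0, 10, size=200)

    probe = LogisticRegression(max_iter=1000)
    probe.fit(pretrained_features(X_small), y_small)   # train only the linear classifier
    print(probe.score(pretrained_features(X_small), y_small))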
Another way to look at this is image generation. Recall that the model is trained autoregressively: given the first t-1 pixels, predict the t-th pixel. So we can just run that prediction over and over on a partially masked input. Given the top of an image of an elephant with butterfly ears, it generates the rest of the elephant, with the understanding that this is a weird elephant whose ears are butterflies, an image that probably doesn't appear much in the training set, by inferring from context what is inside the image. It does the same kind of completion on the droplet examples, which look like a sort of pseudo physics simulation, and on the cat drawings.

The bad news is that it takes a ton of compute: they trained a 6.8-billion-parameter Transformer for about 2,500 V100-days (a V100 is a type of GPU), and the model still only works on 64-by-64 images. So what is the point? Mostly, it is a proof of concept: this paradigm of Transformers plus massive self-supervised pre-training carries over to a completely new domain. We used it with text and it worked really well, and now we see it works for images too, but we need a faster way to do it, because this isn't going to work for downstream applications as-is.

A question came up about the probe: right, you don't actually train the Transformer at all, only the linear layer on top. I think they use logistic regression, so just the normal cross-entropy objective to train the linear model, except that instead of taking the images as input it takes the model's output representations, and yes, you only have a very small set of labels. There are ways to make Transformers more efficient architecture-wise, but recall that a major appeal of Transformers is that they scale really well relative to the amount of compute you have. Because the architecture is so simple, essentially just huge matrix multiplies, they work really well with GPUs and TPUs, and we can scale them with the hardware we have. That justifies keeping the architecture simple even though it is O(N^2) in the sequence length.

A maybe more practical solution comes from the paper "An Image is Worth 16x16 Words". Rather than quantizing pixels, we downscale the image by first splitting it into patches and then projecting the patches into our input embeddings directly. Remember that with Image GPT we pretended pixels were tokens and looked them up in a table; now we just take the image, split it into patches, and for each patch do something like the K/Q/V projection: pass it through a linear layer.
We take that linear layer's output and use it as the input embedding. So we take the image directly, process it a little bit, and use that as the representation of that part of the image. Our tokens are essentially the patches, and instead of looking patches up in a dictionary, a tiny learned model determines what the input representations will be. We also do the standard things of adding position embeddings and a CLS token on top of the Transformer.

Some miscellaneous notes on this architecture. Because pre-training takes so much time and data, we tend to pre-train at a lower resolution first and then fine-tune at high resolution. We can also get the patch representations from a convolutional network instead of a plain linear projection, which is more of a hybrid approach. The benchmarks they report are your standard ImageNet classification.

The way they train it is to first pre-train on a massive dataset. The thing to note is that this pre-training is not self-supervised: Google has this huge, closed-source image classification dataset, on the order of billions of images, where the labels are really noisy, since some are automatically generated and some are hand-labeled. So this is a bit different from before in that we pre-train on a supervised dataset; then, with the pre-trained model, we fine-tune on the actual target dataset. This gets really good results, state of the art or nearly so on ImageNet, but the main point is that relative to the previous state of the art, a convolutional neural network, it is way more efficient: it uses roughly one-fifth of the compute. So the claim that this is a more efficient architecture holds up.

A quick aside on the term "inductive biases", which we'll be using a lot. They are essentially the assumptions you make by choosing a particular model. If you choose, for example, a linear model, you are assuming your data is linear; if the data is not linear, the linear model is not going to work well. The claim for why Vision Transformers might be better in the most general sense of the word, and one reason a lot of people are hyped about them, is that they are more general: they have fewer inductive biases. So let's compare and contrast the inductive biases of CNNs versus Vision Transformers.
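Before that comparison, here is a rough numpy sketch pulling together the ViT input pipeline just described: patch splitting, the linear projection, a CLS token, and position embeddings. The specific sizes (a 224-pixel image, 16-pixel patches, 768-dimensional embeddings) are common choices rather than something taken from the slides, and the random matrices stand in for learned parameters:

    import numpy as np

    def patchify(img, p=16):
        """Split an (H, W, C) image into non-overlapping p x p patches, each flattened."""
        H, W, C = img.shape
        patches = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
        return patches.reshape(-1, p * p * C)        # (num_patches, p*p*C)

    img = np.random.rand(224, 224, 3)
    patches = patchify(img)                          # (196, 768) for 16x16 patches
    W_proj = np.random.randn(patches.shape[1], 768)  # the learned linear projection
    tokens = patches @ W_proj                        # one embedding per patch
    cls = np.zeros((1, 768))                         # learnable CLS token (zeros here)
    pos = np.random.randn(tokens.shape[0] + 1, 768)  # learned position embeddings
    sequence = np.concatenate([cls, tokens]) + pos   # input sequence to the Transformer
    print(sequence.shape)                            # (197, 768)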
For CNNs, the two big inductive biases are locality and two-dimensional neighborhood structure. Locality means the model is biased toward image features that are close together. Recall the sliding-window structure of a CNN: within a single convolutional layer, the only way two pixels can interact is if they fall inside the same limited kernel window. You have your image, and the convolution is a small set of weights slid across it, taking a dot product at each location. Two pixels that sit inside that window get to interact; two pixels in opposite corners of the image do not, and their output representations stay in opposite corners, because everything is limited by this tiny kernel, which is going to be something like 3-by-3 or 5-by-5. And obviously, if you are sliding 2D kernels over 2D grids, you are assuming your input has that two-dimensional neighborhood structure.

The other big one is translational equivariance. We apply the same weight matrix to every part of the image, so a feature over here and the same feature over there get treated the same way by the same weights. That is another big assumption being made.

A question came up about what a Vision Transformer does instead. In a ViT (where everything gets flattened out), everything attends to everything: each token is projected to its own key, query, and value, and a given token's query attends to this pixel's key, that pixel's key, and the key of a pixel all the way on the other side of the image, regardless of how far apart they are. Every token always attends to every other token. And yes, the trade-off for that is essentially computational.

Another question was about weight sharing in CNNs: you do have different filters for different features, but each filter is applied in the same way at every position in the image, and translational equivariance appears precisely because you share the same weights across positions. In a Vision Transformer, by contrast, each position is essentially a unique token with its own key, query, and value representations, which depend on that token and on its positional embedding.
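A tiny sketch of that sliding-window locality: one kernel, the same weights applied at every position, and each output value only ever seeing the pixels inside its window. Purely illustrative:

    import numpy as np

    def conv2d_single(img, kernel):
        """Slide one small kernel over the image; each output value only sees the
        pixels inside its k x k window, so distant pixels never interact in one layer."""
        k = kernel.shape[0]
        H, W = img.shape
        out = np.zeros((H - k + 1, W - k + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + k, j:j + k] * kernel)   # same weights everywhere
        return out

    img = np.random.rand(8, 8)
    kernel = np.random.randn(3, 3)
    print(conv2d_single(img, kernel).shape)   # (6, 6)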
So what inductive biases does a ViT have? Only a couple, and they sit at the input. One is the image patches themselves: if you split your image into patches, you are assuming the input is a 2D image. The other is the positional embeddings, specifically position-embedding interpolation during fine-tuning. Don't worry too much about this, but remember that with this architecture you want to pre-train at a smaller resolution and fine-tune at a higher one. The positional embeddings are trained as a fixed set of embedding vectors, so during pre-training you have a limited set of them; when you fine-tune at higher resolution you need more positional embeddings, and the way you transfer the old ones over to the new, larger set is to interpolate between them. The point is just that the only inductive biases occur at the input, when you pre-process, which is far more limited than a CNN.

What this means in practice is that ViTs perform a lot worse when there is not much data and better when there is a lot of data. Look at this graph: the x-axis is the number of pre-training examples and the y-axis is accuracy; the gray curves are CNNs, just ResNets, and the colored ones are ViTs. In the beginning, when there is not much data, the ResNets perform better; at the end, with a lot of data, the ResNets perform worse.

So why is this trade-off happening, and how does it connect to inductive biases? The intuition is that if we don't inherently bias the model toward any particular kind of data, if we don't bias it toward images, then it has to learn those representations from the data. With CNNs, a lot of those biases are hand-engineered. If you assume, a bit facetiously, that humans are dumb and machine learning is smart, then letting the machine learn its own way of processing images might beat whatever humans engineered by hand, given enough data.

We can see some evidence for this in the kinds of representations the model learns. Take the position embeddings: they are just learned, initialized randomly, yet even though everything gets flattened before going into the model, they still end up encoding position on the 2D plane.
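For the fine-tuning detail above, here is roughly what position-embedding interpolation can look like in PyTorch, treating the learned embeddings as a small 2D grid and resizing it. The grid sizes are just an example:

    import torch
    import torch.nn.functional as F

    def resize_pos_embed(pos_embed, old_grid, new_grid):
        """Interpolate a learned grid of position embeddings so a model pre-trained
        at a low resolution can be fine-tuned at a higher one (more patches)."""
        d = pos_embed.shape[-1]
        grid = pos_embed.reshape(1, old_grid, old_grid, d).permute(0, 3, 1, 2)   # to NCHW
        grid = F.interpolate(grid, size=(new_grid, new_grid), mode="bicubic",
                             align_corners=False)
        return grid.permute(0, 2, 3, 1).reshape(new_grid * new_grid, d)

    pos = torch.randn(14 * 14, 768)              # pre-trained with a 14x14 patch grid
    print(resize_pos_embed(pos, 14, 24).shape)   # torch.Size([576, 768]) for a 24x24 grid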
Another piece of evidence comes from the attention maps. One thing Transformers are reasonably good at is that attention lets us see what the model is looking at, which makes it slightly more interpretable in some ways. If we look at the attention over an input image, it tends to focus on the object in the image and ignore the background. And if you look at some of the learned representations, you can see them encoding things like lines and textures, which you might recall are similar to the kinds of features learned by CNNs, which is pretty cool. So the takeaway is that Transformers are able to learn these biases, learn how to represent images well, if we give them a lot of data, and they can learn these better representations precisely because we don't bias them from the beginning; a CNN, which is forced to process images in a particular way from the start, can't learn past those built-in choices in the same way.

I mentioned the other advantage earlier: Vision Transformers also have a global receptive field. We touched on this already. If you have a feature here and a feature way over there, they cannot interact within one or two convolutional layers, because the kernel size limits processing to a tiny area; you have to wait until very deep in the network for those two locations to interact. With a Vision Transformer, everything can attend to everything from the beginning: all tokens explicitly interact with every other token, so if this token thinks that far-away token is important, it can just use that representation from the first layer instead of waiting until the end.

We can see evidence that this actually happens in practice with this graph, where the y-axis is the mean attention distance. That is essentially the distance between a query patch and the keys it ends up attending to: the attention distance between adjacent patches is one, two patches over is two, opposite corners is roughly the maximum, and you average these distances weighted by how much attention is actually placed on each one. If, in practice, a token ends up mostly attending to far-away tokens, its mean attention distance is high; if it only cares about its neighbors, it is low. What we see is that even in the early layers, some heads attend to tokens that are far away, so in practice the Transformer really does take advantage of its global receptive field.
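A small sketch of how that mean attention distance could be computed from a single attention map, assuming a square grid of patches; the random attention matrix just stands in for a real one:

    import numpy as np

    def mean_attention_distance(attn, grid):
        """Average spatial distance between each query patch and the patches it attends
        to, weighted by the attention probabilities (attn is num_patches x num_patches)."""
        coords = np.array([(i, j) for i in range(grid) for j in range(grid)])
        dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        return (attn * dists).sum(axis=-1).mean()   # weight each distance by attention

    n = 14 * 14
    attn = np.random.rand(n, n)
    attn = attn / attn.sum(axis=-1, keepdims=True)  # each row is a probability distribution
    print(mean_attention_distance(attn, 14))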
A couple of cons for this architecture. The patches are not very fine-grained: for something like segmentation, where you want precise object boundaries, you have no representation finer than a patch because the image was split into patches from the beginning, so the output representation is also coarse, and that can cause problems. The same goes for fine-grained image classification and anything else that needs a lot of detail, because the patches glob pixels together and you can lose detail on that front.

So the bigger takeaway from these two models is this idea of generality. The direction is to move away from models with a lot of inductive biases and toward more general models: any domain-specific biases or domain-specific features should be learned from the data instead of being hand-engineered. Rather than models specifically made for images or specifically made for text, it may be better to have models that can be used on anything and only become specialized once you train them on data. I'll hedge a bit and say it's hard to have hard evidence about which research direction is best, but there are some intuitions for why we might want this.

One is the historical arc of AI. Think of what AI was in the 1960s: expert systems, essentially huge dialogue trees or nested if-statements where you hand-engineer every possibility and every user input, so that if the user says hello, you output hello. Compare that with deep learning more recently: now that we have access to so much compute and data, we can take advantage of learned algorithms that extract their own features from data, and in the past those have worked better. So the natural progression toward more and more general models that learn from data, rather than being hand-engineered, is plausibly a good thing.

It also lets us combine domains easily. If the architecture and the training scheme are general enough, then for something like image captioning, which needs to process both image and text information, we can pre-train the same model on both and save ourselves the step of engineering a dedicated architecture. An example is Gato, a model released by DeepMind not long ago, where they pre-trained a Transformer on text, images, and Atari games, all with one model, which is pretty cool.

The other takeaway is the shift toward more self-supervised and unsupervised learning, learning without labeled data. Recall that Transformers need a lot of data to work, but they work really well when they have it, and their GPU and TPU compatibility lets them scale easily with our hardware. The catch is that we have enormous amounts of random data we can scrape off the internet, but not much high-quality labeled data. Self-supervised and unsupervised objectives that don't use labels let us take advantage of all that raw data online and save us the trouble of annotating everything.

So: Transformers give us generality, with biases learned from data instead of handcrafted; we let everything attend to everything, and given enough data the model figures out what to attend to, hopefully better than we could hand-engineer. I'll go pretty quickly through these last few slides; they are just some specific examples of how ViTs are used in practice.
The first is a paper that moves toward what I just described: "Emerging Properties in Self-Supervised Vision Transformers". It uses an architecture similar to "An Image is Worth 16x16 Words", so it does the patch stuff, together with a self-supervised objective, similar in spirit to Image GPT in that it needs no labels. So it can take advantage of all the random unlabeled image data out there, and it is efficient because it uses patches.

What they found is that it produces really good image representations. They evaluate this in a similar way to before, except that instead of a linear layer on top they use k-nearest-neighbors. Don't worry too much about that; the point is that the representations are good enough that a really weak classifier can use them to do image classification at high accuracy. Another advantage is that the representations retain scene-layout and object-boundary information, so they can be used for things like video segmentation. You can see this in the figures, which show the self-attention of the CLS token in the last layer, essentially what the CLS token finds interesting with respect to the rest of the image. Even though the model was never trained on any labels, never told it needs to classify birds or boats, it still figures out the interesting parts of the image: it picks out the bird or the boat and ignores the background, without any explicit supervision, just from training on random images with this architecture. That is pretty cool.

The second thing is that ViTs are used a lot these days. This is the ImageNet classification leaderboard, a screenshot from sometime during the summer, and the top several models are all Vision Transformers. A couple of things to notice: the parameter counts in the middle column are all one billion plus, which shows we can scale Transformers really easily; and on the right side you have the JFT-3B dataset, a huge pre-training set of around three billion images. I don't think it's unsupervised; it has the weak, noisy labels we talked about earlier, where individual labels are kind of bad, but with that many images there is still enough signal that the model can take advantage of the dataset. So: a lot of parameters, a lot of data, and really strong performance.

How do you actually get there, from the first architectures we talked about to the top of this leaderboard? One example is simply to make the models larger and train them longer.
That scaling study takes the same architecture as "An Image is Worth 16x16 Words", the patch stuff, and runs a large set of experiments with different training times, different architecture sizes, and different amounts of training data, then graphs everything and tries to extract insights into how these models scale and whether they keep scaling after a while. What they found is that representation quality scales with compute, model size, and dataset size together: if you scale all of those at the same time, you keep doing better, which is maybe an obvious-sounding result but a useful one. Another finding is that larger models are more sample-efficient: they need fewer training samples to reach the same performance, which is something you also see with language models. They add some engineering improvements to transfer and fine-tuning accuracy as well, and their two-billion-parameter model ended up getting state of the art on ImageNet (I don't think it holds that anymore).

Some graphs from the paper say the same thing: the color indicates model size (the bluer, the smaller the model), the size of each marker indicates the dataset size, and the x-axis is training compute. In the top-left regime, if you have a really small model, it doesn't matter how much data you give it; at a certain point performance levels off. That is the model-size bottleneck. In the bottom-right regime, if you have a large enough model but a small dataset, performance also levels off. Scale the three factors, data, model size, and compute, at the same time, and you keep improving, which is pretty cool.

The last topic is Vision Transformers combined with CNNs, a sort of hybrid architecture between the two. Recall that the lack of inductive biases is bad when there is not enough data, and it's common that you want to train a model without access to a three-billion-image dataset. The solution is to add some inductive biases back into the Vision Transformer. One thing Transformers are missing is translational equivariance, which I framed as a limitation earlier, but it turns out it's actually a beneficial thing when you don't have the data and compute for the Transformer to learn good feature representations by itself. So how do we add it back? One way is to add an extra weight value when calculating the attention weights, a weight that depends only on relative position: each token is still projected to a key, a query, and a value, and the attention between a query token and a key token is just their dot product, plus a learned weight value that depends only on the distance between the two positions.
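A minimal one-dimensional sketch of that idea: ordinary dot-product attention plus a learned bias looked up by relative distance. The bias values here are hand-set for illustration; in the real architectures they are learned parameters:

    import numpy as np

    def attention_with_relative_bias(Q, K, V, positions, bias_table):
        """Dot-product attention plus a learned bias that depends only on how far
        apart two tokens are, not on which tokens they happen to be."""
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        rel = np.abs(positions[:, None] - positions[None, :])    # pairwise distances
        scores = scores + bias_table[rel]                        # e.g. big bias for distance 0
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)
        return weights @ V

    n, d = 6, 16
    Q, K, V = (np.random.randn(n, d) for _ in range(3))
    positions = np.arange(n)                  # 1-D positions for simplicity
    bias_table = np.linspace(1.0, -1.0, n)    # learned in practice; closer pairs weighted up
    print(attention_with_relative_bias(Q, K, V, positions, bias_table).shape)   # (6, 16)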
Here is the intuition for why that helps. Say you want to encode the idea that tokens that are closer together should be weighted more, because you think nearby tokens are naturally more relevant. If a vanilla Transformer had to learn that, it would have to bake it into the key, query, and value representations of every single token, and probably into the positional embeddings as well, and it would have to do that for every token separately. With a learned set of relative-position weights, you just learn one small table: a large positive weight when the distance between two tokens is zero, and negative weights when the distance is large. You often find with images that a lot of features do not depend on their absolute position, only on their relative position, and this extra learned weight vector lets you encode that really efficiently instead of having to learn it for all of your tokens.

The other thing you need to change is the quadratic time with respect to spatial size. Remember that if your input sequence is very long, the Transformer being O(N^2) is sometimes not great. One fix is to pool, for example max-pooling, after the output of each block so the sequence keeps shrinking. Another thing you can do is mix convolutional blocks in with the Transformer blocks. That probably makes more sense looking at the full architecture: for the first few stages you use convolutional blocks, because Vision Transformers don't handle long sequences as well, so the convolutional layers handle the long, high-resolution part; once the image has been downsampled a few times, you use the Vision Transformer blocks, and then it can be fast. Notice also the dimension going into each of these blocks: the input is downsampled, pooled, before each block, which makes it a bit faster.

To conclude: Vision Transformers, and their associated self-supervised and unsupervised training schemes, are a next step toward more and more general models, where general means fewer inductive biases and less hand-labeled, hand-engineered structure, transitioning from hand-engineered features to representations the model learns itself. They do require a lot of data, as Transformers tend to, but ViTs are really scalable because Transformers work so well with our compute hardware, so they perform really well and we can scale them really easily at training time. I think that's it. Does anyone have any questions?
Question: is that big pre-training dataset supervised? I think it's supervised, yeah, and the idea is that in your own setting you usually don't have a lot of labeled data, which tends to be the common case for us. I'll double-check that, and I'm happy to answer any other questions afterwards and on the forum.
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_13_Intro_to_Sequence_Modeling.txt
Okay, so today we're going to be talking about sequence models. A little bit of motivation for why we care about them: a lot of things are sequences. You have things like the weather; the second image on the left is a spectrogram for audio, where you can see the frequencies at every slice of time; and even images, if you think about them pixel by pixel, unrolled one after the other, are technically a sequence. More recently there have been a lot of architectures that treat images in much the same way people treat sequences, so this is of real interest to us and will motivate some of the lectures that come after this one. For the next couple of weeks we'll be talking about image processing and image recognition from more of a sequence-modeling perspective, which is why we're covering sequence processing today even though this is a computer vision class.

The plan: we'll talk about how we represent sequence data, what we want out of a good sequence model, a simple first solution, the recurrent neural network, and then a little bit at the end about embeddings and positional encodings.

So, sequence data. The most basic form is time-series data: for every time step you have some data recorded. Weather is an example; you can record temperature over time and cloud coverage over time, and each element in the sequence could even be a different image. Time series are definitely the most common case.

Audio is another one. (Later on we have someone who's really interested in audio generation, and we'll try to loop that in with the lecture on deepfakes, so we'll talk about audio generation a little more later.) Sound is just changes in air pressure at different steps in time. The problem with raw audio is that there are far too many samples per second; audio is sampled thousands of times a second, and that's not easy to process directly. A more practical representation is the spectrogram, where you take larger slices of time and, for each slice, record all the frequencies that are present. The frequency decomposition itself, the Fourier transform, is not something we'll dig into here. The takeaway for audio is that to represent and process it practically, we need to compress that huge number of samples down into a more reasonable number of time steps without losing a ton of information. How we represent data really does matter, as we've talked about before, and the same is very much true across time-series data generally. The last and most common form of sequence data is text.
Text is pretty much everywhere, mostly because it's so easy to scrape the web and get access to huge amounts of text to train models on; internet-scale language models are trained on basically the open internet. Every single word is a token, one of the elements in your sequence, and in the simplest form you can represent every possible word as a one-hot vector, which ensures that every word is the same distance from every other word.

We talked earlier about why we don't have a digit-classifying network output a single scalar like "5": if the model isn't sure whether the answer is a 5 or a 7, should it output a 6, halfway in between? No, that's less expressive. For the same reason, we don't want to represent a word by its index in the dictionary. If "Rome" is the 5,000th word in the dictionary and the 5,001st word happens to be "rum", there's no meaning attached to the fact that they're right next to each other. If we represent our words as one-hot vectors, the distance between any two vectors is always the same, which is why this is frequently the way we go about representing text data.

The big problem is space: if the dictionary has 100,000 words, we're hurting pretty badly, because these 100,000-dimensional one-hot vectors are almost entirely zeros. It's incredibly inefficient to have one of them for every single word in the dictionary. And yes, they're chunky. We'll talk a bit more about text data later; just as with audio, rather than keeping these naive representations, we'll try to figure out how to represent each word with something smaller and more meaningful than a 100,000-dimensional one-hot vector. Are there any other questions about how we're representing sequence data, or text data specifically?
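A tiny sketch of the one-hot representation with a toy four-word vocabulary:

    import numpy as np

    vocab = ["the", "quick", "brown", "fox"]            # toy dictionary

    def one_hot(word):
        """Every word gets its own axis, so all words are equally far apart,
        but the vectors are as long as the whole vocabulary and mostly zeros."""
        v = np.zeros(len(vocab))
        v[vocab.index(word)] = 1.0
        return v

    print(one_hot("brown"))          # [0. 0. 1. 0.]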
Now, what are the goals of sequence modeling? What are we trying to do that can't be captured by the plain vanilla deep neural networks we learned about back in the first week? First, arbitrary-length data. Sequences can be arbitrarily long: a sentence can be arbitrarily long, a book can be arbitrarily long, and if you have slices of weather data of different lengths, you'd still like to process all of them. With our standard deep network we fixed the size of the input, and an input of a different size would just throw errors; the model couldn't handle it. So we'd really like to handle data of arbitrary length.

Second, there are different output paradigms. Sometimes we have an entire sentence and want to output a single value, like sentiment: is it happy or sad? That's a long sequence in and a single value out. Or take translation: given a sentence in English, we want to output the French translation. That's a variable-length input sequence and a variable-length output sequence, and it gets more complicated still, because the English sentence might not have the same number of words as the French one. So we need to handle not just variable-length inputs but variable-length outputs, depending on what we're modeling. That's something our convnet could not do: it could never spit out a variable number of outputs; for any input it always produced exactly the same number of values.

Third, long-distance relationships. Take the sentences "the worst food I've ever had was from this restaurant" and "the best food I've ever had was from this restaurant." If, by the time you reach the end of the sentence, you're trying to decide whether the person felt positively or negatively about the restaurant, you need information from the very start of the sequence; you need to somehow connect ideas that sit in very different parts of the sentence. With CNNs we only cared about what was inside the little region of a filter; neighboring pixels were the concern. So how do we do goals one and two while still handling long-distance relationships across arbitrarily long sequences? Those are the three primary goals of sequence modeling that our standard deep network can't really meet.

So we're going to talk about recurrent neural networks, our first-pass attempt. They kind of work, though not particularly well; there are a lot of issues that stop them from performing well, but they do cover all three of the goals we just outlined.
So again, why doesn't the naive thing work? We can't handle variable-length sentences: if every word is an element fed into a fixed-size network, a sentence that's slightly longer or shorter than expected is something the network simply can't take. That's the problem we're facing, and the recurrent neural network is our first-pass attempt at solving it. An RNN has a single cell, A, whose weights are shared across every sequence element. Here x0 is the first token, x1 the second, and so on up to the very last token xT. At the first time step we feed x0 into cell A, and the cell spits out an output h0 along with some information to pass to the next step. Then we process the next sequence element together with what we learned at this step, and so on: each step takes the current token plus the information handed over from the previous step. All of these A's have learned weights, and there's some deep-learning machinery inside, but the important thing to note is that every cell has exactly the same shared weights; it's the same cell on the left processing every single element of the sequence.

Concretely, x0 is "the", then "quick", "brown", and so on all the way to "dog", each one a one-hot vector or whatever representation we choose. If we're doing sentiment classification, one thing we can do is pay attention only to the very last output: cell A processes "the" and spits out an output plus information for the next step, then processes "quick" using the information from the earlier part of the sequence, spits out something else plus information for the next step, and so on. Don't worry yet about what's happening inside A; the thing to understand is that it outputs h_t and a second vector that gets fed into the next cell.

A question that came up: what are the outputs? h0, h1, h2 are all vectors, and you choose how to interpret them. If you interpret them as probabilities over words in a dictionary, this could be a translation setup, where h0 is a vector of probabilities over French words. Or, for sentiment classification, you ignore everything except the very last output hT and interpret it as how happy or angry the input sentence was. Either way, they're all vector outputs, and we choose how to interpret them.
How we interpret the outputs, and whatever loss function we put on top of them, is what determines how the network learns to produce good outputs. We can also stack these cells. A and B each have independent weights, but all the A's share one set of weights and all the B's share another, so you can have multiple layers: h0 is no longer a final output so much as the input to the next layer. It's a slightly strange concept, but it's basically your hidden layers; we take the sequence of outputs h0, h1, h2, feed it into another RNN, and take the outputs at the very end. That's the basic form of an RNN.

To peer under the hood: we take in a sequence element (every word in the dictionary is a one-hot vector of the same length, so this works), apply a linear layer to it, apply another linear layer to the information carried over from the previous time step, and add the results together. Then we apply a tanh to make sure the outputs stay in a reasonable range, from -1 to 1. So it boils down to: multiply the current input by a matrix, multiply the previous step's output by a matrix, add them, squash with tanh, and spit the result out. The hope is that the weight matrices W_hh and W_xh learn values that produce meaningful outputs, especially once you start stacking layers; the more you stack, the more complex the functions you can learn.

Why the tanh? Your output at any step is the result of a whole chain of matrix multiplies, one per time step, all the way from x0 up to the current element. If after every step the values were allowed to get really large, like 100, or -50, or 3,000, these repeated multiplications would get out of hand, and by the time you reached the last cell the values would be gargantuan. By applying a tanh to the output of every single cell, we keep the values between -1 and 1, so the repeated matrix multiplications (in the forward pass and in backprop) don't blow up too badly.
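Here is a minimal sketch of that update rule in PyTorch. This is my own illustration, not the lecture's code; the names W_xh and W_hh and the sizes are placeholders. It implements exactly the "multiply, add, tanh" step described above.

```python
import torch
import torch.nn as nn

class SimpleRNNCell(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.W_xh = nn.Linear(input_dim, hidden_dim)   # acts on the current token
        self.W_hh = nn.Linear(hidden_dim, hidden_dim)  # acts on the previous hidden state

    def forward(self, x_t, h_prev):
        # h_t = tanh(W_xh x_t + W_hh h_{t-1}); tanh keeps values in [-1, 1]
        return torch.tanh(self.W_xh(x_t) + self.W_hh(h_prev))

# The same cell (same weights) is applied at every time step, so any
# sequence length works:
cell = SimpleRNNCell(input_dim=128, hidden_dim=64)
h = torch.zeros(1, 64)                 # initial hidden state
for x_t in torch.randn(10, 1, 128):    # a length-10 toy sequence
    h = cell(x_t, h)                   # h carries information forward
print(h.shape)                         # torch.Size([1, 64])
```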
So, did we achieve our goals with this very basic attempt at a sequence model? We can handle arbitrary-length data: since every A has exactly the same shared weights, any input length works, so in that sense we've succeeded. We can also handle variable-length outputs, simply by choosing which of the model's outputs we pay attention to, whether that's only the very last one or all of the h's, so in that sense we've succeeded too. And in theory we can handle long-distance relationships: after processing the first token x0 we output some information about what happened there, and as we go deeper we keep passing information along, so if the network uses that information wisely and doesn't distort it over time, information from early in the sequence can, in theory, reach the end. This is our naive attempt at a sequence model. To be fair, it's not something that's really used much anymore, and we won't use it too much; the hope is just that it gets you thinking about these three challenges in sequence modeling and shows you one possible naive solution that does work. It's not mission-critical that you absorb the math. The idea to take away is that you process the first token, pass some information to the next step, process the second token using the information from earlier in the sequence, and so on. That matters more than the actual architecture.

On to the next section: embeddings. We said that with a 100,000-word dictionary, every token x0, x1, and so on is a 100,000-dimensional one-hot vector, and that's kind of insanity. Passing a 100,000-dimensional vector through a linear layer is a lot of unnecessary compute, and we'd really like a better way. One thing you could try is representing each word in binary: instead of representing, say, the fifth word in your dictionary as 0 0 0 0 1 followed by more zeros, write the index 5 in binary as 1 0 1. Maybe that doesn't work super well, but it points at what we're really after.
We want something that still carries all the information and keeps every word distinct, but is more compact and more reasonable. There are lots of choices you could try: encode the word index in binary, or trinary, or something else wild. But how do you know which choice is best? Rather than trying to pick, let's just learn the representation; this is deep learning, so let's throw deep learning at it. For every word in the dictionary, instead of feeding in its one-hot vector, we learn a smaller vector that we feed in instead. If the word is "dog", we have a little codebook, we go to the entry for "dog" (a vector whose parameters are all learned), pull it out, and feed that in as x0. These are still parameters we learn; the point is that they let us compress the representation from length 100,000 down to something smaller, say length 128. Whenever we see the word "dog" in a sentence, what we feed in is the learned 128-dimensional vector corresponding to "dog".

So text embedding is basically finding a more convenient, learned replacement for our enormous one-hot vectors. It's fundamentally about reducing dimensionality: rather than passing the giant vector through a linear layer immediately, we do a lookup in our codebook to find the smaller version of that 100,000-dimensional vector. The hope is that these inputs end up intelligently compressed, so that maybe "cat" and "dog" sit closer together than "cat" and "grass". There are some interesting things along those lines you can go Google, or we can talk about them afterwards.

To answer the question about how the lookup knows what to return: we literally just index the codebook by the word, so if the word equals "dog", instead of building the one-hot vector we grab dog's current learned vector. Say we're embedding 100,000 different words into a two-dimensional space: for every word, cat, dog, Paris, Rome, whatever, we go to the codebook and look up whatever we currently have as its embedded representation, just a two-dimensional vector you could plot. The hope is that "cat" and "dog" end up close together, since they both mean roughly "small furry creature running around", while something like "grass" ends up off in a completely different direction.
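In PyTorch this learned codebook is exactly what an embedding layer is. Here is a small sketch of my own, with made-up sizes and a hypothetical dictionary index for "dog", of the lookup described above:

```python
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM = 100_000, 128

# A learned codebook: one 128-dimensional vector per word in the dictionary.
embedding = nn.Embedding(VOCAB_SIZE, EMBED_DIM)

# Suppose "dog" happens to be word 4211 in our (hypothetical) dictionary.
dog_idx = torch.tensor([4211])
dog_vec = embedding(dog_idx)        # shape (1, 128): this is what gets fed into the RNN
print(dog_vec.shape)

# The rows of this table are ordinary parameters, so gradient descent
# updates them just like any other weight in the network.
print(embedding.weight.requires_grad)   # True
```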
The hope is that if the task we're training on doesn't really require a distinction between cats and dogs (say the task is classifying how positive or negative a restaurant review is, where telling cats from dogs probably isn't important), then when we do backpropagation, the gradients to those embedded vectors may move "cat" and "dog" a little closer together than they were before, because there's no important distinction between the two for this task. By doing gradient descent on this codebook of embedded vectors, things that are semantically similar get moved closer and closer together and things that mean very different things move apart; maybe "grass" ends up very close to words like "weeds" and "flower". At the very beginning, when the codebook is randomly initialized, "flower" might be way off somewhere else, but as gradient descent proceeds, everything that's semantically similar drifts together. Does that make sense? Okay.

There's some interesting work on this, word2vec being the classic example, where people trained embeddings on a very large corpus of text, just computing an embedding for every single word, and found some striking structure. It wasn't a two-dimensional embedding, but say for the sake of argument it were: the vector from "man" to "woman" points in roughly the same direction as the vector from "king" to "queen", so you can do arithmetic like king - man + woman and land near queen. There's some cool stuff that happens with word embeddings; that was just a fun aside. Are there questions on any of that?
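If you want to poke at that analogy structure yourself, here is a hedged sketch using gensim's pretrained GloVe vectors; any pretrained word-embedding model would do, and the model name here is just one that gensim happens to ship, not something from the lecture.

```python
import gensim.downloader as api

# Downloads a small set of pretrained 100-dimensional GloVe word vectors.
vectors = api.load("glove-wiki-gigaword-100")

# "king" - "man" + "woman" should land near "queen".
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Semantically similar words sit close together in the learned space.
print(vectors.similarity("cat", "dog"))     # relatively high
print(vectors.similarity("cat", "grass"))   # lower
```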
The last section is positional encodings. How much time has elapsed, or what position we're at in the sequence, can matter. It's probably less important for text data, but in a lot of domains the time of day something happened makes a significant difference, and we'd like some way to feed in a time stamp. The naive approach is again a one-hot vector: one-hot encode the position (this is the zeroth element of the sequence, this is the first, the second, and so on) and concatenate that little time stamp onto every single element of the sequence, so that all the time steps look distinct. Something slightly better is binary. You could also learn positional encodings, but we're not going to do that here, because there's something that works a little better still, a continuous analog we'll get to in a moment; binary first.

If the maximum sequence length is, say, 16, you can append the binary representation of where you currently are in the sequence: at step zero you append 0 0 0 0, at step one you append 0 0 0 1, and so on, pass that into the network, and hope it can figure out from that how deep into the sequence it is. To the question of why you'd want this: for text, how deep into the sentence the word "dog" appears probably doesn't add much meaning, but if your weather data always starts at 10 a.m. and you're trying to predict temperature, time of day probably factors in pretty heavily, so it's worthwhile to stick on a time stamp that says, in effect, "this is the second hour since we started measuring this morning."

Looking a little deeper at what's happening with binary: the least-significant (red) column flips on every single step as you go 0, 1, 2, 3, 4; the second digit from the right also flips, but only every other step; the green column flips at half that frequency again, and the same goes for the yellow one. The problem with a binary encoding, the reason it's somewhat undesirable, shows up when the sequence gets longer than the encoding can count.
If you'd like to keep using a length-4 positional encoding but your max sequence length is 32, you're out of luck: with four digits you can only represent 16 unique time steps. That's the intuition for why we'd like something that isn't binary. The question is whether there's a continuous analog: given only a length-4 vector, can we somehow encode 32, or 64, or 128 different positions? Since a binary digit is literally just flipping back and forth, like a sine or cosine wave, what we append will end up being sines and cosines at different frequencies. We simply replace every vector entry, instead of a binary one or zero, with sin or cos at slowly doubling frequencies: omega 1 is half of omega 2, omega 2 is half of omega 3, and every entry you go down the vector, the frequency doubles. So we've found a continuous analog of the binary columns: instead of flipping between whole values of 1 and 0, the entries slowly decrease and then increase again, flipping back and forth continuously between 1 and -1.

It's a slightly weird concept, but concretely: at sequence element 1 we set t = 1 and evaluate sine and cosine at all of these different frequencies; at time step 2 we set t = 2; and so on. If you graph it all out, every row is the positional encoding for one position, and every column is the continuous version of one of our binary digits. Looking at any given column on the right, it does the same alternating thing as the binary columns up above, swinging between high values and low values as you move along the sequence. So we've found a continuous analog of a binary positional encoding to append onto all of our elements. If you have questions, feel free to go back through the slide deck on your own and think about why this expression, evaluated at different values of t, gives exactly these slowly alternating patterns. This is definitely one of the weirder concepts.
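Here is a small sketch of that construction. This is my own code: the "frequency doubles every pair of entries" scheme follows the lecture's binary analogy, the base frequency is an arbitrary choice, and note that the original Transformer paper uses a geometric progression with base 10000 rather than strict doubling.

```python
import numpy as np

def positional_encoding(t: int, dim: int = 8) -> np.ndarray:
    """Continuous 'binary-like' time stamp for sequence position t."""
    enc = np.zeros(dim)
    omega = 1.0 / 2 ** (dim // 2)   # slowest frequency (arbitrary base value)
    for k in range(0, dim, 2):
        enc[k] = np.sin(omega * t)      # each sin/cos pair oscillates ...
        enc[k + 1] = np.cos(omega * t)
        omega *= 2.0                    # ... frequency doubles every pair of entries
    return enc

# Rows = positions, columns = the continuous analog of binary digits.
table = np.stack([positional_encoding(t) for t in range(16)])
print(table.shape)         # (16, 8)
print(table[:4].round(2))  # early positions; values swing smoothly between -1 and 1
```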
Are there any other questions on positional encodings? If not, that's all for today. I'll stay up here, so y'all can come up and ask more questions and whatnot, but yeah, that will be it for today.
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_22_Multimodal_Learning.txt
All right, so today we're going to be talking about multimodal learning: what multimodal learning is and why we do it, then CLIP and that specific flavor of multimodal representation learning, and at the end a little bit about the downstream uses of CLIP, meaning what you can do with CLIP once it has been trained.

To start off, what is multimodal learning? First, what does "multimodal" mean? There are many different ways to represent the same kind of object. On the right side of the slide we have two different pieces of text, "the gray squirrel" and "the red squirrel", and three different images of a squirrel, all in a very similar pose with similar features. So we can have text representing our object, images of the object, video of the object, an audio clip of someone talking about it, or the object itself making noise: different ways of representing, with data, the same kind of thing. And even objects of the same data type have their own little permutations; these three squirrel images differ in style and in the way the object is drawn. Not all squirrels are the same. We'd like representations that are similar when the object is the same while still capturing the nuances. This harkens back to what we talked about earlier in the semester with word embeddings, where the objective was to map similar words to points in a latent space that are closer together when the meanings are similar.

Multimodal datasets can come in many forms. We've talked a lot about image datasets with pairs of images and class numbers; there are also text datasets with raw text and perhaps sentiment labels or translations, audio datasets, video datasets, and datasets with any combination or pairing of the types above, such as image-caption pairs or video-audio pairs. On the right-hand side you can see pairs of images with corresponding captions: the first is a picture of a gray squirrel with the caption "the gray squirrel", and below it a red squirrel with the caption "the red squirrel". Even if all you have is image and class-number pairs, you can create trivial captions like "an image of a" followed by the object name. With self-supervised pre-training, as we discussed previously, we never paid attention to labels or any extra data attached to our images even when they existed; we always just threw them away, because in the kinds of self-supervised learning we covered, the goal was simply to learn from unlabeled images alone.
But throwing away all that extra data often means throwing away information, so is it really optimal? That's the idea behind multimodal datasets: we keep everything, all the pairs of images and captions, all the pairs of video and audio, all of it.

Let's talk about why multimodal learning matters as a problem. Our data can come from lots of different places and datasets. We've talked about ImageNet, but plenty of other datasets exist: some contain images that are more cartoon-like, some are photorealistic, some are sketches, and different datasets might contain entirely different objects. ImageNet has a thousand classes, but other datasets contain objects that aren't in ImageNet at all. Furthermore, ImageNet is a dataset of objects like the first squirrel, where the object is front and center and there's rarely more than one object. We refer to these differences by saying each dataset has a distribution over possible images. If you have a photorealistic dataset like ImageNet, it's very improbable that it contains an image like the sketchy or cartoonish ones at the bottom of the slide: every possible image has some probability of appearing in the dataset, and for a photorealistic dataset the cartoonish ones have very low probability. Each individual dataset's distribution is frequently very small compared to what's possible; we'll have one dataset of photorealistic images and another specifically for cartoons.

Why is this important? Distribution shift becomes a really big problem. If you have a computer vision model trained on photorealistic data like ImageNet and you ask how well it identifies a hand-drawn sketch of a squirrel, it will very likely do poorly, and that's really bad, because the image is clearly recognizable as a squirrel to any human. We'd really like our computer vision models to generalize to all kinds of data. If we train on a distribution where images are more likely to be photorealistic and then test on a distribution where they're more likely to be cartoonish, the distribution of the data has shifted and we're going to do very poorly. This is the problem with regular learning that multimodal learning is, in essence, aiming at.
In the wild, across the internet and across all the different datasets that exist, there's incredible data diversity: images of different kinds and styles with all sorts of objects in them, and many different data formats, including images, video, text, and audio. There are also different relationships between bits of data: some datasets have pairs of images and object labels, others just raw images, others pairs of images and captions, others video paired with audio, and so on. And again, we can make captions trivially from object labels, create text from audio, or pull images from video. The goal of multimodal learning is to figure out how to use all of these different representations and the relationships between them, throwing away nothing, instead of what we used to do, which was strip datasets down to exactly the kind of data we were interested in (for self-supervised pre-training of vision models, that usually meant stripping everything down to unlabeled images). The hope is that by using all of this data we can train on more diverse data and train better models, precisely because we're using every relationship between every bit of data in our dataset. As with representation learning generally, the goal, at least in the context of this lecture, is to learn meaningful compressed representations of, hopefully, everything: images, text, audio, video.

To sum it all up: our data comes in many forms with many kinds of relationships, perhaps paired, like text corresponding to a video or a caption corresponding to an image. Even our most diverse datasets are a very small slice of all the images possible in the wild that you might encounter just perusing the internet, and models trained on one distribution frequently struggle on new distributions. So how do we learn compressed representations in a way that benefits from the relationships we have between different pieces of data? That is what multimodal learning is, and it's what the paper we'll focus on tackles.

That paper is CLIP, which stands for Contrastive Language-Image Pre-training. It's a relatively new paper, although with how fast ML moves, even a paper from 2021 is fairly old, and it looks specifically at relationships between images and text. Think of Instagram, where image-caption pairs are very common in the wild; and if we have image datasets with object labels, we can also create basic captions from them to get more image-caption pairs to work with. In the paper this was combined into a very, very large dataset of images and captions, far more data than you could get from any one dataset like ImageNet: an enormous number of image-caption pairs from all over the internet, with all different kinds, styles, and modalities of images.
The basic idea with CLIP is that if an image and a caption go together, the representations learned for the image and for the caption should be the same. We've talked before about image encoders and a little about text encoders; we're not going to worry about architectural details here. Just black-box a vision Transformer that takes in an image and a Transformer made for natural language that takes in a sentence, each outputting some fixed-size latent vector, say 128- or 512-dimensional. The idea is that the latent representation of the image and the latent representation of its caption should be the same, because the caption describes the image: they're semantically very similar, so ideally the learned representations are the same too, despite the fact that the images go into one model and the text goes into an entirely different one. These are not the same model; we have one model for text and one model for processing images. We then compare and contrast all the different pairs, asserting that matching image-caption pairs have similar representations and mismatched pairs have representations that are very far apart. Note that there are no model or architectural innovations at play: we just black-box a language Transformer and a vision Transformer and train them from scratch.

Here's an image of what CLIP is doing and the training procedure, straight from the paper. Step one is really the meat of it: we pre-train both the text encoder and the image encoder, both initialized from scratch. We take a batch of N image-caption pairs; on the left you can see a caption, "Pepper the Aussie pup", which corresponds to the first image in the stack, an Aussie pup presumably named Pepper. We pass all N images through the image encoder to get one embedding per image, I1, I2, I3, up to IN, and do the same for the captions to get T1, T2, T3, up to TN. Now remember that I1 and T1 came from a matching image-caption pair, so ideally the latent representation I1 is very similar to T1, and because the vectors are similar, the dot product between them should be very large.
Conversely, if we compare T1 with I2, a caption that does not describe that image, the encoded representations should be very dissimilar, hopefully close to orthogonal, so the dot product between them is very small or negative. The same goes for any other image-caption pair. So in the N-by-N matrix of dot products, the elements along the diagonal correspond to matching pairs: the top-left entry is the dot product of I1 and T1, the embedded image of the Aussie pup named Pepper and the embedded caption "Pepper the Aussie pup"; the second element on the diagonal is I2 dot T2; and so on. Everything off the diagonal corresponds to pairs that don't match, which we'd like to be very small.

If you look at any given column, we want the entry corresponding to the correct pairing to be large and everything else to be small. That's basically a classification task: the column is a vector of logits, we want the logit for the correct class to be large and the rest small, so we softmax and take a cross-entropy loss. That's literally what we do: for the first column the correct label is index 0, for the second column index 1, and so on, each with a cross-entropy loss. You can do the exact same thing for the rows, with the correct label at index 0 for the first row, index 1 for the second, and so on. The effect of this loss function is that the text encoder and the image encoder work together to make the diagonal elements as large as possible and the off-diagonal dot products as small as possible. That's the basic idea: two encoders learning to output the same vector when they're given a true pair of data.
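Here is a minimal PyTorch sketch of that symmetric cross-entropy objective. It's my own rendering of the idea rather than the paper's exact code; the temperature value and embedding size are made up.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """image_emb, text_emb: (N, d) embeddings for N matching image-caption pairs."""
    # Normalize so the dot product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (N, N) matrix of similarities; entry [i, j] compares image i with caption j.
    logits = image_emb @ text_emb.t() / temperature

    # The matching caption for image i sits at position i, so the "correct class"
    # for row i (and for column i) is just i.
    labels = torch.arange(len(image_emb))
    loss_per_image = F.cross_entropy(logits, labels)      # over each row
    loss_per_text = F.cross_entropy(logits.t(), labels)   # over each column
    return (loss_per_image + loss_per_text) / 2

# Toy usage with random 512-dimensional embeddings for a batch of 8 pairs.
loss = clip_style_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss)
```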
At test time, if we want to do classification on a new image, we say: this image could be one of the following, a plane, a car, a dog, or a bird. We create little dummy captions, "a photo of a plane", "a photo of a car", "a photo of a dog", "a photo of a bird", and pass all of these made-up sentences through the text encoder to get embeddings (this is section 2 of the figure, "create dataset classifier from label text"). If the image is indeed a photo of a plane it should match the first text embedding, if it's a car it should match the second, and so on. We pass the image through the image encoder to get I1, then take the dot product of I1 with every caption embedding, which tells us how similar our image is to each of the captions we created. Because the text encoder and image encoder have been trained to output the same thing when the caption matches the image, if the image is of a dog we'll observe that I1 dot T3, with T3 being the embedding of "a photo of a dog", is very large while everything else is small, and we assert that the image in question is a dog. This is called zero-shot prediction, and it's used to show that the encoded representations of text and images are extremely meaningful: if we can correctly classify a new image we have never seen before with zero fine-tuning whatsoever, that shows how powerful the method is.

So that's the overview of CLIP, in perhaps more detail than needed. The most important points: CLIP takes two models, an image encoder and a text encoder, and trains them so that they output the same 512-dimensional vector when the inputs match, such as "Pepper the Aussie pup" and the picture of said pup, and so that the dot products are very small when the two things do not match. It's just a black-box vision Transformer and a black-box language Transformer; the only unique thing is the way the loss is formulated, pushing things that are the same close together and things that are different far apart. The pseudocode included in the paper does exactly this: take a batch of images and the batch of corresponding captions, and apply a cross-entropy loss over all the rows and all the columns, where the correct entries are the ones along the diagonal.
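For a sense of what zero-shot prediction looks like in practice, here is a hedged sketch using OpenAI's released clip package; the model name, prompts, and image path are placeholders, and any CLIP checkpoint would work the same way.

```python
import torch
import clip                     # OpenAI's CLIP package: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

classes = ["plane", "car", "dog", "bird"]
prompts = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)
image = preprocess(Image.open("some_image.jpg")).unsqueeze(0).to(device)  # placeholder path

with torch.no_grad():
    image_emb = model.encode_image(image)
    text_emb = model.encode_text(prompts)
    image_emb /= image_emb.norm(dim=-1, keepdim=True)
    text_emb /= text_emb.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_emb @ text_emb.T).softmax(dim=-1)

print(dict(zip(classes, probs[0].tolist())))   # highest probability = predicted class
```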
So does this work? What are the effects? What we observe is that CLIP features are extremely robust to domain shift: because CLIP is trained on such a wide variety of data, it generalizes extremely well. CLIP features can be used to classify images, as we see on the right-hand side, with extremely high accuracy considering that no fine-tuning is going on. The comparison there is against a ResNet-101 model that was pre-trained on ImageNet: on data it's never seen before, like sketches of bananas, the ImageNet-trained model does badly, getting it right only about 25% of the time, whereas zero-shot CLIP, pre-trained on such a wide variety of data and with no fine-tuning whatsoever, still gets around 60% accuracy on those sketches. That's extremely impressive and shows how generalizable the learned latents are. Zero-shot prediction is even competitive across a ton of different datasets: another diagram compares a linear layer trained on top of a frozen pre-trained ResNet-50 against zero-shot CLIP latents, and even on something as complex as ImageNet, CLIP gets about 1.9% better performance. The point is just how general CLIP latents are: the outputs of the frozen image encoder are extremely meaningful, which makes it extraordinarily easy to classify images or use these latents for any downstream task you can think of. That's why this is a really important idea and a really important paper.

Speaking of downstream tasks: we've covered classification, where CLIP can classify objects it's never seen from wildly varying distributions. Another example is robotics, where a robot has an image of the scene and from it needs to figure out where an object is, how to path to it, and how to pick it up; its deep decision-making network needs some kind of embedded representation of the images of the scene, and CLIP latents are very useful there. If you don't want to learn an entirely new vision model from scratch, you can take the frozen CLIP image encoder out of the box, pass its latents into your decision-making network, and it actually works, because those latents describe the image contents extremely well. This representation-learning idea is also at the core of text-to-image generation, things like Stable Diffusion, so it's basically everywhere. And, just because it makes me happy, it's very much applicable in the world of 3D vision too: with NeRFs, which we talked a little about, you can optimize a radiance field where the loss function asks the rendered image to match the CLIP latent corresponding to an input caption.
Say we start with literally nothing except the caption "washing blueberries" and we want to create an object that looks like blueberries being washed. We take the caption, put it through the CLIP text encoder (in purple), and get its 512-dimensional latent. Our radiance field starts out randomly initialized; we render out an image from it and pass that image through the CLIP image encoder (the vision Transformer, in green). Then we compare latents: if the radiance-field reconstruction of the object is very good, the rendered image's embedding will be very similar to the embedded caption. Every single step of this is differentiable, from rendering out images to passing them through the ViT, so we can backpropagate from the difference between the caption embedding and the rendered-image embedding all the way back to the parameters of the radiance field, and in this way we keep optimizing it. You can see some objects that were generated this way: "a bouquet of flowers sitting in a clear vase" was the prompt given here, and this was the radiance field that was optimized. I should note this is extremely expensive, because rendering out images from a radiance field, as we talked about, is not cheap, but I think it's a really cool idea that demonstrates just how universal CLIP latents are. All we had to go off of was the CLIP embedding for "a bouquet of flowers sitting in a clear vase", and we were able to generate a neural radiance field whose rendered images, after being passed through the image encoder, give us latents that are the same. The fact that it clearly works, that the latents are clearly meaningful, is pretty incredible.

So that is the overarching idea of CLIP. Because we trained on such a wide variety of data, used the relationships between different representations of the same bit of data (captions and images), and used all the information available to us, we have latents generated for any sentence or any image that are basically usable out of the box in any possible domain. That's what CLIP is, why it's important, and some of its use cases. Multimodal learning is obviously much larger than just CLIP, but this is definitely one of the places where this subfield really came to life, and it's one of the more important papers; things have obviously evolved since, but this is sort of the OG of multimodal representation learning. So yeah.
CS_198126_Modern_Computer_Vision_Fall_2022_UC_Berkeley
CS_198126_Lecture_19_Advanced_Vision_Pretraining.txt
Good, yeah. So today's lecture is going to be on self-supervised pre-training for CV. I hinted at this back when we covered the intro-to-pretraining lecture, where I mentioned there would be a dedicated lecture entirely covering models that use SSL in CV. Since that intro lecture was a while ago, let's go through some review of the concepts we learned back then. We learned that deep learning is representation learning: what deep learning is really doing is trying to extract representations from your data that are meaningful in some way and can be used to solve some kind of task at hand. We also learned that we can take representations learned by one network and transfer them to a different network; this whole process is called transfer learning. There are multiple ways of doing it: you could freeze the first network and take embeddings from that frozen network and build another network on top of them, or you could fine-tune the first network directly. Both are fine, and in either case the network that has already been trained is called a pre-trained network.

Why do we want to do transfer learning? Again, we talked about how datasets that are huge and diverse can lead to better and more generalizable representations, which is fairly intuitive: if you give a model more examples and more diverse examples, it's going to learn more. But sometimes you don't have these large datasets and you're only working with small data, which is a very real possibility in the real world, and you can't develop high-quality representations from that small dataset alone. What you can do instead is take a model trained on a big dataset and simply reuse the representations it learned for your smaller network. So we've talked about why this is useful; today we're going to talk about methods dedicated entirely to training models on these huge datasets so they learn good representations that can be transferred to different kinds of downstream tasks later on.

I put in a slide reviewing supervised versus unsupervised learning; at this point people know what those are (if not, I'd recommend pausing the lecture and going through it later), so I'm going to skip that slide and jump directly to what self-supervised learning is. Self-supervised learning is a branch of unsupervised learning where, again, you don't have any labels associated with your data, but what you do is create labels from your data in some fashion or other. What differentiates each SSL method from the others is really the way you create those labels in the first place. Why is this useful? I think it's demonstrated well by the analogy Yann LeCun gave in his 2016 talk at NeurIPS. With supervised learning, you're passing in hand-labeled data, so the model sees a label pretty regularly. If you're doing something like RL, you're getting supervision, some sort of label, very, very infrequently. But with unsupervised learning, you're kind of getting everything at once. A common unsupervised learning technique when working with sentences is to break the sentence up into different words, mask out multiple words, and have the model predict the masked-out words from the remaining sentence; in that case, the model effectively gets the entire sentence as its learning signal.
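Here's a tiny sketch (my own, with an arbitrary 15% masking rate) of how such labels are created out of nothing but raw text; the (input, target) pairs come entirely from the data itself.

```python
import random

MASK, MASK_RATE = "[MASK]", 0.15

def make_masked_example(sentence: str):
    """Turn an unlabeled sentence into a (masked input, targets) training pair."""
    tokens = sentence.split()
    inputs, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < MASK_RATE:
            inputs.append(MASK)
            targets[i] = tok        # the hidden word becomes the label
        else:
            inputs.append(tok)
    return inputs, targets

# Masked positions vary run to run; each masked word becomes a prediction target.
print(make_masked_example("the quick brown fox jumps over the lazy dog"))
```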
A common unsupervised learning setup, when you're working with sentences, is to break a sentence up into words, mask out multiple words, and have the model predict the masked-out words from the remaining sentence. In that case the model effectively gets the entire sentence as signal. In supervised learning you might just get a single label: if I'm training a model for sentiment classification, I just get a label that says positive or negative. And in something like RL I might not even get a real label; I might just get a single scalar reward every thousand iterations or so. So what LeCun argued is that, in this sense, you get the most information out of unsupervised learning, and that is what forms the base: if you view intelligence as a cake, unsupervised learning forms the main body of the cake, while supervised learning and RL are just the icing and the cherry on top. The meat of the cake comes from this unsupervised learning process, and our goal today is to look at the different kinds of unsupervised learning methods that are used in CV. I think this quote summarizes the whole topic pretty well. Pierre Sermanet, a robotics researcher at Google Brain, said something along the lines of: if you give a robot a label, you feed it for a second, but if you teach a robot to label, you feed it for a lifetime. This is basically referring to self-supervised learning, because you are creating labels out of nothing and then training models with them, and that process leads to really good results. Before I move on to specific kinds of methods, does anyone have any questions about the general concept of self-supervised learning? Yeah, exactly: if you look at this image over here, you can view each row as the full input, but only the blue region is what's passed to the model, and it has to predict the gray region from the blue region. You're hiding some information from the model and it has to predict the hidden part from the unhidden part. That's actually just one way of doing SSL. It turns out there is a class of methods that doesn't necessarily follow this paradigm of predicting hidden stuff from unhidden stuff, and this is something called contrastive learning. At a high level, the goal of contrastive learning is to optimize for something called similarity. How you define this notion of similarity depends on the problem you're working with, but what contrastive learning is trying to do is learn a low-dimensional latent space where objects that are considered similar are close together and objects that are considered dissimilar are farther apart. The hope is that if you're able to learn a space in this manner, you must be learning something meaningful about the objects themselves in order to cluster them in this particular fashion. Contrastive learning was actually introduced as a supervised idea, but it has become more popular in unsupervised approaches, and we will go through some of those methods in the next few slides. Let me draw this out. Say
that you have some five-dimensional embedding space, and you have an image of a dog sitting there, call it dog one, then a dog two over here, and maybe a cat over here. What we want to learn is a space such that dog one and dog two end up pretty much next to each other, while the image of the cat is farther apart. Similarly, if you had another cat over here, you'd want cat two to be close to cat one. You're trying to learn a meaningful embedding space in that sense. Okay, so that is one way of defining similarity. How do you actually train networks in this space, meaning what kind of objective do you use? It turns out there are many kinds of objectives you could use; I want to highlight the two most important ones on this slide. Your loss function takes in the input, a positive sample, and a negative sample, and your goal is to push the embedding of your input x as close as possible to the embedding of the positive sample x-plus and as far apart as possible from the embedding of the negative sample x-minus. If you look at the objective over here, what we are doing is comparing the squared distances between the embedding of the input x and the positive sample x-plus, and between x and the negative sample x-minus: minimizing this triplet loss means minimizing the first distance while maximizing the second distance, which gives you exactly that push-and-pull effect. This triplet loss is actually what's used to train face recognition networks: if you have a network that can tell who the person in an image is by looking at their face, this is how those networks are trained. But you can go even further. In the triplet loss we take one positive sample and only one negative sample, so intuitively you're only pushing your input x away from that single negative. What if you want to push it away from all the possible negatives? That's not really feasible, because there can be millions of negative samples, so you have to limit yourself somewhat, but the idea is that you can use more than one negative sample and push your encoding, which I'll call c for the context, as far away from each negative as possible. Now this loss looks quite different from the one above; in fact it should look very similar to how cross-entropy and softmax are defined. In a sense you can view this as a classification problem: given the context vector c, you are trying to predict which candidate is the positive sample. Remember, in classification your goal is to map an object to some label; there is one correct label and every other label is considered incorrect. We're doing the same thing for this contrastive loss: you define the positive sample as the correct class and all the negative samples as the incorrect classes.
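To make the two objectives concrete, here is a minimal PyTorch sketch of both: a margin-based triplet loss on squared distances, and an InfoNCE-style loss that treats the positive as the correct class in a softmax. The embedding sizes, the margin, and the temperature below are made-up illustrative values, not anything taken from the slides.

```python
# Minimal sketch (PyTorch) of the two contrastive objectives described above.
import torch
import torch.nn.functional as F

def triplet_loss(x, x_pos, x_neg, margin=1.0):
    """Pull x toward x_pos and push it away from x_neg (hinge on squared distances)."""
    d_pos = (x - x_pos).pow(2).sum(dim=-1)   # squared distance to the positive
    d_neg = (x - x_neg).pow(2).sum(dim=-1)   # squared distance to the negative
    return F.relu(d_pos - d_neg + margin).mean()

def info_nce_loss(context, positives, negatives, temperature=0.1):
    """Softmax / cross-entropy view: the positive key is the 'correct class'
    among 1 + K candidates, scored here by cosine similarity."""
    c = F.normalize(context, dim=-1)                  # (B, D)
    pos = F.normalize(positives, dim=-1)              # (B, D)
    neg = F.normalize(negatives, dim=-1)               # (K, D) shared negatives
    logits_pos = (c * pos).sum(dim=-1, keepdim=True)   # (B, 1)
    logits_neg = c @ neg.t()                            # (B, K)
    logits = torch.cat([logits_pos, logits_neg], dim=1) / temperature
    labels = torch.zeros(c.size(0), dtype=torch.long)   # index 0 is always the positive
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings:
B, D, K = 8, 128, 32
print(triplet_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(B, D)))
print(info_nce_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(K, D)))
```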
Any questions about the loss functions? Sorry, go ahead: how do we decide whether something is a positive? We will actually get to that in just a second, but the idea is that even though you don't have explicit labels, you can still create labels in this particular way by taking something to be the positive and treating everything else as negatives. And g here is just some function that compares two objects; you could use something like cosine similarity, for example, it doesn't really matter too much. So, one model that operationalizes this idea is MoCo, or Momentum Contrast. What they do is frame contrastive learning as a differentiable dictionary lookup. Say I have inputs x1 through xn. I can pass each of them through a key encoder and a query encoder to get k1 and q1, and similarly for all the others, so k2 and q2, all the way up to kn and qn. Now I collect all of my keys and queries in one place, and for each query I define the positive key as the one that corresponds to it: if I'm looking at qi, then ki is the positive sample and every kj other than ki is a negative. My goal is to map the query for a given sample x to its corresponding key. That is how the authors of this paper define contrastive learning. And how do you get these keys and queries? By passing the inputs through neural networks, which we can subscript by k and q to denote which one is for the keys and which one is for the queries. Okay, so we've defined what is considered similar in this setting; now let's try to make it more precise. Intuitively, both encoders are being fed the same sample xi. We're not required to use the same network for both, but ultimately we want one network that develops really good representations which we can then use for all kinds of downstream tasks, so keeping two completely independent networks would be somewhat wasteful. How do we get both the key and the query from the same image? One idea that is very common in contrastive approaches when it comes to computer vision is data augmentation: you can take an image of a dog and add noise to it, or crop and resize it, and applying these augmentations to your original input doesn't change the semantic content of the image (there is still a dog in it), but it does change the pixel values. So if we define an augmentation as some function v, where applying v to a sample returns a new view of that image, I can apply two augmentations v1 and v2 to the same sample x and get a key and a query from the two views. This idea is called instance discrimination, because you are discriminating between different views of the same instance, and this is pretty much how all contrastive CV networks define the notion of similarity: you want two views of the same image to be similar, and two views from different images to be dissimilar.
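Here is a rough sketch of the data side of instance discrimination: two random augmented views of the same image become the (query, key) positive pair. The exact augmentation recipe below is illustrative rather than the one from any particular paper, and the query_encoder / key_encoder names in the commented usage are hypothetical stand-ins for whatever encoders you train.

```python
# Two stochastic views of the same instance via torchvision augmentations.
import torch
from torchvision import transforms
from PIL import Image

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

def two_views(pil_image):
    """Apply the (random) augmentation twice to get two views of one instance."""
    return augment(pil_image), augment(pil_image)

# Hypothetical usage:
# img = Image.open("dog.jpg").convert("RGB")   # some image file
# v1, v2 = two_views(img)                      # each is a (3, 224, 224) tensor
# q = query_encoder(v1.unsqueeze(0))           # query embedding
# k = key_encoder(v2.unsqueeze(0))             # positive key embedding
```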
Now we have all the ingredients we need to define this network: we know how to get our positive examples, we know how to get our negative examples, and we can throw those into one of the contrastive losses we highlighted earlier and train a network with it. There are a few practical considerations to keep in mind. Experiments have shown that if you consider a huge number of negative examples at once, so that you have to pick out the single positive from a huge pool of negatives, you get better representations. Intuitively that's because a huge pool of negatives makes the task harder, which means the model has to work extra hard to pick out the positive from that big batch of negatives, and this harder task encourages better representations. However, this is not exactly feasible in practice, because to generate each negative key you have to run an image through the network, so whether it's 2,048 negatives or tens of thousands, that many extra forward passes for a single training iteration can be very time-intensive. Naively, this forces us to use small mini-batches, but that's not what we want; we want lots of negatives. So one idea comes to mind: what if we pre-compute the keys, meaning we pass an image through, compute each view's embedding, and store them somewhere? That sounds like a plausible idea, so that's some progress. You could store all of these pre-computed embeddings in a large memory bank and keep sampling from it while you're doing the contrastive learning during actual training. But what MoCo does instead is maintain a queue. You compute the embeddings and store them in a first-in, first-out queue: as training progresses you keep computing more embeddings and pushing them onto the queue, and once the queue reaches its capacity you kick out the oldest embeddings. The reason we do this is that these embeddings are coming from a single network which is updating over time; the network is not static, its weights keep changing because of backpropagation, which means embeddings from an earlier time step can be very different from the embeddings you get at a more recent time step. In order to maintain consistency across time, we can't keep embeddings from very far back in the training process.
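A minimal sketch of that FIFO queue of negative keys might look like the following; the dimension and capacity are assumed placeholder values, and this is a simplified single-process version of the idea rather than the actual MoCo implementation.

```python
# FIFO queue of key embeddings, used as the pool of negatives.
import torch

class KeyQueue:
    def __init__(self, dim=128, capacity=4096):
        self.buffer = torch.nn.functional.normalize(torch.randn(capacity, dim), dim=1)
        self.ptr = 0
        self.capacity = capacity

    @torch.no_grad()
    def enqueue(self, keys):
        """Overwrite the oldest entries with the newest batch of keys."""
        n = keys.size(0)
        idx = (self.ptr + torch.arange(n)) % self.capacity
        self.buffer[idx] = keys
        self.ptr = (self.ptr + n) % self.capacity

    def negatives(self):
        return self.buffer   # (capacity, dim), passed as negatives to the loss

queue = KeyQueue(dim=128, capacity=4096)
queue.enqueue(torch.nn.functional.normalize(torch.randn(256, 128), dim=1))
print(queue.negatives().shape)   # torch.Size([4096, 128])
```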
Okay, so that's a good idea, and there's actually one final step before we get the whole MoCo model. We have solved the problem of computing all of these embeddings and storing them somewhere so we can use lots of negatives, but think about what's happening during training. With a contrastive loss, all of the negative samples are involved in the loss, which means that if you take its gradient, backpropagation has to run through the embedding network for every negative sample you're considering at a given moment. If you have, say, two thousand negative samples, you'd have to compute gradients through the encoder for each of those two thousand examples, and again this is computationally infeasible. So we have only solved the problem of the forward pass, not the backward pass. You might imagine: okay, we're getting our keys and queries from these embedding networks, what if we simply don't update the weights of the key encoder at all? That by itself doesn't work, again because of consistency issues: you want the key encoder and the query encoder to essentially be the same network, since conceptually you're getting your positive and negative views through one network, and a frozen key encoder would drift away from the training query encoder. The next thought is to simply copy the weights from the query network over to the key network at every step, but this doesn't work either, for the same consistency reason: the query encoder is changing too rapidly, which would make the key encoder change too rapidly as well, and comparing against keys generated by wildly different versions of the encoder gives you a very inconsistent training signal. So the fix is: instead of changing the key encoder's weights rapidly, you do a very slow update. You take a moving average of the key encoder's current weights and the updated query encoder's weights, and set that as the key encoder's new weights. In this particular way your key encoder is changing neither too fast nor too slow, and you get around the consistency issue by always keeping a somewhat recent copy of the query network as the key network, so the embeddings you produce at nearby time steps are similar enough for the contrastive learning process to work. Any questions about how this model works? What's happening is that the key encoder is slightly lagging the query encoder, but as training progresses they're both updated over time, and the updates from the query side are gradually reflected in the keys as well. It turns out this process gives you really good representations. Here are some results from those representations on downstream tasks. The authors run this pre-training on the ImageNet dataset, and during the whole process we never use any of the labels: we take the images directly and only work with their views, never looking at what label each image has. Once this model has been trained, they either fine-tune it or train a linear classifier on top of it for things like ImageNet or Pascal VOC, and you can see that in pretty much all the cases the MoCo model beat out the previous models. This turned out to be a very strong objective that people actually still use today. The paper came out in 2019, which is actually pretty old by deep learning standards, considering how fast things move, but the technique is still very popular today.
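A minimal sketch of that slow moving-average update is below; the momentum value 0.999 and the tiny stand-in encoder are illustrative, not the actual MoCo configuration.

```python
# Momentum ("slow moving average") update of the key encoder.
import copy
import torch
import torch.nn as nn

query_encoder = nn.Sequential(nn.Linear(512, 128))   # stand-in encoder
key_encoder = copy.deepcopy(query_encoder)            # starts as an exact copy
for p in key_encoder.parameters():
    p.requires_grad = False                            # no backprop through the keys

@torch.no_grad()
def momentum_update(q_net, k_net, m=0.999):
    """key_weights <- m * key_weights + (1 - m) * query_weights."""
    for q_param, k_param in zip(q_net.parameters(), k_net.parameters()):
        k_param.mul_(m).add_(q_param, alpha=1 - m)

# Called after every optimizer step on the query encoder:
momentum_update(query_encoder, key_encoder)
```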
Any questions about MoCo before I move on? If not, we are going to consider another contrastive learning method. In my slides I have another method called CPC that I didn't go over in class; the slides will be shared later, but I skipped it because I don't think CPC is that important and I didn't want to spend time on it. CPC is also a contrastive learning method, it stands for contrastive predictive coding, and it's a fairly technical model, similar to MoCo: with MoCo you have this queue of embeddings, you also have to keep track of a moving average, just a lot of details to keep track of, and coding these up can be kind of hairy because of all the moving parts you have to make sure are actually there. So in 2020 people came up with a method called SimCLR, which stands for "a simple framework for contrastive learning of visual representations," and they said: you don't have to do any of that technical stuff from before; you can do a very simple process instead. You take some sample x, you sample data augmentations t from your set of augmentations, and you get different views xi and xj by applying those augmentations to x. You pass each view through the encoder network, represented here by f; it performs the same function as the key and query encoders, but instead of taking two models and keeping their weights close, you use a single encoder this time, so you maintain one encoder rather than separate key and query encoders. That yields the embeddings hi and hj for each view. Now, instead of passing these embeddings directly into the contrastive loss, what you do instead is pass them through a small MLP, represented by g in this figure, to get zi and zj, and you run the contrastive loss on top of those. The contrastive methods so far had been running the loss on the direct embeddings, but the authors of this paper found that if you pass the embeddings through, say, a two-layer MLP projection head, you get even better results in the end. Why this is true isn't exactly known; it's more of an empirical fact that this extra head on top of the embedding network yields better representations. So this is the process they use: you don't have to maintain a moving average or separate networks that go through this whole dance. But they also argue that to compensate for dropping all that machinery, you do have to compromise on compute, because this technique only really works well if you use very large batch sizes, which automatically means more computational power and longer training times.
A MoCo model could converge way faster than a SimCLR model, but if you train the SimCLR model for a long time you'll get better results in the end; that is the compromise you have to make with this framework. Again, they use a contrastive loss that looks very similar to the InfoNCE-style loss I showed earlier, but this time they also introduce a temperature scaling term: they take the projected embeddings from the encoder, compute the cosine similarity between them ("sim" here is just a similarity function, the exact choice doesn't matter too much), and divide by a temperature. Earlier models did not have this temperature term; if you go back to the loss I shared earlier, we just had some comparison function g with no scaling, which means the temperature was effectively fixed at one. The SimCLR authors found that allowing the temperature to be adjusted gives you even better representations in the end, so they introduced it as a hyperparameter. They also do some ablation studies. The notion of similarity for this model again comes from the fact that two views of the same image should be considered similar and two views of different images should be considered dissimilar, and the views are generated by data augmentations, so they looked at which augmentations perform the best. They have this table in the bottom left where the column represents the augmentation they applied first and the row represents the augmentation they applied second, and you can see which combination of augmentations yields the best results. Surprisingly, and I did not expect this, applying color-based augmentations yields the best results, which is sort of unintuitive, because if you were to take an apple and change its color to green, then arguably a green apple is a different thing from a red apple; but they show that this sort of augmentation still works the best even though it might not make too much intuitive sense. Another thing they show is that longer training runs and larger batches lead to better representations, which you can see in the plot in the bottom right: as the number of training epochs increases you see an increasing trend, and similarly as the batch size increases the accuracy goes up as well. So let's see how well this model performs. If you look at the table on the right, when you take the representations from this model and train a linear classifier on top of them for ImageNet classification, you get the best results seen up to that point. SimCLR was also a huge breakthrough because it was the first time a self-supervised method was able to beat a fully supervised method. Typically, when people train these SSL models, the baseline is a ResNet trained on ImageNet in a supervised manner using the full labels, and the representations from that model serve as the baseline. Even if it might not look like it in this particular figure, SimCLR was the first method that was able to beat that fully supervised baseline on ImageNet classification.
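Here is a compact sketch of that SimCLR-style setup: a single shared encoder f, a small projection MLP g, and a temperature-scaled cosine-similarity cross-entropy over the 2N views in a batch. The layer sizes and the temperature of 0.5 are illustrative, not the paper's settings.

```python
# SimCLR-style single-encoder objective with a projection head and temperature.
import torch
import torch.nn as nn
import torch.nn.functional as F

f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())  # stand-in encoder
g = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))    # projection head

def nt_xent(z1, z2, temperature=0.5):
    """z1[i] and z2[i] are projections of two views of image i."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)         # (2N, D)
    sim = z @ z.t() / temperature                               # cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))                  # never match yourself
    # the positive for row i is the other view of the same image
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

x1, x2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)   # two augmented views
loss = nt_xent(g(f(x1)), g(f(x2)))
print(loss)
```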
Sorry, I misread the table for a second. What the table is showing is that if you take the representations from this network and fine-tune using just one percent or just ten percent of the total ImageNet labels, SimCLR performs the best. If you want a direct comparison where all the labels are used for fine-tuning, the chart on the left shows that the bigger SimCLR (4x) model performs the best. Now, there is a big compromise you have to make: even though the SimCLR 4x model is able to beat the supervised baseline, it is about sixteen times bigger, with roughly 400 million parameters, whereas the supervised baseline has about 25 million. So this paper also showed that you need these huge models to learn better and better representations in a self-supervised manner, and I think that inevitably makes some sense. The way I like to think about it: if you're working with a supervised model, the supervision comes from the actual labels, and since the labels dictate what's going on, you don't really have to learn representations that are all that generalizable, so you can fit them in a smaller model. But if you don't provide the labels, you have to learn something more general, which means you need a bigger model to fit that information in. This may be completely wrong, but that explanation works in my mind. Any questions about SimCLR before I move on? Okay, I'm going to cover the final contrastive learning method today, which is funny, because technically BYOL, or Bootstrap Your Own Latent, is not actually a contrastive learning method at all. It feels similar to one, but it's actually not. The way BYOL works is: you have some input image x, you again apply different data augmentations to it and generate different views v and v-prime, and you pass v and v-prime through two encoder networks. So in a sense we're going back to the MoCo setup, maintaining two encoders like the key and query encoders, except instead of calling them key and query they're called the target network and, I believe, the online network. You take the representations from those, and, importing the idea from SimCLR, you pass those embeddings through another small network, say g-theta, to get projections z and z-prime, and you define your loss on top of those. So in a sense this method is combining what MoCo and SimCLR did into one recipe. But it is also different in an important way: SimCLR and MoCo both used negative examples. They treated views from different images as dissimilar and only the views from the same image as similar. That concept does not really exist in BYOL. BYOL doesn't compare across different images in a batch at all; it only looks at a single image at a given time, and what it does is try to make the projection z and the projection z-prime close together. It doesn't consider any negative examples, so there is no real contrast between positives and negatives.
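A rough BYOL-style sketch is below: an online encoder plus projector, a small predictor head (which the actual paper adds on the online branch), and a target network kept as a slow moving average of the online weights, with no negatives anywhere. All sizes and the momentum value are illustrative.

```python
# BYOL-flavored objective: match the online prediction of one view to the
# target projection of the other view; update the target by moving average.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
projector = nn.Sequential(nn.Linear(256, 128))
predictor = nn.Sequential(nn.Linear(128, 128))
target_encoder = copy.deepcopy(encoder)
target_projector = copy.deepcopy(projector)
for p in list(target_encoder.parameters()) + list(target_projector.parameters()):
    p.requires_grad = False

def byol_loss(v1, v2):
    """Online prediction of view 1 should match the target projection of view 2."""
    p = F.normalize(predictor(projector(encoder(v1))), dim=1)
    with torch.no_grad():
        z = F.normalize(target_projector(target_encoder(v2)), dim=1)
    return (2 - 2 * (p * z).sum(dim=1)).mean()   # equivalent to a cosine-distance loss

@torch.no_grad()
def update_target(m=0.99):
    online = list(encoder.parameters()) + list(projector.parameters())
    target = list(target_encoder.parameters()) + list(target_projector.parameters())
    for q, k in zip(online, target):
        k.mul_(m).add_(q, alpha=1 - m)

v1, v2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)   # two views of a batch
loss = byol_loss(v1, v2) + byol_loss(v2, v1)                     # symmetrized
```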
That means it's not really contrastive learning, even though it feels like contrastive learning, because the techniques it uses were pioneered by models like MoCo and SimCLR, which are contrastive methods by definition. What it's really doing is this: how many of you have heard the term bootstrapping before? Bootstrapping is a statistics term which basically says that if you have a bunch of data points, you can fit a model to them, then use the estimates from that model to fit another model, and keep repeating this process. You never have the true quantities from the dataset; you're using the estimates produced by one model as the targets for the next. This is kind of what BYOL is doing as well. It maintains two networks, and the loss function takes the projections from both networks and optimizes an objective defined using both of them, so the target network is driving the supervision for the online network, while the target itself is built as a slowly updated copy of the online network. How many of you have heard of deep Q-learning? If you know RL, this is essentially the same trick that happens in deep Q-learning, and that was part of the inspiration for BYOL; the paper was published by DeepMind, in 2020 I think. It uses this bootstrapping process to build up better and better representations. And since it's not contrastive, it's pretty robust to changes in batch size and to the type of augmentations you use: SimCLR showed that certain augmentations perform better than others and that you should always be using bigger batch sizes, but BYOL also works pretty well with smaller batch sizes, and you don't have to worry as much about exactly which augmentations you choose. Now, this particular method feels very janky, because you're somehow learning these representations without any discrimination between images, since you don't have explicit negatives and positives, but you still get really good results, and why exactly that happens is actually not clear. I know there have been some empirical investigations into why BYOL works this well even though there's no contrastive signal at all, but it still works well and no one is very sure why. And it actually beats SimCLR on those exact same benchmarks we showed earlier. Okay, so that is it for contrastive learning. I just realized I did not time the presentation that well, because we're at 7:50 and I'm only done with the first half of the lecture, so we're going to move on to the second class of methods and I'm going to speed through them a bit; sorry about that, I should have timed this better. This next method is called the DAE, or denoising autoencoder. We are moving on from contrastive learning methods to methods that destroy the input in some way and then have to reconstruct the original input. These methods tend to work well with an autoencoder-based structure, because you can have the encoder take in the destroyed input and yield an embedding vector, have the decoder take the embedding as input and yield the original image back, and then compare the reconstructed image with the original image
and use that comparison as the loss objective. One of the methods people used in the early days was the denoising autoencoder: you take an image, add noise to it, pass it through an autoencoder, and the goal is to predict the denoised image. The idea for this network came from the human ability to recognize partially occluded or corrupted images: even we as humans can tell that this image of a 4 with noise added to it is still a four, but a machine can't tell that, so we are training a machine to do the thing we can already do naturally. You have to make sure you add the right amount of noise: if you don't add any noise, you're basically just training an autoencoder, but if you add too much noise you make the task unnecessarily hard and your model won't learn anything. It does mean that if your model is able to learn with a large amount of noise, it's going to be learning some very good representations. You can look at these images over here: when no noise is added, the features the model learns are kind of random, they don't make too much sense; but if you destroy, say, half of the image and try to predict the original image from that, you can see that the model is trying to learn something. In each image on the right you can make out certain shapes being formed; you can kind of see that this looks like maybe a nine, this looks like maybe a six over here. It's learning something meaningful, which is why this sort of model works really well. There is another model called the stacked denoising autoencoder. This was very popular back in the day, because before modern techniques like batch norm and dropout were invented, training neural networks was incredibly hard; training was very unstable. If you were to define a CNN, pass an image through it, and just run your normal classification loss like we do today, it would simply not work, which I know sounds very weird because this is what our generation is used to. So what people did instead was train denoising autoencoders: you have an image x, you destroy parts of it to get some x-tilde, you pass it through the encoder to get an embedding, you pass the embedding to a decoder to get a reconstruction, and your goal is to make the reconstruction as close to x as possible; you learn the encoder in this particular manner. Then they would stack these autoencoders on top of each other: take the embeddings from this encoder and train another encoder that takes those embeddings as its input, and keep repeating the process, stacking encoders up, and the output of this chain of encoders is what you'd take as your final representation. This method actually worked really well back then, but then modern deep learning became a thing; we got things like batch norm and Adam that accelerated training even further, and people kind of stopped using it.
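A minimal sketch of one denoising-autoencoder training step on flattened images follows; the layer sizes, noise level, and learning rate are illustrative placeholder values.

```python
# One training step of a denoising autoencoder on flattened 28x28 images.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
decoder = nn.Sequential(nn.Linear(128, 784), nn.Sigmoid())
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def dae_step(x, noise_std=0.5):
    """Corrupt the input, reconstruct the clean input, and take a gradient step."""
    x_tilde = x + noise_std * torch.randn_like(x)    # additive Gaussian noise
    recon = decoder(encoder(x_tilde))
    loss = nn.functional.mse_loss(recon, x)          # compare against the CLEAN x
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x = torch.rand(64, 784)   # a fake batch standing in for real images
print(dae_step(x))
```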
I only showed the DAE and SDAE methods here because they were historically important, even though people don't really use them anymore. Okay, so that was a pretty good idea that worked reasonably well back then; let's see if we can extend it even further. In a DAE we were adding noise to an image; what if we go one step further and just completely mask out parts of the image? You take this image of two football players playing, you completely hide the middle region, just throw it out, you pass this masked image to a model, and your goal is to predict the masked region. This model is called the context encoder, because the goal is to predict the hidden part from the surrounding context. Fun fact: this work came out of Berkeley in 2016; it's from Professor Alyosha Efros's lab, if you know him. Now, a naive implementation of this network would be to take the masked image, predict the masked patch using some sort of deconvolutional network like you might have seen in the segmentation or GANs lectures, and try to minimize the L2 loss between the predicted patch and the actual patch. But that doesn't work too well, because the L2 loss leads to the average of all the things that could have been there: if there are multiple plausible patches that could fill the masked region, minimizing the L2 loss tends to produce the average of all of them, and you end up with a blurry blob in the center. So what the authors did is combine it with another network, a discriminator, which you might have seen in the GANs lecture: it takes the predicted patch and tries to classify it as real or fake, and this adversarial objective encourages more realistic reconstructions and leads to better representations. You can see that if you use just the L2 loss you get something very blurry, if you use just the adversarial loss you get a realistic-looking but essentially arbitrary patch in the center, but if you take a combination of the L2 loss and the adversarial loss, you get something that is both meaningful and realistic at the same time. This method was kind of popular back then even though the results were kind of underwhelming: on classification it could not beat an ImageNet-supervised baseline, but it was able to get pretty good results on detection and segmentation, so I guess that was some progress on that front. Still, this method generally underperformed, and there have been further improvements ever since. Honestly, the main reason I put this network in here is to set up our next network, which is MAE. This is an important one, because it came out only last year and it kind of blew everything out of the water. MAE uses a Transformer backbone; all the methods you've seen so far used CNN backbones, so a MoCo or SimCLR model would use something like a ResNet-50 as its encoder, whereas MAE replaces that with a ViT, and it uses essentially the same objective as the context encoder.
You take an image, mask out patches from it, pass the unmasked regions to an encoder, pass the encoded embeddings to a decoder, and your goal is to reconstruct the original image; the hope is that you learn good representations in that process. The encoder works very much like a regular ViT: you take an image, chop it up into patches, randomly mask out some fraction of the patches, and pass only the unmasked patches to the encoder, just like a ViT would. I'm not going to go through all the text on the slide; read it in your own time, especially if you already know how a ViT works. The decoder is the interesting part. Keep in mind that you're only passing the unmasked regions into the encoder, because there's nothing to pass for the masked ones. What the decoder does is take the embeddings of the unmasked regions and insert something called a mask token at each position that was masked out, so you get a sequence combining mask tokens and the embeddings of the unmasked patches. You pass this sequence through a decoder, which is again a fairly simple Transformer, defined very similarly to the Transformer used for the encoder; there's not much difference between the two. That gives you the reconstructions, and your goal is to minimize the L2 distance between the reconstructed patches and the masked-out patches, with the hope of learning something meaningful. This is what the process looks like; I have multiple examples from the MAE paper. The left column shows what's fed into the encoder, the middle column shows what the MAE predicts, and the right column shows the actual ground truth, and you can see it's pretty close: it's recovering most of the high-level features in the image. I'll leave this up for a few seconds so you can keep looking at it; meanwhile, any questions about the MAE model so far? The class token? Sure. In a Transformer, you have your sequence of patch tokens, and you also prepend an extra learned token called the CLS, or class, token. When you pass the whole sequence through the attention modules, the attention mechanism attends to this token as well, and as the model trains, the hope is that this class token attends to all the other tokens and captures information about the entire sequence. So you can use the class token for something like classification: you pull out just the embedding for the class token and use it as the basis for a classifier, because the hope is that this token carries information about the whole sequence. Okay, I thought I had two slides of these images; I guess not. So when MAE came out, like I said, it kind of blew all the previous models out of the water.
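Here is a very stripped-down sketch of that recipe (patchify, drop a random 75% of patches, encode only the visible patches, give the decoder mask tokens at the dropped positions, and take an L2 loss on the masked patches only). The real MAE uses full ViT blocks with positional embeddings; everything below, including the tiny Transformer layers and the shape bookkeeping, is illustrative rather than the paper's implementation.

```python
# Simplified MAE-style masking and reconstruction.
import torch
import torch.nn as nn

patch, dim, n_patches = 16, 128, (224 // 16) ** 2             # 196 patches per image
embed = nn.Linear(3 * patch * patch, dim)
encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 2)
decoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 1)
to_pixels = nn.Linear(dim, 3 * patch * patch)
mask_token = nn.Parameter(torch.zeros(1, 1, dim))

def mae_loss(patches, mask_ratio=0.75):
    """patches: (B, N, 3*patch*patch) flattened image patches."""
    B, N, P = patches.shape
    n_keep = int(N * (1 - mask_ratio))
    perm = torch.rand(B, N).argsort(dim=1)                     # random shuffle per image
    keep, masked = perm[:, :n_keep], perm[:, n_keep:]
    visible = torch.gather(patches, 1, keep.unsqueeze(-1).expand(-1, -1, P))
    enc = encoder(embed(visible))                              # only visible patches are encoded
    # decoder input: encoded visible tokens plus a shared mask token per hidden position
    dec_in = torch.cat([enc, mask_token.expand(B, N - n_keep, dim)], dim=1)
    pred = to_pixels(decoder(dec_in))[:, n_keep:]              # predictions at masked positions
    target = torch.gather(patches, 1, masked.unsqueeze(-1).expand(-1, -1, P))
    return nn.functional.mse_loss(pred, target)                # loss only on masked patches

fake_patches = torch.rand(2, n_patches, 3 * patch * patch)
print(mae_loss(fake_patches))
```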
The previous best was 84.1 percent, or rather 85.2 percent, top-1 accuracy using a ViT-Large, but you can get up to about 87.8 percent with an MAE, and I think to this date MAE is still the state of the art among models pre-trained in a self-supervised manner and then fine-tuned on ImageNet. And that's just the ImageNet result: MAE also performed really well when its representations were transferred to downstream tasks. That could be detection and segmentation on the COCO dataset, classification on the Places dataset or the iNaturalist dataset, or evaluating the representations on different variants of ImageNet. The ImageNet dataset has about a million images and a thousand classes, and there are variants of it designed to be much harder: datasets like ImageNet-C, which contains corrupted versions of ImageNet images, ImageNet-A with adversarial examples, and ImageNet-Sketch, which contains sketch versions of the images. The goal is to see, if you train a model on normal ImageNet, how well it performs on these out-of-domain images, and it turns out MAE performed the best there too; these datasets are very hard to do well on, and it beat the previous best results pretty much across the board. So why does it work so well? MAE uses masking as a pretext task, and we already saw that this masking task was very successful in NLP with BERT; that was actually the main inspiration for MAE and how the authors chose this particular objective. It also uses a very high masking ratio: if you remember the images I shared earlier, you mask out about three-fourths of the actual image and keep only about 25 percent of the content, which makes the whole reconstruction task more challenging because the model has less to go on. As I alluded to earlier, the more challenging you make the task, the better the representations tend to be: in contrastive learning you do that by making your batch really big, so you have a very high negative-to-positive ratio; in MAE you do it by masking out a huge portion of the input, so you're reconstructing from as little information as possible, and that harder task pushes the model to learn even better representations. The last part I just threw in there, so you don't have to worry about it too much, but MAE was designed in such a way that it's very scalable: you could train this model for up to a thousand epochs and it doesn't really take that long. Okay, so we have gone through these reconstruction-based methods and these contrastive learning methods. Which one is better? It's very hard to answer that. When you train these models in a self-supervised manner, there are two main ways to evaluate their performance on ImageNet: you can take the frozen models and train a linear classifier on top of them, a process called linear probing, or you can fine-tune the whole model with the ImageNet labels.
It turns out those actually lead to very different results. With linear probing, contrastive learning outperforms models like MAE, but if you allow the model to fine-tune and pass gradients all the way back, then MAE actually does much better. Why this is true is not something that's known; I think it's still an open area of research why the image reconstruction objective works better with fine-tuning but not with linear probing, and vice versa. So a natural question to ask is: we know each approach has its own strengths, so can you combine them in some way? The answer is yes. Google actually released a paper called CAN, which came out just last week; I actually put the slide in earlier today, because I think this came out last Sunday, though I might be wrong. What they do is train a single model that does both image reconstruction and contrastive learning at the same time. You take an image, generate two views of the same image, mask out the two different views, and add noise, so in a sense it also combines the denoising objective that we showed earlier. That's actually what CAN stands for: the C is for contrastive learning, the A is for masked autoencoder, and the N is for noise prediction. You take the noisy, masked patches, pass them through an encoder, take the embeddings and pass them through the MLP projection head that we had for models like SimCLR and BYOL, and run a contrastive loss on top of those; InfoNCE is one contrastive learning loss that I didn't talk about in this slideshow, but it's just another variant of the losses you've seen before. Your overall loss function is then a weighted combination of the InfoNCE loss, the reconstruction loss, and the denoising loss, and it turns out this model performs really well: on both linear probing and fine-tuning, it does better than the best image reconstruction model, which is MAE, and the best contrastive learning model, which is SimCLR, and you can see it achieves better results compared to all the previous models so far. Again, before I move on to the summary, any questions about any of the models that I've talked about so far in this lecture? Okay. These are not the only models I could have talked about; I only had fifty minutes, which I've already gone past, sorry about that, but there are way too many models out there. There are more contrastive learning models, there are also supervised representation learning models that work really well, and there are models that train representations that are good for control tasks, so if you do RL then those models might be more applicable for you. I didn't really have time to cover any of those, but I would encourage you all to check them out in your own time at some point. In this section we covered contrastive learning and image reconstruction; if you also go back to the pre-training lecture where I first introduced self-supervised learning, we went through some common-sense pretext objectives like rotation prediction and jigsaw solving, which also yielded pretty decent representations. However, we have barely scratched the surface; there is a lot more in this area than what I've shown so far.
This is actually a very hot research area right now; in fact, my own research is also in this area at the moment. You will have a homework that is based on this slideshow: your goal will be to take one of these models, like MoCo or MAE, and linear probe it for CIFAR-10 prediction. Again, these are all the papers for the models I showed earlier; I'd encourage you to check them out. And yes, that's it, that's all I have for this lecture. Any questions before I end? Okay, I'm going to end the recording then.
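For that linear-probing setup, a minimal sketch looks like the following: freeze a pre-trained backbone and train only a linear classifier on CIFAR-10. The choice of backbone here (a torchvision ResNet-50 with no weights loaded) is just a stand-in for whatever pre-trained checkpoint the assignment actually provides, and the hyperparameters are placeholders.

```python
# Linear probing: frozen backbone, train only the linear head on CIFAR-10.
import torch
import torch.nn as nn
import torchvision
from torchvision import datasets, transforms

backbone = torchvision.models.resnet50(weights=None)   # load your pre-trained weights here
backbone.fc = nn.Identity()                             # expose the 2048-d features
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

probe = nn.Linear(2048, 10)                              # the only thing we train
opt = torch.optim.SGD(probe.parameters(), lr=0.1, momentum=0.9)

tfm = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])
train_set = datasets.CIFAR10("data", train=True, download=True, transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

for images, labels in loader:
    with torch.no_grad():
        feats = backbone(images)                         # frozen features
    loss = nn.functional.cross_entropy(probe(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    break   # one illustrative step
```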
MIT_18100A_Real_Analysis_Fall_2020
Lecture_18_Weierstrasss_Example_of_a_Continuous_and_Nowhere_Differentiable_Function.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIQUEZ: So we're going to continue with our discussion of the derivative. So now, let me recall the definition we introduced at the end of last time of the derivative. So let I be an interval, meaning it could be open, closed, it could go out to plus infinity, it could go out to minus infinity. But you know what an interval is. And let's take a function from that interval to R. So we say, f is differentiable at a point c in I. If this limit of f of x minus f of c over x minus c, this difference quotient, if this limit exists. If this limit exists, we also denote it by f prime of c. And last time, we showed that-- we gave a simple example, that from last time, that if x equals alpha times x to the n, then this function is differentiable at every c, and its derivative is equal to n times alpha x c to the n minus 1. Now, let's state this in an equivalent way using sequences. Remember, we had this characterization of limits of functions. And the function we're looking at is f of x minus f of c over x minus c. We had this equivalence between limits and limits of sequences from last time, that the limit as, let's say, g of x as x goes to c equals L, if and only if for every sequence x sub n converging to c g if x sub n converges to L as n goes to infinity. So we can restate what it means for a function to be differentiable at a point and its derivative to be L, say, if and only if for every sequence x sub n, with x sub n not equal to c for all n, and converging to c, we have that L is equal to this limit as n goes to infinity of now this sequence of numbers, f of xn minus f of c over x of n minus c. So today, the theme of today is the connection between differentiability and continuity. And we have a very easy implication, which is that if a function is differentiable at a point, then it must be continuous at that point. So that's a statement of this theorem, if f going from an interval to R is differential at c, then f is also continuous at c. So let me also add in here that continuity at the point c in an interval is equivalent to saying that the limit as x goes to c of f of x equals f of c. Now, this is just a subtle point that I want to make, is that for an interval, open, closed, half open, half closed, whatever, any point in that interval is a cluster point of that interval. And therefore, this definition I made up there is actually meaningful. And for continuity, something being continuous at c is equivalent to-- if c is a cluster point, which I just said it is-- it always is, this limit equals the function evaluated at the point. So that's just kind of a subtle comment I want to make. So how do we prove this? We write the limit as x goes to the c of f of x as in this way. So essentially, what I do is, I add and subtract f of c. And then, the part that is f of x minus f of c, I multiply by x minus c and divide by x minus c, which is perfectly fine. Because remember, for a limit, I'm never actually looking at points where x equals c. So that's fine. And so, I get this expression here. Now, what we know about limits, is that the limit of the sum is the sum of the limits as long as all these limits exists. And the limit of the product is a product of the limits, again, assuming all of these limits exists. And the limit as x goes to c of everything you see here do exist. So as x goes to c, this thing in brackets, or in the-- I guess these are brackets. I don't know what those are called. This limit here is f prime of c. 
As x goes to c of x minus c, that's just 0. And f of c is just a constant. There's nothing changing with x. So the limit of a constant is that constant. So I just get f of c. So the limit of this whole expression is f prime of c times 0 plus f of c, which equals f of c. So we've just proven that the limit as x goes to c of f of x is equal to f of c, which is what we wish to prove. And therefore, a function which is differentiable at a point must be continuous at that point. So as beginning math students, we're learning proofs, but we're also learning questions, what types of questions to ask. And whenever you come across a new theorem that has one implication, one hypothesis, one conclusion, then you should ask yourself, does the converse hold? Does the conclusion also imply the hypothesis? So we've shown that f differentiable at c implies f is continuous at c. So does the converse hold? Namely, if f is continuous at a point, does this imply that f is differentiable at that point? And as you can see already there, the answer is, no. I think you've probably covered this in calculus. But what's the function that gives this counterexample that is continuous at c but not differentiable at c? Well, let's take, for example, c equals 0. And let's look at the function f of x equals the absolute value of x. In this function here, this is continuous at every point. So in particular, at c equals 0. But it's not differentiable at c equals 0. And how do you come up with-- and how do you do that? How do you show something's not differentiable? The simplest way is to use this remark up here by finding a sequence converging to c, so that the difference quotient does not have a limit as n goes to infinity. So that's what we'll do here is, we'll find a sequence x sub n, with x sub n not equal to 0 for all n converging to 0, such that this limit does not exist. Rather than write that, because the limit does not exist, it doesn't make really sense for me to even write that. Let me write this as saying, as the sequence is divergent. Now, how are we going to come up with this sequence, x of n, so that the difference quotient does not converge? Well, what's the logic? So for f of x equals the absolute value of x, it looks like this. It looks like x-- it's equal to x over here minus x over here. So if you were to formally differentiate-- well, I mean, you could actually differentiate to the left of 0, you get the derivative is equal to minus 1. To the right, the derivative is 1. So that kind of suggests maybe there's some funny business going on at 0. So let's look at a sequence which alternates between being negative and positive, but it's converging to 0, and see if that sequence will provide us with this desired sequence which results in the difference quotient being divergent. So let's just take a guess. Let xn be minus 1 to the n over n. It doesn't have to be this one, just something that alternates back and forth would be enough. So this could be n squared here, or 2n, or n to the 2020, whatever you like. Then, this is clearly always non-zero. And it converges to 0. And of course, why it converges to 0? I mean, you could prove this by epsilon m definition, but also if I take the absolute value, that's less than or equal to 1 over n. And this one we know converges to 0, 0. So by squeeze theorem, the thing in the middle must converge to 0. And we're done. So we have this sequence x sub n equals minus 1 to the n over n, which is alternating back and forth. 
And we hope it's going to result in this difference quotient being divergent as n goes to infinity. And so, we compute that. Let's look at the difference quotient f of x sub n minus f of 0. F of 0, which is 0. So I just get f of x sub n, which is the absolute value of minus 1 to the n over n over x sub n, which is minus 1 to the n over n. This is just equal to-- so this is just equal to 1 over n over minus 1 to the n over n, which equals minus 1 to the minus n. But minus 1 to the minus n, this is equal to the same thing as minus 1 to the n. So this sequence here of difference quotients, so f of xn minus f of 0 over xn minus 0, this is just equal to the sequence minus 1 to the n, which we know is divergent, doesn't have a limit. That's one of the first examples of a sequence we proved does not converge. So in summary, this function f of x equals the absolute value of x is continuous at c equals 0, but not differentiable at c equals 0. So now, a natural question that people asked is, so a function being continuous at a point does not necessarily imply that it's differentiable. But let's take a function which is continuous on the real number line. Is there a point where it's differentiable? So it's already on the board. But this is not completely a crazy question to ask. For example, let's go back to f of x equals the absolute value of x. This function is differentiable everywhere except 0. So there's lots of points where this function is differentiable. It's differentiable for c positive and c negative. So for this continuous function, there does exist points where it's differentiable. And you can imagine trying to draw any kind of curve that you can on a piece of paper. And you can probably find a point on that curve that has a tangent. I mean, if you sit there with a pencil and then try to draw something very jagged, I mean, there are still going to be little sections of your jagged curve where it has a tangent. So I imagine this is why people thought this was the case, that a continuous function has to have at least one point where it's differentiable. And Weierstrass, the godfather of analysis, said, no. He came up with a whole class of examples of functions, which are continuous on the real number line, but are differentiable nowhere. And this was a really surprising set of examples and result to the community. And to me, it's also one of the few results in this class that you really didn't see in your calculus class. So we're going to go through this example. Although in most analysis classes, it's reserved for later. But I think we can do it now. And so. What we're going to prove is-- so we're going to construct a function which is continuous everywhere but is differential nowhere, so a continuous nowhere differentiable function. And I'll even write down the function for you. So what's the idea that Weierstrass had? It was, let me-- well, I'm not going to write down the function just yet. But the idea of the constructing such a function is that it should be highly oscillatory. Imagine, again, you're trying to draw a picture of the graph of a function that's nowhere differentiable. You would sit there and break your pencil trying to draw just a highly oscillating function so that it never has a tangent. And that's his idea, is to build a function which is highly oscillatory but too oscillatory so that it's still continuous at every point. 
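[Editor's note: as a quick aside before the construction begins, here is a small numerical check of the absolute-value example just finished. The sequence x_n = (-1)^n / n is the one chosen in the lecture; the script is only an illustration, not part of the argument.]

```python
# Difference quotients of f(x) = |x| at c = 0 along x_n = (-1)**n / n.
# The quotients alternate between -1 and +1, i.e. they form the divergent
# sequence (-1)**n, exactly as computed on the board.
def f(x):
    return abs(x)

c = 0.0
for n in range(1, 9):
    x_n = (-1) ** n / n
    print(n, (f(x_n) - f(c)) / (x_n - c))
```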
So to start with the construction of this function, which is continuous but nowhere differentiable, we're going to need a few simple facts to start off with. So again, let me write out-- state our goal. We're going to construct a continuous function from R to R, which is nowhere differentiable Not differentiable, it's differentiable at no point. So let's start off with a few simple facts about-- so I said that we're going to build a function which is oscillating quite a lot. There's two functions you know that oscillate cosine and sine. So those are going to be our building blocks. We'll choose one of them. Let's choose cosine, say, as to be our building block. But anyway, so let's start off with some elementary facts about cosine. So first is that for all x, y, and R, cosine x minus cosine y is less than or equal to the absolute value of x times x minus y. The second is that for every real number c, and for all k natural number, there exist a y in c plus pi over k, c plus 3 pi over k, such that-- and let me label this as theorem one-- such that cosine of kc-- that's funny-- minus cosine of ky is greater than or equal to 1. So both of these are simple facts about cosine. Really, this one just follows from the angle sum formula. This one follows from the periodicity of cosine. But so, why are these true? So first off, we did prove-- and maybe I should have used sine instead of cosine in all of this. But if you recall from our continuity section, we proved that for all x and y, sine x minus sine y is less than or equal to the absolute value of x minus y for all x and y. And there was a simple relation between cosine x and sine of something. And that's just a shift by pi over 2. Then cosine of x minus cosine of y, this is equal to sine x plus pi over 2 minus sine y plus pi over 2. And if you like, let me instead of using x and y here, let me use a and b. So then, this is less than or equal to x plus pi over 2 minus y plus pi over 2 using this inequality. So that proves number one. For number two, it's just a simple fact about cosine being 2 pi periodic. So function g of x equals cosine k of x. This is also periodic. But now, what's the period? It's 2 pi over k. Now, this interval has link 2 pi, except it's missing the point c plus pi over 2 and c plus 3 pi over k. Thus, if I look at the image of-- OK. Then, so this function g of x equals cosine k of x. If I look at the image of this interval, this will contain all of the real numbers between minus 1 and 1. So this is a period. This is a interval of link 2 pi over k. Cosine kx is 2 pi over k periodic. So I should get everything between minus 1 and one except possibly the value at the endpoints. So take away-- and the value at the endpoints is minus cosine ck. Because cosine of k times c plus pi is equal-- pi over k is equal to cosine of kc plus pi. And cosine of something plus pi is equal to minus cosine of that thing. So the image of the set by cosine kc contains everything between minus 1 and 1, except for possibly the endpoint. This really only happens at if cosine of k times E equals 1. So OK. Then, if cosine of kc is bigger than or equal to 0, then we choose y in this interval c plus pi over kc plus 3 pi over k. Let's write it this way. So if this thing is bigger than or equal to 0, then we choose y-- why am I confusing myself? So this is simple enough. We choose y in this interval so that cosine of ky equals minus 1. And if cosine kc is less than 0, then we choose y so that cosine of ky equals 1. 
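[Editor's note: for reference while the argument is recapped below, here are the two statements of the theorem as they should read on the board. Part one is the Lipschitz bound for cosine; the spoken "absolute value of x times x minus y" is evidently a slip for |x − y|, which is what the proof actually uses.]

\[
\text{(1)}\quad |\cos x - \cos y| \;\le\; |x - y| \quad\text{for all } x, y \in \mathbb{R},
\]
\[
\text{(2)}\quad \text{for every } c \in \mathbb{R} \text{ and every } k \in \mathbb{N} \text{ there exists } y \in \Big[\,c + \tfrac{\pi}{k},\; c + \tfrac{3\pi}{k}\,\Big] \text{ with } |\cos(kc) - \cos(ky)| \;\ge\; 1.
\]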
So again, here's-- so forget-- just to get an idea. So I'm quibbling over a minor point, the fact that we have to take away possibly the point where-- possibly the value of cosine kc. Anyways, that's erase this for a minute. And let's imagine that the range of this guy-- I mean, it's a 2 pi over k periodic interval. So this should basically cover all values between minus 1 and 1. So cosine of kc will either be plus or minus. So if it's not negative, then I can find a y in this interval, since it's 2 pi over k periodic. So that cosine of ky equals minus 1. And since this is not negative and this is equal to minus 1, the difference between these two in absolute value will be greater than or equal to 1. Now, if cosine of kc is less than 0, then again, since this interval contains minus 1 to 1, it's 2 pi over k periodic, we choose a y so that cosine of ky equals 1. And then, this is negative. This is 1. So the difference between something negative and 1 is greater than or equal to 1. And that's the proof. I made a kind of a mess of it. But this is very easy to understand if you just draw a picture really. So let's continue. And again, what these two parts of this theorem say is that cosine can be quite oscillatory if you insert a k here. Because the difference between this interval is actually quite small. The length of this interval is 2 pi over k. If k is very large, that's a very small interval. Yet somehow, we can find two points which differ by at least 1 in value if we plug them into the function. But cosine is not too wild, because it satisfies this bound. So these two are kind of-- they're going to be the ingredients in that idea I said at the start, that we're going to build a function which oscillates, which is quite oscillatory, but it's not too oscillatory that it's still continuous. And we're going to build our function out of cosine kx, where k is going to be changing. So I'm going to need one more very simple fact. So this one, I won't mess up too much. Which is the following. For all a, b, c, and R, the absolute value of a plus b plus c, this is bigger than or equal to the absolute value of a, minus the absolute value of b, minus the absolute value of c. And the proof of this is just to use the triangle inequality twice. We have the absolute value of a. This is equal to a plus b plus c, minus b plus c. And this is less than or equal to, by the triangle inequality, the absolute value of a, plus b, plus c, plus the absolute value of minus times b plus c. But that minus goes away with the absolute value. And then, I use a triangle inequality one more time here to get-- and then if I subtract-- oh, I didn't want to do that. And so, if I subtract this side over to the other side of this inequality, I get the statement of the theorem. So now, we're going to introduce-- call this theorem two-- now let me introduce the guest of honor. So first, I have the following claim. For all x in R, the series given by sum from n equals 0 to infinity-- and instead of n, I'm going to use k, cosine 160 kx. So here's our very oscillatory guy over 4 to the k is absolutely convergent. So for each x in R, this series converges absolutely. So it spits out a real number. So I can define a function in terms of this series. Let f from R to R be defined by a prime-- f of x is just the number that I get when I stick in x to the series. Which is meaningful, because for all x, this series converges absolutely. So this is just a function. I put in x. I get out a real number. 
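[Editor's note: since the function has just been written down, a small numerical sketch may help build intuition. Only the first few terms of the series are used, and the cutoff terms=6 is an arbitrary choice; floating-point cosine loses accuracy for very large arguments, so this is an illustration of the partial sums, nothing more.]

```python
import math

# Partial sums of the lecture's series f(x) = sum_{k>=0} cos(160**k * x) / 4**k.
# The tail beyond `terms` is at most sum_{k>=terms} 4**(-k), so the partial sum
# is already close to f(x); every value lies in [-4/3, 4/3].
def f_partial(x, terms=6):
    return sum(math.cos(160 ** k * x) / 4 ** k for k in range(terms))

for x in [0.0, 0.001, 0.002, 0.003, 0.004]:
    print(x, round(f_partial(x), 6))
```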
Then the claim is, f is bounded and continuous. So this is going to be our function, which is continuous but nowhere differentiable. As you can see, it's built out of a bunch of very oscillatory functions, cosine of 160 kx, this is just a really big number here. And as I said, when we were talking about this theorem here, each one of those pieces is very oscillatory. On a very small interval, it oscillates between two values that differ by at least 1. And so, all of these guys are oscillatory. And they're oscillatory on smaller and smaller intervals. And somehow, we're adding them all up in a way to get a function that's oscillatory on arbitrarily small intervals. And therefore, it will not be differentiable. So the proof of one is very easy. We just use the comparison principle. Cosine of 160 k times x over 4 to the k, this is less than or equal to-- because cosine of no matter what you plug in is bounded by 1. This is always bounded by 1 over 4 to the k. And therefore, by the comparison principle, we get that this series is convergent. And therefore, the original series is absolutely convergent. So each of these is bounded by 1 over 4 to the k. This series converges. This is a geometric series. Therefore, by the comparison principle, this series converges. So now, we have this function defined by this series. Note, this is not a power series, because a power series involves polynomials in x. This is cosine of x, or a number times x. So let's show it's bounded. The same proof actually that we gave here shows it's bounded. Let x be in R then, f of x equals-- this is equal to the limit as n goes to infinity of this sum. And now, for our absolute values, this limit pulls out. Whenever the limit exists, the limit of the absolute value equals the absolute value of the limit. And this is less than or equal to the limit as n goes to infinity of bringing the [INAUDIBLE].. So by the triangle inequality, this is just a finite sum. So I can bring the absolute values in. And I get sum from a equals 0 to m of cosine 160 k, x over 4 to the k, absolute value. And this is less than or equal to limit as n goes to infinity of sum k equals 0 to m of 4 to the minus k, again, because cosine of anything is bounded by 1. And this equals 4/3. This is just a sum from k equals 0 to infinity 4 to the minus k. So this function is always bounded by 4/3. So the function is bounded. Let's now show it's continuous. So to show a function's continuous, remember, we have that other characterization of continuity that a function is continuous at a point if and only if for every sequence converging to that point f of xn converges to c. So before I start writing all this down, let c be in R, and let xn be a sequence converging to c. So what we want to show is that limit as n goes to infinity of f of xn minus f of c in absolute value equals 0. That's the same as saying the limit as n goes to infinity of f of xn equals f of c. Now, f as a bounded function. So this sequence, f of xn minus f of c in absolute is a bounded sequence. So it has a lim sup. And we did an exercise on the assignment that the limit of a sequence equals 0 if and only if the lim sup equals 0. So equivalently, we'll show that lim sup of f of x sub n minus f of c equals 0. This thing always exists for a bounded sequence. Which is one of the reasons which is what makes lim sup so useful, is that they do always exist. So if we show that this lim sup equals 0, then this is equivalent to showing this limit equals 0. 
Again, this was an exercise, where you can take it as the lim inf of something that's non-negative is always bigger than or equal to 0. So if we show this is equal to 0, we would have 0 is less than or equal to the lim inf of this thing, is less than or equal to the lim sup, which equals 0. And therefore, the lim inf equals the lim sup equals 0. So that's another way of saying that this is equivalent to this. So we're going to show this at the lim sup of f of x sub n minus f of c equals 0. But we don't-- so that might be a little tough. But what we can show, and what we will show, is we'll give ourself a little room. We'll show that for all epsilon positive, the lim sup of f of x sub n, minus f of c, which is a non-negative number, is less than epsilon. So this is a fixed number, non-negative number, which is always smaller than-- put less than or equal to there-- which is smaller than any number that I want. And therefore, it has to be 0. And proving thus, lim sup. So just to recap, we want to show-- we have a sequence converging to c. We want to show that f of xn converges to f of c. Another way of stating that is that this limit of the absolute value between-- of the difference between these two converges to 0. That's again a equivalent way of stating the limit. And in another assignment, we proved that for a sequence of non-negative numbers, or if you like, just for a sequence of numbers, it converges to 0 if and only if the lim sup of the absolute value converges to 0. But this is a sequence of non-negative numbers, this absolute value of f of xn minus f of c. So this is equivalent to this. So this thing we want to show is equivalent to this thing right below it. Now, rather than show directly that this limit lim sup equals 0, we're going to show that for all epsilon positive it's less than or equal to that small number. And therefore, proving that it's 0, because it's a non-negative number smaller than every positive number. So this is our goal, to show this lim sup is less than epsilon. So let epsilon be positive. We're now going to prove that that lim sup is less than epsilon. So first off, let m0 be a natural number such that the sum from k equals m0 plus 1 to infinity of 4 to the minus k is less than epsilon over 2. So this series here, right this is a convergent series if I go from k equals 0 to infinity. And we have this Cauchy criterion for convergent series, which can be equivalently stated as for all m0 and natural number, there exists-- or for all epsilon, there exist-- so first off, rather than go through all that, we can actually just compute this. That the left-hand side equals 4 to the minus m0 minus 1, times sum from L equals 0 to infinity, 4 to the minus L, which equals 4/3. 4 to the minus m0 not minus 1, which equals one 1/2, 4 to the minus m0. So if m0 is chosen very large-- this is 4 to the minus m0. So this is-- I can write this as 1/2 over 1/4. And as long as m0 is chosen very large, I can always make this very small. That's the left side over there. So I can always find m0 so that this is the case. And now, we compute the lim sup of f of xn minus f of x. We split this up into two pieces. This is equal to lim sup then of two parts, sum from k equals 0 to m0 of cosine of 160 k xn minus-- so each of these is defined in terms of a sum. So I'm going to break the sum up to m0. And then, everything past m0, this is equal to 1 over 4k times cosine 160 k xn minus cosine 160 kx, plus sum from k equals 0 to m0 plus 1 to infinity, 1 over 4 k, same thing, absolute value. 
And now, we use the triangle inequality and the fact that lim sups preserve inequality. So the absolute value of this thing is less than or equal to the sum of the absolute values. And the lim sup of those sequences is less than or equal to the sum of the lim sups. That was another exercise from an assignment, or you did something similar with the lim int. So this is less than or equal to lim sup of-- now I use the triangle inequality, sum from k equals 0 to m0, 4 minus k, cosine 160 k x of n minus cosine 160 k-- oh dear. So I've been writing x. I meant to write c. Sorry about that. c, c, c, c. OK. And then, same thing in these brackets here. Now, [INAUDIBLE] might keep that. So we have that this lim sup of f of x sub n minus f of c is less than or equal to the lim sup of this guy, plus the lim sup of this guy. And now, I'm going to use the triangle inequality, again, bringing the absolute values inside of the sums, which is perfectly valid, even for the infinite sum by the same argument I used basically over there. This is less than or equal to the lim sup as n goes to infinity of minus cosine of 160 kc, and sup plus lim sup of sum from k equals m0 plus 1 to infinity of, now, the absolute value of the same thing. Now, I applied the triangle inequality to that. Now, cosine of anything is always bounded by 1. So this is bounded by-- first off, let me come back to this second one. m0 is fixed. Remember, this is not changing within. It's just fixed. It depended only on epsilon. So m0 is fixed. And this is less than or equal to lim sup of int. Now we use what we know about cosine, that cosine of something minus cosine of something else is bounded by the difference in the argument. So this is bounded by minus 4 to the minus k times 160 k xn minus 160 kc, plus lim soup of k equals m0 plus 1 to infinity, 4 minus k times 2. As this is bounded by 1, that's bounded by 1. Now, there's no n left here. So this is really just now equal to that. And we chose m0 so that this quantity here is less than epsilon over 2. So times 2, this is less than epsilon. So this is less than or equal to lim sup, sum from k equals 0 to m0 of 40 k. Now, again, this is just a fixed number. The lim sup is in n. This is just a fixed number times x sub n minus c. The difference in this is just equal to 160 k times 4 to the minus k, that's 40 k. And then, we chose m0 so that this quantity over here is less than epsilon. And now, as n goes to infinity, x sub n is converging to c. So this thing here converges to 0, equals times 0 plus epsilon equals epsilon. So to me, this is a real first proof of analysis, where you're using everything that you've been exposed to up to this point to prove a really deep theorem. So go through this slowly. But there's not too much-- the estimates are not all that tricky once you have them in front of you. So we've shown that for all epsilon, this lim sup is less than or equal to epsilon. Because remember, we ended with epsilon here. But we started off estimating this lim sup here. Now we're in a position to prove the final theorem that we want to prove. And this is due to Weierstrass. The function which we've been studying, f of x equals sum from k equals 0 to infinity cosine 160 kx over 4 to the k is nowhere differentiable. So with everything we've done so far, so we this function is continuous. We've proven that. So this theorem provides you with an example of a function which is continuous but nowhere differentiable. 
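[Editor's note: before the proof of this last theorem, the continuity estimate completed a moment ago can be gathered into one display. This is a compact restatement of the board work, with 160^k / 4^k written as 40^k.]

\[
|f(x_n) - f(c)| \;\le\; \sum_{k=0}^{M_0} 4^{-k}\,\big|\cos(160^k x_n) - \cos(160^k c)\big| \;+\; 2\sum_{k=M_0+1}^{\infty} 4^{-k}
\;\le\; \Big(\sum_{k=0}^{M_0} 40^k\Big)\,|x_n - c| \;+\; \varepsilon .
\]

Since x_n converges to c, the first term tends to 0, so the lim sup of |f(x_n) − f(c)| is at most ε for every ε > 0, which forces it to be 0 and gives f(x_n) → f(c).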
And with what we have on the board, namely the key parts are going to be what's in that first theorem there. And with this triangle inequality we have over here, we'll be able to prove this. So in fact, let me give myself a little space. And I'm going to state theorem two from over there, because I need some space to right in a minute. For all a, b, c, the absolute value of a, plus b, plus c is bigger than or equal to a minus the absolute value of b minus the absolute value of c. So let me re-summarize. Since I've summarized it already. What we've done up to this point, we've shown that this function here is well-defined. Absolutely, the series is always absolutely convergent. And therefore, this function is bounded, and/or we also proved that it's bounded and continuous. That was the previous theorem. So this function is bounded and continuous. And we're going to prove it's nowhere differentiable. And again, what's the idea? The idea is that we've built it out of functions which are highly oscillatory at smaller and smaller scales. So somehow this function is highly oscillatory at every scale. And if you have a function which is highly oscillatory of every scale, if some of you have heard of Brownian motion, which is a function which is-- I mean, which is a path which is highly oscillatory, then that function will not be differentiable anywhere. So proof, let c be any real number. And what we're going to do, just as in when we looked at f of x equals the absolute value of x, we're going to construct or find a sequence x sub n, such that x sub n does not equal c for all n, xn converges to c. And the sequence, f of x of n, minus f of c, over x sub n minus c, in fact, is divergent. But we'll go even further, is unbounded. And therefore, the sequence cannot converge. And therefore the function is not differentiable at c. So we're going to use theorem one to find the sequence. By theorem one, namely part two, for all N, a natural number, there exists the x of n, such that what? x sub n is in this interval of the form. So c plus pi over 160 n is less than x, is less than c plus plus 3 pi over 160 n. And cosine of 160 n x of n minus cosine of 160 nc is bigger than or equal to 1. So let me call these two properties a and b. So another way of writing a is, we could have instead written, by subtracting c across the board, that x sub n minus c is between pi over 160 n and 3 pi over 160 n. So by a, for all n, x sub n does not equal to c, because their difference is bounded below by a positive number. I mean, their difference is actually positive, and it's non-zero. So-- and by the squeeze theorem, again, if you like putting the c on both sides, we get that x sub n converges to c. So this will be our sequence-- our bad sequence, for which the difference quotient is unbounded. So in fact, let me write this out a little bit more. And x sub n minus c, which is equal to x sub n minus c, because this is positive, is less than 3 pi over 160 n. And this goes to 0. Therefore, the absolute value of xn minus c converges to 0. And therefore, x sub n converges to c. Now, to lessen the number of times I have to write cosine 160 k, let fk of x be cosine 160 kx over 4 to the k. So f of x is equal to the sum of k equal to 0 of fk of x. And now, what we're going to do is, we're going to find a lower bound on the absolute value of f of xn minus f of c over x sub n minus c. 
So now, find a lower bound on-- if I can find a lower bound on this absolute value, which is getting large, as large as I wish, then that proves that this sequence is unbounded, and I'm done. So let's look at the absolute value of f of xn minus f of c. Let's just write this in a few different ways. Or not a few different ways. But let's split it up as a sum. So all of these guys are equal to a sum of f sub k. So I have the n'th one. So remember, f sub k, this is just one of these building blocks. Plus sum from k equals 0 to n minus 1, plus I put x here. I meant to write c. And so, I'm going to let a sub n be this first number, b sub n be the second number, including the sum. And this will be c sub n. So this is equal to a sub n plus b sub n plus c sub n. And now I'm, going to use that triangle inequality I proved. Then f of x sub n minus f of c is greater than-- which is equal to a sub n plus c sub n plus c sub n is bigger than or equal to a sub n, minus b sub n, minus c sub n. Now, a sub n, this is going to be-- this is just the difference between these guys. This is bounded below by 1. So it's kind of large. Not 1, but 1 over 4 to the n. These other guys we'll prove are very small compared to the a sub n. And then, this lower bound-- this upper bound we have on x sub n minus c will be the nail in the coffin, as we say. So our goal is now to estimate from below a sub n-- remember, we're trying to find a lower bound for this quantity over x sub n minus c. So we need to estimate these things from below, or this sum from below. That means, we need to estimate this from below, and then b sub n and c sub n, since they have minus signs, from above. Now, by b, a sub n, which is equal to 4 to minus n, cosine 160 nx of n, minus cosine 160 to the nc. This is-- remember how we chose these x sub n's. This is bounded above by 1, or bounded below by 1. So this is bigger than or equal to 4 to the minus n. So that's a sub n. Now, let's look at how big b sub n is. Now, we want to bound this from above. Because when it hits the minus sign, this will flip the inequality, and we'll have that this would be bounded from below by something. So the absolute value of B7, that's equal to sum from k equals 0 to n minus 1 of fk of x of n minus f of c, k of c. And bringing the absolute values inside by the triangle inequality, this is less than or equal to sum from k equals n minus 1 of f of fk x of n minus fk of c. And now, so this is equal to sum from k equals 0 to n minus 1, 4 to the minus k, cosine 160 k, x sub n minus cosine of 160 kc. And now we use theorem one, number two-- or number one. I'm sorry, theorem one part one. So this is theorem 1, 1. The difference in these is bounded by 160 k times the difference in those two. So that's-- and now, these things we can sum up in closed form. Well, actually, x sub n minus c, we proved is less than 3 pi over 160 n. So this is less than a sum from k equals 0, n minus 140 k. And this equals 3 pi over 160 n. Now, sum from k equals 0 to n minus 1 40k, we used this formula for summing a geometric sum. This is equal to-- what was this equal to? 40 to the n minus 1 over 39. And which is less than-- OK, so take away the one. This is 1 over 13, four to the minus n times pi. And that's less than-- pi is less than 4. So plus 1. So let me summarize this. It is less than or to 1 over 13 fours and minus n plus 1. Now, for the last box, this is less-- the absolute value of c sub n. And absolute value is less than or equal to-- now, we just kind of do a brutal estimate. 
We bring the absolute values inside. So this is equal-- so just as we had before, with the b sub n's, except now we're summing from k equals n plus 1 to infinity. We brought the absolute values inside. And then, let's use the triangle inequality on that. This is-- remember, fk is equal to cosine of 160 k times x over 4 to the k. So that's no matter what you plug-in, that's bounded by 1 over 4 to the k. And now this we can actually sum. This is equal to 4 to the minus n minus 1, times sum from L equals 0 to infinity of 4 to the minus L. What does this equal? This equals 2 times 4 to the minus int minus 1 times 4/3 equals 2/3 4 to the minus n. So, my board work's getting a little shoddy, but it'll be OK. I.e. we've proven that the absolute value of c sub n is bounded by 2/3 4 to the minus n. So combining everything that we've done. We've shown that f of x sub n minus f of c. Which remember, is bounded below by a sub n-- remember, so the first box is bounded by 4 to the minus n. So first off, everything, the a sub n's, b sub n's, c sub n's, they have some forward to the minus n involved. What happened to my estimate for b? Oh, covering it up-- 4 to the minus n. So this is one coming from the a sub n's, and minus 4 over 13 coming from the b sub n's, minus 2/3 coming from the c sub n's. And if you do the arithmetic, this is 4 to the minus n, 1 over 39. So f of x sub n minus c is bounded from below by 4 to the minus n times 1 over 39. Now I just divide by x sub n minus c, which is bounded above by 3 pi over 160 n. And therefore, when I take reciprocals, I get that. I get that the absolute value of f of x of n minus f of c over x of n minus c is bounded from above by 1 over x of n minus C times 4 to the minus n, 1 over 39, which is bounded below by now inserting this estimate here. I get 40 n over-- I think that's 1017 pi. Yes. And therefore, the absolute value of the difference quotient is bounded below by 40 to the n over some fixed number. And this thing on the right-hand side is unbounded, as in, I guess, as in varies because from 1 to infinity. And therefore, the absolute value of this difference quotient is unbounded, which finishes the proof. So we've done some things, maybe a few things that you haven't seen in calculus course. Definitely haven't seen this, most likely. And really, this is the first result that involves a lot of math that we've covered up to this point to prove a very non-trivial and deep theorem that differentiability is a bit of a miracle. It's a miracle when it happens. OK, we'll stop there. .
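[Editor's note: as a closing note on the computation, the three estimates can be assembled in a single display, reconstructed from the board work. The constant in the last line works out to 117π; the transcription renders it as "1017 pi".]

\[
|a_n| \;\ge\; 4^{-n}, \qquad |b_n| \;\le\; \tfrac{4}{13}\,4^{-n}, \qquad |c_n| \;\le\; \tfrac{2}{3}\,4^{-n},
\]
\[
|f(x_n) - f(c)| \;\ge\; |a_n| - |b_n| - |c_n| \;\ge\; 4^{-n}\Big(1 - \tfrac{4}{13} - \tfrac{2}{3}\Big) \;=\; \tfrac{1}{39}\,4^{-n},
\]
\[
\left|\frac{f(x_n) - f(c)}{x_n - c}\right| \;\ge\; \frac{\tfrac{1}{39}\,4^{-n}}{\,3\pi / 160^{\,n}\,} \;=\; \frac{40^{\,n}}{117\,\pi} \;\longrightarrow\; \infty .
\]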
MIT_18100A_Real_Analysis_Fall_2020
Lecture_7_Convergent_Sequences_of_Real_Numbers.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: OK, so last time, we were talking about sequences, and I introduced the notion of a limit of a sequence. So we say that x n converges to x if for all epsilon positive there exists an M, a natural number, such that for all little n bigger than or equal to capital M we have x sub n minus x is less than epsilon. So what this definition says is that given any little bit of tolerance, as long as I go far enough out in the sequence, the entries are getting within that tolerance to x. And when x then converges to x, we'll write x is equal to the limit as n goes to infinity of x sub n, or using this notation, x sub n arrow x. And so I've said before that if you ever have a reasonably complicated or interesting definition, you should try to come up with examples and negate it. So the end of last time, we saw a few examples of-- at least one example of-- a convergent sequence. Let's negate this definition because then we'll also come up with an example of a sequence which does not converge. And so the negation of this definition is that x n does not converge to x. What does this mean? So this is the actual negation part. If there exists an epsilon positive-- so whenever we negate a "for all," it becomes "there exists," and whenever we negate a "there exists," it becomes a "for all." So there exists some bad epsilon 0 positive such that for all natural numbers N, there exists an n bigger than or equal to M such that x n minus x is bigger than or equal to this bad epsilon 0 in distance. So we have a picture that goes along with convergence. We can have a picture that goes along with a sequence not converging to x. So this means if I go out within this bad epsilon 0, and if I go arbitrarily out in the sequence, I can always find an element x sub n which is not in that interval. And we use this negation to prove a certain sequence does not converge to x. But let's go back do another example of a sequence which does converge, so-- equals 0. So again, how does the proof go? You have to verify the definition of the limit. I mean, there's nothing else that we have to use right now. So let epsilon be positive. And now, off to the side over here, I'm going to do a little bit of work to show you how exactly in practice one would find such a capital M or choose a capital M to get that 1 over n squared plus 30n plus 1 would be less than epsilon for n bigger than or equal to capital M. So typically how this works is, unlike in calculus where you have some inequality that you want to get and then you solve for n, in analysis what's great is that you can replace that thing you want to make small by something simpler, and then solve that inequality. What do I mean? So what I want to do, I want to find M so that if n it is bigger than or equal to M, then-- so I'm saying the limit is 0, so I want to show 1 over n squared plus 30n plus 1 is less than epsilon. Now, I could try to solve this inequality for n, but this is a quadratic function and imagine I put 20 there, so I can't exactly solve exactly for n to guarantee this. But what's great about analysis is that I don't have to work that hard. I could start with this thing which I want to bound by epsilon, replace it by something bigger, and then find capital M so that that bigger thing is less than epsilon. So what do I mean? So let's start with this thing-- 1 over n squared plus 30n plus 1. So this one is only making things bigger. I'm going to do this very slowly. 
So this is less than or equal to this because for this, I've just added 1 to the denominator. And 1 over n squared plus 30n-- this certainly looks a little bit simpler to solve for n less than epsilon. But I can make it even simpler by dropping the n squared, because n squared is positive, so that's only making the bottom bigger. So I've 1 over 30n, and this is certainly less than or equal to 1 over n. So what does this computation show? This shows that if 1 over n is less than epsilon, then this implies that 1 over n squared plus 30n plus 1 is less than epsilon. If this thing is less than epsilon, then by this string of inequalities, this thing will be less than epsilon. So we choose capital M so that we get this. Now, when we actually write the proof, it'll look something like this. But the actual style of the proof is a little bit different. And if you didn't see this, it would just seem like I pulled this capital M out of nowhere, but this is the thinking that goes behind it. So let epsilon be positive. Now I'll tell you how to choose capital M. Choose M a natural number so that-- so we want it for all n bigger than or equal to capital M, 1 over n is less than epsilon. So this will certainly be true if n is bigger than or equal to capital M so that 1 over capital M is less than epsilon. And the reason we can choose capital M like this-- again, where is this coming from? It's coming from the Archimedean property of the real numbers. So we've chosen capital M. Now we need to show that this capital M works. Then for all n bigger than or equal to capital M, if I look at 1 over n squared plus 30n plus 1 minus 0, my proposed limit, this is equal to 1 over n squared plus 1. And this is less than or equal to, if I drop n squared and 1, 30n, which is less than or equal to 1 over n, which is less than or equal to 1 over capital M, which is less than epsilon. So something of a non-example-- let's show that the sequence minus 1 to the n DNC does not converge-- not Democratic National Committee-- does not converge. So this is just a sequence minus 1, 1, minus 1, 1, minus 1, and so on. So we have to show it does not converge, meaning for every x in R, we'll show that x n does not converge to x using the negation of the definition-- so proof. Let x be in R. And what do we want to show? Minus 1 to the n-th does not converge to x. And we'll use this definition. So this definition means we need to find a bad epsilon 0. And here's the kind of thinking that goes along with this-- there's 1, there's minus 1. This sequence just keeps alternating, so intuitively there's no way that every element of the sequence can be getting-- or every entry in the sequence can be getting-- close to some single x. So what's the idea? The idea is that the distance between these two guys is always 2. So somehow, the distance between every entry, or basically, some entry has to be within distance-- if this is within distance 1, say, to a given x, then this will be greater than or equal to 1 in distance to x. So you can imagine x is over here. Minus 1 is within distance 1 to x, but then 1 would be greater than distance 1 to x. So our bad epsilon 0 was going to be 1, as we'll see. Let epsilon 0 equals 1. That's our bad epsilon. So now we have to show for all capital M in the natural numbers, there exists a little n bigger than or equal to capital M so that we have that inequality. So let M be a natural number. Then what do we see? So then it says, they're exists an n greater than or equal to capital M. 
And I will show that n is, in fact, either capital M or capital M plus 1, then 2, which is the distance between minus 1 to the M and minus 1 to the M plus 1 is less than or equal to-- now using the triangle inequality, I add and subtract x and then use the triangle inequality. So I have the sum of two numbers is bigger than or equal to 2. And the only way that can happen is if one of these numbers is bigger than or equal to 1. If both of these numbers are less than 1, then the sum is less than 2, and I would have 2 is less than 2. That's not possible. So one of these numbers has to be bigger than or equal to 1. And therefore, I could take n equal to capital M if this is bigger than or equal to 1, or if this is bigger than or equal to 1, I take little n to be capital M plus 1. So that's the end of that. So that's good for examples for now. Let's prove a general theorem about convergent sequences-- namely, that if I have a convergent sequence, then it's bounded. And let me just give you a quick reminder about what being bounded means for a sequence, i.e., there exists some number such that for all n, x n is less than or equal to capital B. Now, as beginning students of math, what types of questions should we start asking ourselves? So one of the types, or a type of question that we should ask when we come across the theorem that is a one-way street, meaning if something happens then something else happens, is does the converse hold? Namely, if I assume x sub n is a bounded sequence, does this imply that x n is convergent? So if you ever hear the words, "does the converse hold," that's what they mean. Now, for this statement, this is false. If x n is bounded implies x n is convergent, that's false because we have an example right there of a bounded sequence. This sequence here, minus 1 to the n, is bounded since the sequence in absolute value equals 1 for all n, so we could take B equals 1. So minus 1 to the n is a bounded sequence, which is not convergent. So the converse does not hold. So let's prove the theorem. And let me draw a little picture to go along with it of what's going on. So here is 0, let's imagine. Let me make this picture a little bit bigger before I go into the proof. So let x be the limit. So this is not the proof yet. This is just more discussion. So if this x sub n is converging to x, then let's say I go out distance 1 to x. Then all of the x sub n's eventually have to land in this strip here. So x sub n, n bigger than or equal to some capital M. So I know that they're all within distance 1 to x. So in absolute value, they'd be bounded by, let's say for this picture, it'd be x plus 1. But if this was over there, capital, it would be the absolute value of x plus 1. So that takes care of all x sub n's with n bigger than or equal to M. And then there's only finitely many guys to deal with. Maybe there's a bigger one out here, x sub n minus 1 to deal with, and that's how we define our number, is by the absolute value of x plus 1 plus the absolute value of these finitely many. And then that would be something that's bigger than or equal to every element in the sequence. That's the picture that goes with it. How do we take that picture to proof? 
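[Editor's note: before the formal proof, here is a brief numerical look at the two sequences discussed above. The spot check over a finite range of n and the choice M = floor(1/ε) + 1 are illustrative, mirroring the bound 1/(n² + 30n + 1) ≤ 1/n used in the convergence argument.]

```python
# Example 1: x_n = 1 / (n**2 + 30*n + 1) converges to 0.  For a given eps,
# any M with 1/M < eps works, since 1/(n**2 + 30*n + 1) <= 1/n for n >= 1.
for eps in [0.1, 0.01, 0.001]:
    M = int(1 / eps) + 1                      # guarantees 1/M < eps
    worst = max(1 / (n**2 + 30*n + 1) for n in range(M, M + 1000))
    print(eps, M, worst < eps)

# Example 2: x_n = (-1)**n never settles down; consecutive entries always
# differ by 2, which is what rules out any candidate limit.
print([(-1) ** n for n in range(1, 9)])
```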
So suppose x n converges to x-- that's our assumption, and there exists a natural number M such that for all n bigger than or equal to capital m, x sub n minus x is within distance 1, or x sub n is within distance 1 to x, then for all n bigger than or equal to capital M, if we look at the absolute value of x sub n, add and subtract x, and now use the triangle inequality, this is less than or equal to absolute value of x sub n minus x plus the absolute value of x equals 1 plus x. So for all entries in the sequence past this point capital M, they're bounded in absolute value by 1 plus x. That's just a number. So then, we just have to take care of the 1 up to capital M minus 1 guys. We just have to find a number bigger than those guys and this number, and we'll have found a B. So then let's take B to be x sub 1 plus x sub 2 plus x sub capital M minus 1 plus this. Then for all n a natural number, the absolute value of x of n is less than or equal to B. Because if n is less than capital M, then its absolute value is bounded by one of these. Its absolute value is equal to 1 over these, and therefore, less than or equal to one of these plus some non-negative numbers. And if n is bigger than or equal to capital M, then we have this bound that we use. And this number is certainly less than or equal to capital B, again, because it's this number plus a sum of non-negative numbers. Now in general, there's no easily checkable criterion for a sequence to converge. All we can do is verify the definition. But there are some sequences which you can easily check or figure out if they converge just by telling if they're bounded. Those are what are called "monotone sequences." So this is, in some sense, the class of sequences that does kind of satisfy the converse of that theorem I stated a minute ago. So sequence x sub n is monotone increasing if for all natural numbers little n, x sub n is less than or equal to x sub n plus 1. So this means that x sub 1 is less than or equal to x sub 2 is less than or equal to x to 3, and so on. Sequence is monotone decreasing if we have the other inequality. So things are getting smaller, so x sub n is bigger than or equal to x sub n plus 1. And if we have a sequence which is monotone increasing or monotone decreasing-- so it's one or the other-- we say x n is monotone or monotonic. So it's simple enough to come up with examples of sequences which are monotone increasing, monotone decreasing, or neither. So 1 over n-- this is just 1, 1/2, 1/3, 1/4-- this is monotone decreasing. And if I take minus the sequence, then that reverses basically all of the inequalities, and I get a monotone increasing one. So then this is minus 1, minus 1/2, minus 1/3, minus 1/4, monotone increasing. And then something that's not is-- we've already come across that-- minus 1 to the n, and so on. And so the theorem about monotone sequences is that there is a simpler criterion than the definition for determining when they're convergent, or if you like, they satisfy the converse of this theorem up here. So a monotonic sequence is convergent if and only if it is bounded. So we proved a minute ago that convergent sequences are bounded, so for monotonic sequences, the converse holds. So let's do the proof of this. I'm going to do the proof for monotone increasing sequences, and as an exercise, I'll leave it to you to do the proof for monotone decreasing sequences. So suppose x sub n is a monotone increasing sequence. So there's two directions to prove convergence if and only if bounded. 
So we have one direction, meaning convergence implies bounded. This is just the previous theorem, where we proved that every convergent sequence is bounded, not just monotonic sequences. So the meat is in proving the converse direction. And in fact, we'll be able to pick out what the limit is of this sequence. So suppose x sub n is bounded. Then if I look at the set of entries-- so not the sequence but if I look at the set of values that this sequence takes-- so x sub n is just a natural number, this is now a subset of the real numbers. This is a bounded set, meaning it's bounded above and below because there exists some capital B so that x sub n is, in absolute value, less than or equal to capital B for all n. So that means x sub n is between B and minus B. All right, so what's the picture again that goes with this? So we have these x sub n's, x sub 1, x sub 2, x sub 3. We know that they cannot go past a certain B, and they're steadily increasing, but they can't keep strictly-- they can't increase without bound. Not only that, they're bounded by some number x, which I'll define to be the supremum of this set. And these guys are just getting, in essence-- so this is a picture, just me trying to explain to you what's going on. If x is the sup of this set of entries, then what do I know about x sub n? I mean x is that if I go a little bit to the left of x-- so let me draw this again. So let me back up a minute here. So since this set is bounded in R, it has a supremum in R. And what I claim is that this supremum is, in fact, the limit of this sequence. This is a supremum of a set. I'm saying it's the limit of the sequence. So what's the thinking here? So x is the supremum of all the entries in x sub n, so nothing's ever going to be bigger than some tolerance x plus epsilon. So we just need to worry about what's to the left of it. And we need to find a capital number M so that all the x sub n's are in this interval here, because we're trying to show that the x sub n's converge to x, the supremum of this guy. So let's walk through this by this picture. So x is the supremum. Nothing's bigger than x, so all of the x sub n's are to the left of x plus epsilon. So now we just need to worry about x minus epsilon. Now, since x is the supremum of this set, x minus epsilon can't be an upper bound for the set of entries of x sub n. So that means there has to exist some x sub M so that it's bigger than x minus epsilon. But now, this is where we use the fact that this is a monotone increasing sequence. Because then, if n is bigger than or equal to capital M, then x sub n is strictly to the right of x sub M, or it's equal to x sub M and still less than or equal to x because x is the supremum of all the entries. So then, for n bigger than or equal to capital M, they all have to lie in this interval. And that's why the sequence converges to the supremum of this set. So we're using, in a crucial way, the fact that this sequence is monotonic increasing. I'll prove this claim. Let epsilon be positive. Then since x minus epsilon is not an upper bound for this set, x sub n, there exists an M0, natural number, so that x sub n minus epsilon is less than x sub M0 is less than or equal to x. And we'll choose M to be this M0. Then for all n bigger than or equal to M, we have x sub n minus epsilon, which is less than x sub M0. And because n is bigger than or equal to capital M, and this is a monotone increasing sequence, x sub M0 is less than or equal to x sub n. 
And because x is a supremum of the set of all entries of this sequence, this is less than or equal to x, which is less than x plus epsilon. Or to summarize-- so this should not be x sub n, that should be x minus epsilon. Or x minus epsilon is less than x sub n is less than x plus epsilon. And that's the same as showing the absolute value of x sub n minus x is less than epsilon. Now, what's the change for monotone decreasing is that we'll be able to identify the limit of the sequence as being the imf, the infimum of this set. But I'll leave that to you as an exercise. So let's use this real quick to prove the limit of a little bit more interesting sequence than 1 over this one, which is reasonably interesting, I guess. But typically in math, we just don't prove theorems for the sake of proving theorems. Typically, there's some concrete reason we do things. There's some concrete sequence we're trying to prove converges, or has a certain property, or doesn't have a certain property. And so the basic one is if you like a geometric sequence. So if c is a positive number between 0 and 1, then we'll prove that this sequence c to the n is convergent and it converges to 0. And we'll prove if c is bigger than 1, then the sequence c to the n is unbounded. In particular, it can't converge. So let's go in reverse order. Let's prove that if c is bigger than 1, then this thing is unbounded. It doesn't require 1, but it's shorter than 1, and we can go ahead and do it. And we'll use this fact that we proved, I think, in the first or second lecture using induction. So what does it mean to show something is unbounded? Again, we get to use our negation skills. It means that-- what do we want to show-- for all B bigger than or equal to 0, there exists an n, a natural number, so that c to the n is bigger than B. Now, how are we going to find this n? This seems like a complicated thing. So let's replace it by something simpler. Again, this is analysis, which means we get to use our wits, and try to replace complicated things by simpler things, and work with the simpler things. So let me do a little bit of-- again, this is off to the side, how would one think through this. If you look at c to the n, remember we have this inequality from infinite time ago that as long as x is bigger than or equal to minus 1, then I get 1 plus x to the n is bigger than 1 plus n times x for all natural numbers n. So that means c to the n, which is equal to 1 plus c minus 1 to the n-- so this is my x-- is bigger than or equal to 1 plus n times c minus 1, which is bigger than or equal to n times c minus 1. You see? So if I want to make this big, it suffices to make this big. So that's what I'll do. So let n be a natural number such that n is bigger than B over c minus 1. Then now this we did off to the side we'll just put in the proof now. And c to the n equals 1 plus 1 minus c to the n is bigger than or equal to 1 plus n times c minus 1, which is bigger than or equal to n times c minus 1, which is bigger than B over c minus 1 times c minus 1 equals B. So now let's prove the first claim, that if c is between 0 and 1, then the limit as n goes to infinity of c to the n equals 0. So first, I want to prove the following claim-- that we, in fact, have a decreasing sequence, which is bounded below by 0. So claim for all n, 0 is less than c to the n is less than, I should say, c to the n plus 1 is less than c to the n. So the proof of this claim is a very simple induction argument, so we'll do this by induction. 
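[Editor's note: before the induction details, a numerical preview of the two statements just made about c^n. This is illustrative only; the values c = 0.9 and c = 1.1 are arbitrary choices, not from the lecture.]

```python
# For 0 < c < 1 the powers c**n decrease to 0; for c > 1 they grow without bound.
for c in (0.9, 1.1):
    print(c, [round(c ** n, 4) for n in (1, 10, 50, 100)])
```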
So we have the base case n equals 1. So we are assuming that c is between 0 and 1. And if I multiply through by c, I conclude that 0 is less than c squared is less than c. And that's the n equals 1 case. And the inductive step is essentially the same proof. Suppose 0 is less than c to the m plus 1 is less than c to the m. Then multiplying through by c, we get that 0 is less than c to the m plus 2 is less than c to the m plus 1, which is n equals m plus 1. Here, we're using this fact here-- that c is positive so it doesn't flip the inequalities, so I can multiply it through and preserve the inequalities. So this shows that this sequence is monotone decreasing and it's bounded below by 0, and in fact bounded, because c to the n and absolute value, these are all positive, is equal to c to the n. And c to the n is less than c to the n minus 1, which is less than c to the n minus 2, so on, and so on, which is less than c. So it's bounded. I guess I could have built that into this inequality and proved that as well, but that's OK. So it has a limit. So by the previous theorem, has a limit, and I'll call it L. And now what I want to show is that L is 0. And how we'll do that is one of these analysis tricks, where-- not really tricks-- but rather than show with L is directly equal to 0, we'll show that the absolute value of L is less than epsilon for every epsilon positive, and therefore it has to be 0 because it's just a fixed number. So let epsilon be positive. Again, we're going to show that the absolute value of L, which is just a fixed number, is less than epsilon. And therefore, capital L has to be 0. Then there exists, since this sequence converges, an M, a natural number, such that for all n bigger than or equal to capital M, c to the n minus L is less than 1 minus c times epsilon over 2. Now, maybe you're wondering why didn't I just use epsilon here? Well, in the end, it's just going to come out to this being less than epsilon in absolute value. Otherwise, I would have come out with less than epsilon times some number, and I did away with that number by choosing a different number here. So now we compute that if I look at 1 minus c times the absolute value of L, this is equal to L minus c times L. And this is less than or equal to L minus c to the capital M plus-- let's put a plus 1, plus c to the M plus 1 minus c to the L. And by the triangle inequality, this is less than or equal to L minus c to the M plus 1 plus-- now c to the M plus 1 minus c to the L, so I can pull out a c which is positive, so I can get L. Now, M plus 1 is bigger than M. So it satisfies this inequality, and so does this one. And therefore, this is less than epsilon over 2 times 1 minus c plus c times epsilon over 2 times 1 minus c. And C is less than 1, so this whole thing is less than epsilon over 2 times 1 minus c plus another epsilon over 2. So I get epsilon over 2 times 1 minus c. And thus, the absolute value of L is less than epsilon. And since epsilon was arbitrary, that implies the absolute value of L is equal to 0, i.e., L is equal to 0. So that's a very concrete application of some of these theorems we've been proving, which is really-- I think the only real reason one proves theorems is you typically have a concrete example of something in mind that you want to study. But in order to study it, it often requires some general machinery, i.e., theorems. So now we'll talk a little bit about sequences obtained from other sequences. So these are called subsequences, or sequences obtained from a single sequence. 
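[Editor's note: before the precise definition of a subsequence, the estimate that closed the geometric-sequence argument above deserves one clean display; this is a restatement of the board work, where both |L − c^{M+1}| and |c^M − L| are less than (1 − c)ε/2.]

\[
(1-c)\,|L| \;=\; |L - cL| \;\le\; |L - c^{\,M+1}| \;+\; c\,|c^{\,M} - L|
\;<\; (1-c)\frac{\varepsilon}{2} \;+\; c\,(1-c)\frac{\varepsilon}{2} \;<\; (1-c)\,\varepsilon ,
\]

so |L| < ε for every ε > 0, and hence L = 0.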
So what do I mean by this? Let me give you the precise definition. So we started off with a sequence and an increasing sequence of integers, n sub k. So let me just say what this means rather than write out the word and then say what it means. So this is the sequence of natural numbers, which are strictly increasing. n sub 1 is less than n sub 2 is less than n sub 3, and so on. The new sequence, x sub n sub k-- so now the index is not n, but the index is k-- is called a subsequence of the original sequence x sub n. So how should you view subsequences of a sequence? You should think that I line up all the entries of x, of my original sequence x sub n, and then I just start picking entries out of the sequence. But every time I make a choice, I have to move to the right and make another choice. That's what I was just saying there, expresses this condition that this sequence of natural numbers is increasing. So I pick an entry in the sequence. That's going to be my first guy, the first element of my new subsequence. And then I move to the right, and maybe I pick the next one, maybe I pick one three down. I pick that one. And then I move to the right of that one and pick a new one. And then I move to the right of that one and pick another one. So if you want to generate a subsequence from an original sequence, how do you think about this? Again, line them all up, start picking entries, but every time you pick an entry, you have to move to the right in order to pick your next entry. So let me give you some examples and non-examples. So for example, if our original sequence is 1, 2, 3, 4, 5, 6, and so on, what would be examples of some subsequences? The odd numbers 1, 3, 5, 7, 9 and so on. You see how I'm taking the original sequence, so this is x sub n. I'm just picking entries from the original sequence, but every time I pick an entry, I move to the right and pick a new one. So I picked 1, now I get to choose anything to the right of that. I pick 3, now I can choose anything to the right of that. So in the language of this definition, the increasing sequence of integers is just the odd integers, so 2k minus 1. So here, x sub n is equal to just n. I could pick another subsequence would be the sequence of even numbers-- 2, 4, 6, 8, 10. This is a subsequence of this original sequence. Here, the increasing sequence of integers is 2k. I could pick the subsequence could be the sequence of prime numbers-- 2, 3, 5, 7, 11, 13. And the increasing-- so I don't have a general formula for that. Good luck finding one, but we'll just write this as k-th prime number. Now, what would not be an example? So these are examples of subsequences. Not examples of subsequences would be, for example, the sequence 1, 1, 1, 1, 1. This is not a subsequence of this original sequence. Remember, I have a sequence of natural numbers which is increasing. And x sub n sub k means that this new sequence is obtained from the old one by picking entries-- by picking an entry, moving to the right, picking the next entry, moving to the right, and picking the next entry. Here, in order to get this sequence from this one, I just stay on the first entry and keep picking it. But that's not how a subsequence is defined. So here, if you like, this would mean n sub k is equal to 1. And this is not a strictly increasing sequence of natural numbers. Or 1, 1 3, 3, 5, 5, and so on-- that's also not. So these are not subsequences of the original sequence given here. So I hope that's clear. 
Now, I want to emphasize this-- I'm not saying that the number you pick each time has to be different from the number you picked before, meaning the value. You couldn't have 1, 1, 1, 1 as a subsequence of this guy because 1 is only in the first entry. You can't keep picking one entry. You have to pick an entry and move to the next entry or the one after that and pick that entry to obtain your sequence. But that doesn't necessarily mean that those actual numbers in those entries have to be different. So for example, if I look at the sequence minus 1, 1, 1, minus 1, 1, one meaning x sub n is equal to minus 1 to the n-th then a perfectly good subsequence is given by minus 1, minus 1, minus 1, minus 1, where it looks like I'm just picking one of the entries, but really I'm not. Here, the increasing sequence-- so what I'm doing here to get this new sequence, I picked the first entry, then I picked the third, then I picked the fifth, then I picked the seventh. So n sub k equals 2k minus 1. So all the actual numbers appearing in the sequence are the same, but I'm actually picking different entries from the original sequence, where they appear. So this is also fine. So is 1, 1, 1, 1, 1. This is n k equals 2 times k. So these are all good subsequences. So these were subsequences of the original sequence. These are subsequences of this sequence. So from a given sequence, we can obtain new sequences from this original sequence in this way. So a natural question is to ask, how does going from a sequence to a subsequence behave with respect to limits? If the original sequence converges to a limit, does a subsequence or does every subsequence converge to that limit as well? And this shouldn't come as a surprise that it does. Because all of the elements of the sequence are getting close, all the entries of the sequence are getting close to a given number. And a subsequence is just, if you like, picking certain ones along the way. So certainly if I'm in a subsequence, then as long as I'm far enough out in the subsequence, I'll be close to that limit as well. So if you didn't get all that rambling, I'll go ahead and state the theorem and prove it. x n is a sequence which converges to x. And x sub n sub k is a subsequence of x sub n. Then the subsequence converges to x, as well. Limit as k goes to infinity of x sub n sub k equals x. So let me just start off by making a simple observation based on the fact that this increasing sequence-- that the n sub k's are an increasing sequence of natural numbers. Since n sub 1 is bigger than or equal to 1 is less than n sub 2 is less than n sub 3 and so on, this implies that for all k, a natural number, n sub k is bigger than or equal to k. So this is not so hard to believe because n sub 1 has to be at least bigger than 1. And n sub 2 has to be at least bigger than n sub 1, so it's either n sub 1 plus 1 or bigger. And therefore, by a simple induction argument, which I'll leave to you, you can prove that for all k, a natural number, n sub k is bigger than or equal to k, induct on k. So we want to prove that this sequence converges to x. And we're going to do that using the only means we have. So this is just some arbitrary subsequence of x sub n. All we have is the definition to use. So we're going to now prove this. So we have to show for all epsilon positive, blah, blah, blah, so the first thing we have to do is let epsilon be positive. Now we'll use the fact that x sub n converges to x. So x sub n converges to x. 
There exists a natural number M0 such that for all n bigger than or equal to M0, x sub n minus x absolute value is less than epsilon. This M sub 0 we'll choose for our subsequence. Choose M to equal M sub 0, so now we need to show that this capital M works for our subsequence, meaning if k is bigger than or equal to capital M, then x sub n sub k minus x is an absolute value less than epsilon. But this just follows from this fact. And if k is bigger than or equal to M, this implies, by this inequality, n sub k is bigger than or equal to M, which remember, is equal to M sub 0. So n sub k is some integer, so natural number bigger than or equal to M sub 0. So that implies by this inequality that x sub n sub k minus x in absolute value is less than epsilon.
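In compact form, the whole argument is: since $n_1 < n_2 < n_3 < \cdots$ are natural numbers, induction gives $n_k \ge k$ for all $k$; so given $\varepsilon > 0$, choose $M$ with $|x_n - x| < \varepsilon$ for all $n \ge M$, and then
\[
k \ge M \;\Longrightarrow\; n_k \ge k \ge M \;\Longrightarrow\; |x_{n_k} - x| < \varepsilon,
\]
which is exactly the statement that the subsequence converges to $x$.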
MIT 18.100A Real Analysis, Fall 2020. Lecture 5: The Archimedean Property, Density of the Rationals, and Absolute Value
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: Last time, so we're talking about a set of real numbers. And last time, I stated the following theorem about the existence and properties of R that make it special. So the theorem is, there exists a unique ordered field with the least upper bound property containing the rational numbers. So remember, a field was a set that had operations plus and multiplication. An ordered field also this set has an order on it. And this order interacts with the operations of addition and multiplication in natural ways. And the least upper bound property means every non-empty subset which is bounded above has a supremum in the set. So least upper bound property, meaning the supremum belongs in R, not necessarily in the non-empty set, which is bounded above. And we saw last time that q, the rational numbers, does not have this property. By looking at that set, q positive rational q squared less than 2. That set was bounded above but didn't have a supremum in rational numbers, because the square root of 2 essentially is not a rational number. And the square root of 2 would be the supremum. So just based on this theorem which characterizes and constructs R, the set of real numbers, we're now going to prove facts about real numbers and soon turn to limits. So as I said in the beginning, limits is or are the central object of study in analysis. That's what analysis is, the study of limits. The real analysis part, or the real, and that is that we're doing this within the setting of the real numbers. So now, let me just point out something kind of simple about the real numbers. So I mean, extremely simple fact about the real numbers is it's not discrete like the integers are. So for the integers, if I take one integer and another, it's not necessarily the case that there's an integer strictly in between them. There's not an integer between 0 and 1. And but R does satisfy this property, essentially because it's a field. So and it's an ordered field. So what's the simple fact, if x, y are real numbers, and x is less than y, then there exists a real number r, a little r, so that x is less than r is less than y. And I can give you r explicitly. r is equal to x plus y over 2. All right, that's not so surprising. Now, this statement here is also true if I replace r with rational numbers. Namely if I had two rational numbers, 1 less than the other, then there exists a rational number in between that's between x and y. I just again, define r in this way. Now, a natural question is the following, is that if x and y are in R, and x is less than y, then does there-- I'll put a question mark over that-- does there exist an R in q such that x is less than R is less than y? I can't necessarily define R by this formula here and guaranteed that R will be rational. So for example, if x equals root 2, and let's say y equals 2 root 2, then x plus y over 2 equals 3/2 root 2, which is not a rational number. Because if it were a rational number, then I can multiply 2 through by 2/3 and say square root of 2 is a rational number. So all of that to say, that simply by doing this trick of just taking the average does not necessarily mean that if I take two real numbers, 1 less than the other, then there exists a rational in between them. That's not so clear to see. But this is one of the most-- this is one of the basic facts about R, is that the answer to this question is, yes. And that in some sense the rational numbers are dense in the real numbers. 
For any to real numbers, I can find a rational in between them. It's the first main property of the real numbers we'll prove. And it'll be a consequence of a second property we'll prove, which is called the Archimedean property. So this theorem has two parts. Its first is called Archimedean property of R. And the statement of this is, if x and y are in R, and x is positive, then there exists a natural number N, such that nx is bigger than y. And then the second part of this theorem is the statement that the rationales are dense. That the answer to this question is, yes. So this usually goes by the name of the density of the rationales. So it states that if x and y and in R, and x is less than y, then there exists a rational number little r, such that x is less than r is less than y. So let's prove the first theorem, the first part of this theorem, the Archimedean property. So what do we have to use to prove this theorem? Just what we know about the real numbers, the fact that it's an ordered field with the least upper bound property. And you'll see how this least upper bound property plays a major role in all of these elementary things we prove about R. So let's restate our hypothesis. So suppose x and y are in r. And x is less than-- and x is bigger than 0. So we wish to show-- so just restating this inequality here, we wish to show-- the word show, you should read as synonymous with prove-- that there exists a natural number N, such that n is bigger than y over x. That's just restating that. So we're going to prove this by contradiction. So the proof will go by contradiction. So this is, again, this is our hypothesis, which we will assume throughout the proof. The thing we're trying to prove is this statement here, the second sentence. So this is the thing that we will negate. We will not negate the hypothesis, we're negating what we want to prove in the end. And then arriving at a false statement. Therefore showing our conclusion is true. So we assume that the second statement is false. Again, we're assuming the first statement. So suppose not, i.e. for all N a natural number, n is less than or equal to y over x. That's the negation of the fact of there exists an integer that's bigger than y over x. So then that means then the natural numbers as a subset of the real numbers is bounded above. Now, the set of natural numbers is a non-empty subset of R, which is bounded above. Therefore, it has a supremum. So by the least upper bound property of R, N has a supremum, call it a in R. So since a is the supremum of N, anything smaller than a cannot be an upper bound for the natural numbers. Because remember, a is supposed to be the least upper bound. Anything smaller than that cannot be an upper bound for the natural numbers. Is not an upper bound for N. But what does that mean? So if a minus 1 is not an upper bound for N, that means there must exist some integer in N which is strictly bigger than a minus 1. There exists a natural number, I'm going to call it m, such that a minus 1 is less than m. But then, this implies that a is less than m plus 1, which implies that a is not an upper bound for the natural numbers. Because m plus 1-- m is an integer, a natural number. So m plus 1's a natural number. And I've just found a natural number bigger than a. That means a can't be an upper bound for the natural numbers. But and therefore a does not equal the sup of N. And this is a contradiction. Because A is defined as a sup of N. 
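(A concrete remark before the recap: for explicit numbers the witness is easy to write down, for instance the integer just above $y/x$; the proof above is still needed because, axiomatically, all we have is the least upper bound property, not a ready-made floor function. A purely numerical illustration, with hypothetical values:)

```python
import math

def archimedean_witness(x, y):
    """Return a natural number n with n * x > y, assuming x > 0."""
    return max(1, math.floor(y / x) + 1)

n = archimedean_witness(0.003, 7.0)
print(n, n * 0.003 > 7.0)  # prints an n for which n * x exceeds y
```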
So from the assumption, just to recap, just from the assumption that for all natural numbers N, N is less than or equal to y over x. We concluded that it has a supremum that's not its supremum. So that's a false statement. And therefore, our initial assumption that for all natural numbers N, N is less than or equal to y over x, that must be false. And therefore, there exist an N, which is bigger than y over x, which is what we wanted to prove. So for the proof of the second theorem, the density of rational numbers, we'll do-- so we have three cases to consider. So both x and y are real numbers. x is less than y. There are three cases, call it A, x is less than 0 is less than y. B, 0 is bigger than or equal to x is less than y. And C, x is less than y is less than or equal to 0. So we want to find a rational number between x and y. So for this case, this is pretty simple. We just take R to be 0. So I'm not going to say anything about case A. I'm just going to move on to case B. So case B. So suppose x is bigger than or equal to 0 and less than y. Then by AP I'll refer to-- So that'll be shorthand notation for the Archimedean property. So part one, which we've already proven is true. By the Archimedean property, there exists a natural number N, such that N times y minus x is bigger than 1. Now, we're going to use Archimedean property again. There exists an integer l and a natural number l, such that l is bigger than n times x. Thus, the set of all integer-- of natural numbers k, such that k is bigger than nx is non-empty. So this is a subset of the natural numbers. So it has a least element. By the well ordering property of the natural numbers, S has a least element m. Now what does this mean? What does m have to satisfy? What we're going to show is that m over n is in fact, a rational number that is between x and y. So the goal is, I guess-- so let me just make a comment here. So once we've made-- once we've found this natural number, little n, so that n times y minus x is bigger than 1, what does this mean? This means n times y is bigger than nx plus 1. So our goal is then to find-- so this stuff in brackets, you should not consider as part of the proof. This is me trying to explain to you where from here we would like to go to conclude the proof that there exists a rational number between y and x. So we have this inequality, the Archimedean property tells us there's a natural number that satisfies this. So what we would like to do is find another natural number. And I'm foreshadowing what's to come-- a natural number m-- well, let's not-- I don't want you to give you any false hopes. I mean, they're not false. It'll happen in a minute. But let's call this j. So that two things happen. n times x is less than j. And j is less than or equal to nx plus 1. So if we're able to find such an integer j satisfying this, these two inequalities, then from this inequality we would get that ny is bigger than-- or this is bigger than nx plus 1, which is bigger than or equal to j, which is bigger than nx. I.e. let me just rewrite this. nx is less than j, is less than n times y. Or in other words, x is less than j over m is less than y. And here's our rational number which we would choose. So that's the game plan, which maybe I should have said right after I came up with this integer little n. You may be asking why I come up with this integer little n. Well, we have to start somewhere. And the Archimedean property gives us this. 
And this somehow gives us a scale to at least a scale 1 over n to kind of work on. But anyway, so going back to the proof. So we have by the well ordering property of the natural numbers, this set S, which is a set of all natural numbers so that k is bigger than nx has a least element little m. Now m, so S has a least element m, means m is in S. So since m is in S, this means simply by the definition of what S is, nx is less than m. That's very good. That's one of the properties we wanted-- I wrote it as j, but we wanted an integer to satisfy. Since m is the least element of S, m minus 1 is not in S. So that implies that m minus 1 not in S means m minus 1 is less than or equal to n times x, i.e. n is less than or equal to n times x plus 1. Now, we'll combine these two inequalities along with the first one involving n times y minus x. So I'm just basically rewriting what I told you our goal was here in this bracket. Then, n times x is less than m. And this is less than or equal to n times x plus 1, which by our definition of n in the first inequality we have up there involving y is less than n times y. So we have nx is less than m, is less than y. So that means x is less than m over n is this less than y. So r equals m over n, which is a proportion of natural numbers, is our choice. That's an awkward way to finish a sentence. But my mind went blank. Anyways, so that handles the case B, that x is bigger than or equal to 0. So one small thing that I-- just a very minor comment. If you've got all this, and you understand it, that's fine. But you may be asking since I never actually said it out loud, where did I use the fact that x is bigger than or equal to 0? So where I used that-- so let's just assume x is bigger than 0. So what where did I use is really right in this place here. So I don't want to spend too much time on that. But by being able to claim that m minus 1 is not in S. So what about case C? So case C, we'll just reduce to case B. Suppose x is less than y, which is less than or equal to 0. Then 0 is bigger than or equal to minus y, which is bigger than minus x. So by case B, there exists a rational number, R tilde, such that minus y is less than R tilde is less than minus x. Then this implies that x is less than minus R tilde is less than y. So R equal to minus R tilde does the job. So that proves the theorem. So we'll use this in just a minute to prove a simple statement about sups and infs. But before I use it, let me go on. And what I'd like to do is state a different way to verify that a number is a sup of a set or the inf of a set. So I'm just going to state it for sups. And there's an analogous statement for infs. So the theorem is this. And this will actually be on the assignment. So assume S is a subset of R is non-empty and bounded above. So S has a supremum. So I'm going to tell you what the supremum satisfies. And it's an if and only if statement. Then some number x equals a sup of S if and only if-- so either I'll do a double arrow, or I'll write if and only if-- two conditions are satisfied. The first is that x is an upper bound for S. And the second, so at one point, or in the very near future, we're going to be seeing epsilons and deltas. So I'll state it this way. So you should start seeing them now. For every epsilon positive, there exists an element y in S such that x minus epsilon is less than y is less than or equal to x. So why is this-- why should this be the case? 
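(Before answering that with the picture in the next paragraph, one small aside: the density proof above is completely constructive, so the same steps can be carried out numerically. A sketch, with the function name and the sample inputs chosen here for illustration; it mirrors case B, so it assumes $0 \le x < y$, and it uses floating point only to find the two integers:)

```python
import math
from fractions import Fraction

def rational_between(x, y):
    """Return a rational m/n with x < m/n < y, assuming 0 <= x < y,
    following the construction in case B above."""
    n = math.floor(1 / (y - x)) + 1   # Archimedean property: n * (y - x) > 1
    m = math.floor(n * x) + 1         # least natural number m with m > n * x
    return Fraction(m, n)             # then n*x < m <= n*x + 1 < n*y, so x < m/n < y

print(rational_between(math.sqrt(2), 2 * math.sqrt(2)))  # a rational strictly in between
```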
Well, if S is all over here to the left of x, then and if I take something smaller, so I'm actually kind of giving you a picture proof of one direction. And so, if I take something smaller, x minus epsilon, then this cannot be an upper bound for S. So therefore, there must exist some y in S bigger than x minus epsilon. And it's less than or equal to x, because x is an upper bound for S. So in fact, this picture is a proof of this direction, namely if x is the supremum of S, then these two conditions are satisfied. And then, on the assignment I'll have you prove the other direction, basically, that these two conditions imply that a real number is the supremum of S. So let's use this theorem and Archimedean property to prove a simple statement about the sup of a simple set. So I should-- so let me also write here-- that I'm not going to state it as a theorem. But I'll just-- this is a remark meaning I'm being a little bit loose with what I'm writing down, but it's true-- x equals inf of S. So for S, a non-empty subset which is bounded below, there's an analogous characterization as x is a lower bound for s. And for all epsilon positive, there exists a y in S, so that x is less than or equal to y is less than x plus epsilon. So this is the analogous statement for the inf, for something to be the inf. So this is, I guess you could call it a theorem. It's not a very spectacular theorem. But if I look at the set 1 minus 1 over N, in a natural number, and I take its sup, this is equal to 1. So this shouldn't surprise you too much. Because what is this set? This is the set 0, 1/2, 2/3, 3/4 4/5, and so on. So you see 1 is always bigger than or equal to everything in here. So it's certainly an upper bound. And everything is kind of progressively getting closer to 1. So it should satisfy the second property. Namely, if I go a little bit to the left of 1, then I'll be able to find something less than or equal to 1 and bigger than that thing to the left of 1. And that'll be how we prove this theorem. So first off, since 1 minus 1 over n is less than 1, for all natural members N, this implies 1 is an upper bound for this set. Now we're going to verify that one satisfies that second property of the theorem, that for every epsilon positive, I can find a natural number little n, so that 1 minus 1 over n is bigger than 1 minus epsilon. And we'll use the Archimedean property for that. So if you're ever supposed to show something for every epsilon positive, then every proof should start off with epsilon be positive. In fact, I'll give you a couple of points on the exam if there's epsilon delta m's whatever proofs, which means you should prove something for all epsilon-- I'll give you a couple of points on the exam if you just at least state let epsilon be positive. So let epsilon be positive. So then, there exists-- so by the Archimedean property, there exist a natural number N, such that 1 over epsilon is less than N. This is taking, for example, in the statement of the Archimedean property, if you like, it's taking x to be 1 and y to be 1 over epsilon. Then 1 minus 1 over n-- so let's do it this way. Then 1 minus epsilon is-- so since epsilon is less than-- 1 over epsilon is less than n, this is equivalent to saying 1 over n is less than epsilon. And therefore, minus epsilon is less than minus 1 over n. So then, 1 minus epsilon is less than 1 minus 1 over n, which is as we have up here, is less than 1. Thus, there exists a natural number N, so that-- so that went kind of quick. 
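(Written out, the step that went kind of quick is: given $\varepsilon > 0$, the Archimedean property gives a natural number $n$ with $n > 1/\varepsilon$, i.e. $1/n < \varepsilon$, and then
\[
1 - \varepsilon < 1 - \tfrac{1}{n} < 1 .
\]
Since $1 - 1/n$ is an element of the set, this is exactly condition 2 of the characterization, and together with the fact that $1$ is an upper bound it shows the supremum is $1$.)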
So what I did was, I found a natural number N, which is bigger than 1 over epsilon. And from now on, typically, I won't state it like this, I'll state it more like this. Which this follows from this. So when I make a statement like, choose a natural number so that 1 over n is less than epsilon, that follows from the Archimedean property just by taking 1 over this inequality. And so. From there we concluded that 1 minus epsilon is less than 1 minus 1 over n, which is still less than 1. And therefore, we have proven the second-- number two property of 1. And thus by the theorem, 1 is equal to the supremum of this set. So let's get a little more familiarity with using sups and infs, and in particular using that theorem, which I stated right there, about this characterization of the sup as being an upper bound and satisfying the second property that for every epsilon positive you can find something in the set which is bigger than x minus epsilon. So to do that, let's look at a couple of different sets-- types of subsets of real numbers. So for x a real number and a subset, we define two sets, x plus a-- this is just a shift of a by x. So this is a set of all elements of the form x plus little a, where little a is a capital A. x times a, this is a set of all elements of the form x times a, where a is in capital A. So from now on basically towards the end, until almost the end of the course, we're working as in subsets of the real numbers. So these things are meaningful, plus and multiplication. So we have the following theorem. So let's-- let me state the assumptions. And then we'll probably guess the conclusions. If x is a real number, and A is bounded above, then the conclusion is x plus a is bounded above. And the supremum of this set x plus A is equal to x plus the supremum of A. We should be able to guess this. Because let's assume A is this interval here. That would make the sup A-- this, let's say right endpoint of this interval, then when I shift everything by x, then this point sup A shifts to x plus sup A, which should be the supremum of x plus capital A. So this is not too surprising. There are surprising theorems in analysis. We already saw of one, at least, I thought it was surprising. Hopefully you found it surprising, about the cardinality of the power set compared to the cardinality of the original set. You come up with this strange-looking-- or the proof of that, remember, you came up with this strange-looking set which you had to-- which in the end basically referenced itself in its definition, which led to the conclusion we wanted. But this one is not so scary. This one you should be able-- this theorem you should be able to believe and maybe even prove without me telling you how to. If not, that's fine too. The other statement is that if x is positive and A is bounded above, then x plus A, or x times A, sorry, is bounded above. And sup of x times A equals x times the sup of A. And again, so if I were to draw a picture, and let's say A is kind of symmetric with respect to 0. So there's sup of A, and I multiply it by x. Then this either fattens the interval or makes it smaller. So let's say I made it smaller. Now to x times A, then x times sup of A would be the supremum of this set. Why x positive? Well, the reason is because if I multiply by x negative, this not only shrinks it but it flips things. So in fact, there's a statement in the book about if x is negative, then here you would need to assume A is bounded below. 
So the corresponding statement for x negative is, if x is less than 0 and A is bounded below, then x times A is bounded above. Because multiplying by something negative flips inequalities. And the sup of x times A is equal to minus x times the inf of A. Or there's no minus. This should be sup of x times A is equal to x times the inf of A. All right. So let's prove these two theorems using this previous theorem that I stated without proof, but you'll prove in the assignment about how to characterize sups of sets as upper bounds and satisfying this epsilon property. So let me just restate our assumptions. So suppose x is in R, and A is bounded above. Then the number sup A exists in R, because R has the least upper bound property. A is a non-empty subset. So I should have said that, so all of this is for a non-empty subset of A, so that I'm talking about something. So I have a non-empty subset, which is bounded above by the least upper bound property. The supremum of A exists in R. So now I'm going to show that x times the sup of A satisfies that it's an upper bound for x times A. Then for all little a in A, since sup is an upper bound for capital A, little a is less than or equal to sup A, which implies that for all a in capital A, if I multiply through by x, I mean, if I add x to both sides, x plus little a is less than or equal to x plus sup of A. Which implies that x plus sup of A is an upper bound for the set x plus A. So that's the first property we wanted to prove of x plus sup A. Now, we'll prove the second property, this epsilon property. Let epsilon be positive. Then by the previous theorem, there exist a y in A such that sup minus A is less than epsilon is less than y is less than or equal to sup A. This is just from the fact that the supremum of A satisfies those two conditions up there. Now I just add x through all of these inequalities, which implies there exists a y in A, such that x plus sup A minus epsilon is less than x plus y, which is less than or equal to x plus sup of A. And that proves the second property. Because what have I done? I found that for every epsilon positive, an element of the set x plus A-- so I'll even-- I'll restate this-- which implies there exists an element z in the set x plus capital A, namely x plus y. So that x plus sup A minus epsilon is less than z is less than or equal to x plus sup A. And that's the second property which we wanted to prove, this epsilon property. Now we've proven it for x plus sup A, for the set x plus A. Thus, sup of x plus A equals x plus sup A. So we proved that x plus sup A is the supremum of x plus A by showing it was an upper bound and it satisfied this epsilon property. I mean, it's essentially the same proof for x times capital A. You just replace pluses with multiplication. How much time do we have? So in fact, I'm running a little short on time. I guess a little slow-moving today. So I will not go through actually write the proof of the second part, simply because the same logic works. Only now, I replace everything by multiplication by x instead of addition by x. Well, almost. OK, why not. Let's go through the proof real quick. I just won't spend as much time explaining stuff. So now we want to do-- we want to show x times sup A is a supremum of x times A. So suppose x is positive and A is bounded above. Then sup A, again, exists in R, because A is bounded above. 
And because sup A is an upper bound for capital A, then for all a in capital A, a is less than or equal to sup A, which implies that for all a in capital A, x times little a is less than or equal to x times sup A. Which implies that x times sup A is an upper bound for x times A. So we've proven that x times sup A is an upper bound for the set x times A. We now want to verify this epsilon property for x times sup A with respect to the set x times A. So let epsilon be positive. And I'm going to put it in brackets, again, what we want to do, what we want to define. We want to show there exists z in x times A, so that x times sup A minus epsilon is less than z. z is always less than or equal to x times sup A, since we've proven x time sup A is an upper bound for this set. So I'm not going to keep writing the second inequality. So this is what we want to prove. We haven't proven it yet. So let epsilon be positive. Then just as we did in the plus case, then there exists an element y in A such that sup A minus-- now here, we're going to choose y not exactly for epsilon here. Remember, the statement for number two holds for sup A for every epsilon. In particular, I can choose anything I want here and find a y in between sup A minus whatever I want here, which is positive in sup A. So instead of putting epsilon here like I did before, I'm going to put epsilon over x. Which I can do, because x is positive, which means epsilon over x is some positive number. Now, why did I choose epsilon over x? Because magic happens. Then that means there exists a y in A, such that if I multiply through by x I get x times sup. A minus epsilon is less than x times y, is less than-- so I'm going to stop writing this inequality. Because this is always true, because x times sup A is bigger than or equal to x times y. Which implies there exists a z in x times A. Namely, z equals x times y, where y is from here, such that x times sup A minus epsilon is less than z. And therefore, x times sup A satisfies the second epsilon property with respect to S given by x times A. And that's the end. So you see that I wanted this in the end. So I chose y to give me this slightly different thing for epsilon over x. Because in the end, I would multiply through by x. And I wanted this. When we do proofs for limits, you'll see we're always trying to make something less than epsilon. So quite often, we'll have to choose something to be less than epsilon over 5, or epsilon over some number for everything to work out in the end, just like we did here. So that's kind of a preview of things to come. And one last simple theorem about sups and infs. This really doesn't use that theorem, but just basically what the definition of sups and infs are. Namely that if A and B are subsets of R, and for all, let's say, with A bounded above, B bounded below, and for all x, y, for all x in A, and for all y in B, x is less than or equal to y. Then sup of A is less than or equal to the inf of B. So picture-wise, here's A. Everything here sits below everything in B. So B has to be over here. And therefore, the sup of A, which is there, has to be less than or equal to the INF of B, which is there. That's the picture that goes along with this. But how do we actually prove this? I mean, we have to use the definitions. Pictures do not suffice. Although they inform us, they don't suffice. So this is quite simple. So I'm not going to rewrite the hypotheses now, because it'd take a little bit. 
But so if for all x in-- let me-- basically what we're going to do is, we're going to take a sup and then an inf. So let y be in B. Then for all x in A, x is less than or equal to y, which implies y is an upper bound for A. And therefore, the supremum of A, which is the least upper bound, has to be smaller than or equal to y. Thus, we've proven for all y in B, sup A is less than or equal to y. Which implies that sup A is a lower bound for B. And by that same logic a minute ago, so remember the infemum of B is the greatest lower bound. So if I take any lower bound of B, it has to be less than or equal to the inf of B. So which implies sup A is less than or equal to inf B. So we're starting to close out here our discussion of-- or at least our discussion of the elementary properties of the real numbers. So let me say just a couple of things about the absolute value. And let me recall how this is defined. At least, this is how it should have been defined from your calculus class. If x is in R, we define the absolute value of x. This is either x if x is bigger than or equal to 0, or minus x if x is less than or equal to 0. Note that these two things agree when x is 0. So that I'm not defining the absolute value of x to be two different things when x equals 0. So and what is this really meant to be? It's supposed to be-- it's supposed to measure the distance from-- or it's supposed to represent-- I shouldn't say, is the distance, because I haven't told you what a distance means. But what is it supposed to represent? It's supposed to represent the distance from x to 0, which is why it's always non-negative. I mean, if I have x was over here. And this distance is meant to be absolute value of x [INAUDIBLE] that y. So in fact, that's the first thing that [INAUDIBLE].. I'm just going to prove some very simple properties of the absolute value. I mean, again, these properties should not be surprising to you. You should know all of them. But we're trying to get familiar-- that's a tough word to say-- familiarities, or familiarity with proofs. So I'm going to do is many proofs as I can for you. So the first statement is, for all x in R, absolute value of x is bigger than 0-- bigger than or equal to 0. And the absolute value equals 0 if and only if x equals 0. The second property is that for all x in R, the absolute value of x equals the absolute value of minus x. For all xy in R, if I take the absolute value of the product, this is equal to the product of the absolute values. For all x in R, the absolute value of x squared equals the absolute value of x squared. The fifth property is, if x and y is in R, then x is less than or equal to y if and only if minus y is less than or equal to x is less than or equal to y. And the sixth property that we'll prove is that for all x in R, x is less than or equal to its absolute value. So again, these are not too surprising. But we'll go through the proofs, because that's the point of this class. Later on in life you'll come across some more interesting theorems than definitely this one. So if-- so we're going to prove this first statement, that if x is in R, the absolute value of x is bigger than or equal to 0. So if x is bigger than or equal to 0, then the absolute value of x is by definition equal to x, which is bigger than or equal to 0. If x is less than or equal to 0, then the absolute-- then minus x is bigger than or equal to 0. And by the definition of the absolute value, the absolute value of x is equal to minus x, which is bigger than or equal to 0. 
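(For reference while the remaining proofs are carried out, here are the six properties in symbols; note that property 4, as spoken above, should read $|x^2| = |x|^2$:
1. $|x| \ge 0$ for all $x$, and $|x| = 0$ if and only if $x = 0$;
2. $|-x| = |x|$ for all $x$;
3. $|xy| = |x|\,|y|$ for all $x, y$;
4. $|x^2| = |x|^2$, which equals $x^2$, for all $x$;
5. $|x| \le y$ if and only if $-y \le x \le y$;
6. $-|x| \le x \le |x|$ for all $x$.)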
So that's proven that the absolute value of x is always bigger than or equal to 0. So now, let's prove this statement. Absolute value of x equals 0 if and only if x equals 0. So when you see an if and only if, or two arrows here, that means there's two statements you need to prove, that this implies this, and this implies this. So let's start with this direction. Usually in if and only if, there is an easy direction. If x equals 0, then this is clear simply from the definition. Then absolute value of x equals x equals 0. Let's go with the other direction. So suppose the absolute value of x equals 0. And if x is bigger than or equal to 0, we get that x is equal to its absolute value of x is equal to 0. If x is less than or equal to 0, then minus x is equal to its absolute value of x, which equals to 0, or x equals 0. So assuming the absolute value of x is 0, we've proven that x equals 0 in both cases. So that proves the first property. So now, we're on to proving number two. So number two, for all x in R, the absolute value of x equals minus x. So we'll do that by, again, we have to consider two cases. x is non-negative or x is non-positive. So if x is bigger than or equal to 0, then minus x is less than or equal to 0, which implies that the absolute value of minus x equals minus minus x, which equals x, which equals the absolute value of x since we're in the case that x is non-negative. If x is less than or equal to 0, n minus x is bigger than or equal to 0, which implies that the absolute value of minus x is equal to minus x, which is, since x is less than or equal to 0, and by the definition of the absolute value, equal to absolute value of x. So that proves too. So for all xy real number, the absolute value of x times y is equal to the absolute value of x times absolute value of y. So we need to consider two cases. One of them is-- both of them are non-negative, both of them are non-positive. And both of them are-- or one of them is positive, one of them is non-negative, or one of them is non-negative, one of them is non-positive. So if x is bigger than or equal to 0, and y is bigger than or equal to 0, then x times y is bigger than or equal to 0, which implies x, the absolute value of x times y is equal to x times y. And since x is non-negative, that's equal to its absolute value. Since y is non-negative, that's equal to its absolute value. If x is bigger than or equal to 0, and y is less than or equal to 0, then minus x times y is bigger than or equal to 0, which implies that the absolute value of x times y is equal to-- so I should say, let's write it this way. This is equal to minus xy, which is equal to x times minus y. And since y is negative, minus y is equal to its absolute value. And since x is non-negative, x is equal to its absolute value. So this is equal to that. Another piece of chalk. Now the case that-- so you might be thinking about what about x negative and y non-negative, so y bigger than or equal to 0, x less than or equal to 0. It's the same proof, just exchange x and y. So I'm not going to do that case. And if x is less than or equal to 0, y is less than or equal to 0. Then this implies that minus x is bigger than or equal to 0. And minus y is bigger than or equal to 0. Which by this first case, which we've proven, now applied to minus x and minus y, I get that the absolute value of minus x times minus y, which is equal to x times y, so minus x times minus y is equal to x times y. This is equal to the absolute value of minus x times the absolute value of minus y. 
And by number two, which we've already proven, which is that the absolute value of minus the number is equal to the absolute value of the number again, we get that. So that's three. And for four, this is just a special case of three. Take y equals x in three. Now, for number five, it's an if and only if. So we need to prove two directions. Namely, we'll assume this, and then prove this, and then assume this, and then prove that. So suppose-- so this is this direction, meaning suppose-- our assumption is going to be, suppose the absolute value of x is less than or equal to y. So then, x is bigger than or equal to 0. Then this means that-- so if the absolute value of x is less than or equal to y, that automatically tells you y is non-negative. So therefore, minus y is less than or equal to 0, which is less than or equal to x, which equals the absolute value of x, which is less than or equal to y. I.e. minus y is less than or equal to x is less than or equal to y. So in the other case, that x is less than or equal to 0, I can apply basically this part which I've already proven. n minus x is bigger than or equal to 0. And the absolute value of minus x, which is equal to the absolute value of x, is less than or equal to y, which implies by this first case I've proven applied now to minus x here, that minus y is less than or equal to minus x is less than or equal to y. And multiplying through by minus 1 flips all the inequalities and also flips the sign, which basically means it leaves this inequality unchanged if I replace this by x. So multiply through by minus 1 flips the inequalities and proves what we want to do. So in either case-- so for the first case, we actually wrote down a proof, and we reduced the second case, x negative, to the first case that we've already proven. So we've proven that this inequality, the absolute value of x less than or equal to y, implies that this. So now, we need to prove the converse direction. So the converse direction. So suppose minus y is less than or equal to x is less than or equal to y. And I want to prove the absolute value of x is less than or equal to y. So if x is bigger than or equal to 0, then the absolute value of x is equal to x, which by this inequality is less than or equal to y. If x is less than or equal to 0, then minus y is less than or equal to x, implies that minus x is less than or equal to y. And because x is negative, minus x is equal to its absolute value. And therefore, we've proven the absolute value of x is less than or equal to y. So we've proven number five. And number six, so for number six, what do we do? We just take y equals the absolute value of x from five, to conclude that x is less than or equal to the absolute value of x and bigger than or equal to minus the absolute value of x. And that's the proof. Now, I'm going to prove one last theorem. So this one is actually very important, about the absolute value. It's probably-- so this inequality is one of the most important tools in all of analysis. And you get to see it right here in your first analysis class. There's two-- basically two other things we'll prove at some-- at one point, which I don't know, are the two other most important things in analysis. This is the triangle inequality. And the other two are integration by parts and change of variables. Those three things fuel the analysis machine. So this theorem, so this is the triangle inequality. 
And it states: for all x and y in R, the absolute value of x plus y is less than or equal to the absolute value of x plus the absolute value of y. Why is it called the triangle inequality? Well, although x and y are elements of the real number line, let's instead try to think of them as two vectors. So here's the vector x. And then, let's say-- so actually, I'm running out of time. So I think we'll stop here.
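Since the recording stops before the proof, here is one standard way to finish it using properties 5 and 6 above (a sketch, not necessarily the argument planned for the following lecture): by property 6, $-|x| \le x \le |x|$ and $-|y| \le y \le |y|$; adding the two chains of inequalities gives
\[
-(|x| + |y|) \le x + y \le |x| + |y|,
\]
and then property 5, applied with $|x| + |y|$ playing the role of $y$, gives $|x + y| \le |x| + |y|$.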
MIT 18.100A Real Analysis, Fall 2020. Lecture 17: Uniform Continuity and the Definition of the Derivative
CASEY RODRIGUEZ: So last time we finished by proving Bolzano's intermediate value theorem, which states that a continuous function achieves all values in between the function evaluated at the endpoint. So if I have a continuous function, neither-- I take a value y between f of a and f of b, so either f of a is less than f of b and y's in between is bigger than f of a and less than f of b, or f of a is bigger than f of b, but y is still in between them. Then there exists c in the interval a, b so that f of c equals y. So every value in between f evaluated at the endpoints is achieved. I drew the picture that went along with this a, b, f of a, f of b, and that could be-- and if we take anything in between, then there has to be some c so that f of c equals y. At least in the picture, I've drawn there's three different guys, but there's always at least one. And then the last thing we talked about at the end of last lecture was that the image of a closed and bounded interval by a continuous function is, again, a closed and bounded interval, where e corresponds to the absolute max of f, and f corresponds so that-- I shouldn't call this f since we already have f being used-- let's make it d-- corresponds to the absolute min of f. So it's what we proved last time is that if f is continuous, then there exists e, d and r, such that the range of this closed and bounded interval is, again, a closed and bounded interval. So f is very well-behaved on closed and bounded intervals. And we'll see another way in which continuous functions are well behaved on a closed and bounded interval in just a second. Now, a simple application of the Bolzano intermediate value theorem is the following-- really is an application of that theorem that I labeled the bisection method. But let's chalk that up to the Bolzano intermediate value theorem, which is the following that if f of x is any odd polynomial, then f has at least one real root. So any odd polynomial has at least one real root. So by the fundamental theorem of algebra, every polynomial has exactly n roots, n corresponding to the degree of the polynomial, but these roots can be complex valued. But for odd polynomials, there must be at least one real root, and this is a consequence of the Bolzano intermediate value theorem. But instead of proving this theorem in its full generality, let me just give you a representative example. So let's take f of x equals x to the 2021 minus x to the 2020 plus-- I don't know. What's today? I think the 25 or 24. I don't know. --1025x minus 300. So what's the point? As long as I plug in a big enough x, this will swamp everything here. So, for example, if I stick in f of 100,000, I leave it to you to check that this is positive because this will be 100,000 to the 2020 minus 100,000 to the 2021, and then plus this, which will be positive minus 300. This doesn't matter. So what really matters is this guy with the largest power will swamp all the other ones and eventually be positive if I stick in a large enough positive value. In fact, for this guy, if I stick in a negative value, then I get something very big, but now with a minus sign, which will again swamp all these earlier guys. In fact, I don't have to choose minus 100,000. If I just stick in 0, I will get minus 300, which will be negative. But you can check that if I stick in minus 100,000, this will still be negative. 
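One way to check both signs, and then actually locate a root using the bisection idea mentioned above, is a few lines of code. This is a sketch: the smaller bracket [0, 1] and the iteration count are choices made here, not taken from the lecture.

```python
def f(x):
    # the example odd polynomial from above
    return x**2021 - x**2020 + 1025*x - 300

# exact integer arithmetic confirms the sign change on [-100000, 100000]
assert f(-100_000) < 0 and f(100_000) > 0

# f also changes sign on the much smaller interval [0, 1]:
# f(0) = -300 < 0 and f(1) = 725 > 0, so bisect there
a, b = 0.0, 1.0
for _ in range(60):
    m = (a + b) / 2
    if f(m) <= 0:
        a = m   # the sign change, hence a root, lies in [m, b]
    else:
        b = m   # the sign change lies in [a, m]
print((a + b) / 2)  # roughly 0.2927, an approximate real root of f
```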
Thus, by the intermediate value theorem, there exists a c in 100,000 so that f of c equals 0 because 0 is something in between f evaluated at these two endpoints. And the idea is the same is that as long as you go far enough out and positive, then the polynomial will be positive if the coefficient out in front is positive; and if you go far enough out negative, then you'll get something that's negative. And then you apply the intermediate value theorem to be able to find a root. Now, of course, one other thing we did with the statement of the min-max theorem was, can we drop any of the assumptions on this theorem? And you can see that the example is no. So the question is, if f from a closed and bounded interval is not continuous and, let's say, f of 0 is less than 0, f of a, f of b is bigger than 0, then does there exist c, a, b, such that f of c equals 0? Let me put that question mark at the end. And it's easy to come up with an example of a function that is not continuous that satisfies these conclusions, and there does not exist a point in between so that f of c equals 0. Simplest being, I'll just draw a picture. Let's go from minus 1, 2. So then the function is f of x equals x minus 1. And this is for x not equal to 1, and let's make it a 1/2 there. So this function looks like 1/2 there. So there is no value, there is no c in between. So x is between 0 and 2. So you see f of 0 is equal to minus 1, that's negative; f of 2 is equal to 1, it's positive; but there exists no c in between, so that f of c equals 0. So what's the point I'm making? We really do need this hypothesis that the function is continuous for this theorem to be true. If you drop that hypothesis, then the theorem is no longer true. So now we're going to move on to a new concept called uniform continuity. And before I do this, I'm going to rewrite the definition of continuity. So this is an old definition, but I want to highlight a specific aspect of it. So this is me writing the definition of a continuous function again. So function f from a set s to r is continuous. So we said that means it's continuous at every point, meaning if for all c And s, it's continuous at c, which means for all epsilon positive, there exists a delta. And we saw this in examples that we did proving continuity. This usually depends on epsilon and c, the point where I'm looking at, such that for all x and s, x minus c less than delta implies that f of x minus f of c is less than epsilon. So this is, again, a bit of review. But I just want to show you that, in general, when we verify that a function is continuous on a set, we have to show it's continuous at every point in the set. And that means for every epsilon, there exist delta. But this delta usually depends on epsilon, but it can also depend on the point c, which is where we're checking continuity at. So for simple example, let's take f of x equals 1 over x on the open interval 0, 1. So I claim f of x equals 1 over x is continuous. So proof we have to show that for every c and 0, 1 for all epsilon, there exists that. So let's see b and 0, 1, but epsilon be positive. Again, since this kind of review, I'm not going to go through the process of really the thinking behind choosing this delta. I mean, there's calculations you do off to the side to try and get this delta. But let's choose delta to be the minimum of c over 2 and c squared over 2 times epsilon. So what I want to again emphasize is this delta depends on c, the point we're looking at, and epsilon. So I claim this delta works. 
Suppose x is in 0, 1, and x minus c is less than delta. Then absolute value of x, this is by the triangle inequality. Let me write it this way. c absolute value is equal to c minus x plus x, which is less than or equal to c minus x plus x, which is less than delta, which is less than or equal to the minimum of these two. So it's certainly less than or equal to c over 2 plus x. And now c and x, these are in 0, 1, so they're all positive, and therefore I get that-- Let's go to the next board. So if I subtract the c over 2 over to the other side, I get that c over 2 is less than x. So this is as long as x is in 0 and x minus c is less than delta. This should not be a shock. There's 0, 1, c, c over 2 3c over 2. And so if x is in this interval, it certainly bounded away by c over 2. Or if x is in here, then x is certainly bigger than c over 2. And now we look at f of x minus f of c. This is equal to 1 over x minus 1 over c equals c minus x over x times c, which is less than delta over x times c. And now x is bigger than c over 2, so if I take 1 over that, I get 1 over x is less than 2 over c. So this is less than or equal to 2 delta over c squared, and delta is the minimum of now c over 2 and c squared over 2 times epsilon. So this is less than or equal to 2 over c squared times c squared over 2 epsilon equals epsilon. So I just went through the proof of continuity for 1 over x, which using better machinery we could have proven. But I did this to highlight this point that's showing continuity of 1 over x on 0, 1, that in showing that, this delta that you are choosing depends on the point you're looking at. So what the definition of continuity says is for all c, for all epsilon, you can find a delta depending on epsilon and c. Now, uniform continuity removes that necessity of delta depending on c. And I'm going to make a possibly bad analogy in just a minute, but first let me write down the definition of uniform continuity. Let s be subset of r, f going from s to r. We say f is uniformly continuous on s if for all epsilon positive, there exists a delta depending only on epsilon, such that where all x, c and s, with x minus c less than delta, we get that f of x minus f of c is less than epsilon. So a function being continuous means that for all epsilon you can find delta depending on one of these points, depending on c so that if I had this, then I have this. Uniform continuity says you can find delta that works across the board. So I'm going to try and make an analogy that hopefully I don't have to cut out of this lecture. But it's a bit like this. Let's take a trip down memory lane and imagine being at a party. So you know everybody's in the party and let's say that everybody can hear, their ears work just about the same, and the goal is to try and understand each other's conversation, try and hear each other's conversations. Now, in order for me to hear this person's conversation, depending on how loud they talk, I have to be close enough to be able to hear them. And this changes from person to person in the room. Maybe I can stand a foot away from this person over here. Maybe this person over here is a low talker and I have to be almost up in their face to be able to hear them. Maybe that person just whispers and I have to be almost nose to nose to be able to hear them. That's a bit like a function being continuous. The threshold of hearing what they're saying is the epsilon and I have to change my delta depending on what point I'm at so that I can hear them. 
Now, uniform continuity, is kind of like everybody's more or less speaking around the same level, speaking at the same loudness. So I just can be within, say, one foot of everybody and be able to understand them. Go to this person if I'm within one foot I can hear them well. If I go to this person and within one foot I can hear them well, and so on. So that's kind of at least a loose analogy of the difference between continuity and uniform continuity. So this is a reasonably interesting definition. So we should see examples and negate it. First off, you should be able to convince yourself that a function that's uniformly continuous is actually continuous. I'm not going to write the proof of that down. It follows essentially straight from the definition. But let's look at a function, which is uniformly continuous. So the function f of x equals x squared on the interval 0, 1 is uniformly continuous. So what's the proof? I want you to notice I'm going to use kind of in a very essential way that we're on this closed and bounded interval. So we need to show that for every epsilon, there exists a delta so that no matter what two points-- So if you like, here's a picture that goes along with this-- so that if I look at maybe two points that are close to each other less than delta, their function's value is within epsilon. And so let's say I'm on an interval a, b. But not just this point, I could go over here to two other points that are within delta of each other. And if I look at the function evaluated at the two points, then the difference between these two guys is also less than epsilon. So I can move through the party and as long as I'm close enough within a uniform distance to somebody, I can hear them just fine. So what a nice trip down memory lane when we could be around other people. So let's show this function is uniformly continuous. Let epsilon be positive. Choose delta to be epsilon over 2. Now, you see this delta just depends on epsilon, then if x and c are in 0, 1, and x minus c is less than delta, then if I compute the difference, x squared minus c squared, I can write as x plus c times x minus c. And now I use the triangle inequality, this is less than or equal to x minus c. Now, these guys are in 0, 1, so their absolute values always bounded by 1. So this is less than or equal to 1 plus 1, and then x minus c is bounded by delta, which equals 2 times delta, and delta is chosen to be epsilon over 2. So I get epsilon. Now, again, how would you choose delta? So it looks like I just chose this delta out of nowhere. In practice, how do you choose delta? This is the computation you do. You take x squared minus c squared, split it, you have this. You can estimate with what you know above by 2 here times delta. So how do you choose delta so that 2 times delta is, say, equal to epsilon? You choose delta to be epsilon over 2. So let's negate this definition and then look at 1 over x once again and check to see if it's uniformly continuous. We'll show it's not. So the negation f from s to r is not uniformly continuous. So I saw a little bit of the new Borat movie that came out. So I almost wanted to say f is uniformly continuous. Not. But I didn't. It's not uniformly continuous. And now you negate the rest of the definition. If there exists some bad epsilon, such that no matter what delta you choose, there exists some bad x and c and s, such that they're close to each other, but the values are not bigger than or equal to. So let's revisit 1 over x. 
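(A quick numerical aside that also previews the function 1/x revisited next: for $x^2$ on $[0, 1]$ the single choice $\delta = \varepsilon/2$ from the proof above works at every point, while for $1/x$ on $(0, 1)$ there are pairs of points as close together as you like whose outputs stay far apart. A sketch, with the sample points chosen arbitrarily:)

```python
eps = 0.1
delta = eps / 2   # the uniform delta from the x**2 proof above

# x**2 on [0, 1]: any pair within delta has outputs within eps
pairs = [(0.0, 0.04), (0.5, 0.54), (0.96, 1.0)]
print(all(abs(x**2 - c**2) < eps for x, c in pairs if abs(x - c) < delta))  # True

# 1/x on (0, 1): c and c/2 get arbitrarily close as c shrinks,
# yet |1/(c/2) - 1/c| = 1/c grows without bound, so no single delta works
for c in [0.1, 0.01, 0.001]:
    x = c / 2
    print(abs(x - c), abs(1/x - 1/c))  # input gap shrinks, output gap grows
```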
This is not uniformly continuous on the open interval 0, 1. So you see that this function is continuous but it's not uniformly continuous. We just proved a minute ago that it's continuous. So let me now give you the proof that this function is not uniformly continuous. And what's the idea behind this? Here's one. This is not to scale, but there's one as well, and 1 over x shoots up to plus infinity as x approaches 0. So what uniform continuity says is that if I take any two points in the interval that are close to each other, then the outputs have to be close to each other. Continuity, remember, says if I fix a point, then as long as I'm close to that point, the outputs will be close to each other. Uniform says, no matter what two points you choose, as long as they're close together, the outputs will be close together in a controlled way. Now, if I take two points very close to each other here, very close to 0, even though the difference between the arguments is small, the difference between their values can be quite large, just because 1 over x shoots up. So let's quantify this. I have to tell you what the bad epsilon not is. Choose epsilon not equals 2. Now, any epsilon not will work just fine. Let delta be positive. So now we have to find x and c in S within distance delta of each other whose values vary by at least two. So let's choose c to be the minimum of delta and 1/2, and I'm going to choose x to be c over 2. So just ignoring this half for a minute, c is equal to delta, x is equal to delta over 2, and then 1 over x minus 1 over c is going to be 2 over delta minus 1 over delta, which is 1 over delta. And if delta is very small, that's certainly bigger than or equal to 2. But I had to throw in this 1/2 here because I chose epsilon not to be 2, and delta's arbitrary. So then we see that x minus c, this equals c over 2, which--since c is the minimum of delta and 1/2--is certainly less than or equal to delta over 2, which is less than delta. And if we compute 1 over x minus 1 over c: remember x is c over 2, so 1 over x is 2 over c, and 2 over c minus 1 over c is 1 over c. And now from c being less than or equal to 1/2, I get that 1 over c is going to be bigger than or equal to 2. And that's the end of the proof. So we see that even though these two arguments are very close to each other, the outputs are, in fact, separated by some fixed distance. At least two. But we could actually make it bigger than 3, bigger than 4, whatever you like. And I'm not going to go over this example. But in fact, I'll leave it to you, or maybe I'll put it on the assignment: f of x equals x squared is not uniformly continuous on R. So we saw that it's uniformly continuous on the closed interval 0, 1, but it's not uniformly continuous on all of R. The reason being, again, kind of because x squared is starting to get big. So I can take two things close to each other, stick them into x squared, and their outputs could be very different from each other. I didn't write out the proof of this one, so we're going to go a little bit by the seat of our pants. So we want to show this is not uniformly continuous. So I think epsilon 0 equals 1 is going to work just fine. Let delta be positive, and the idea is we want to choose x to be some number. Well, c will be some number, and x equals c plus delta over 2. So this is kind of scratch work off to the side. And let's see if we can somehow choose c.
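A small numerical illustration of the witnesses used in this argument--this is an editor's sketch, not part of the lecture; the choices follow the proof above, c equal to the minimum of delta and 1/2, and x equal to c over 2:

```python
# f(x) = 1/x is not uniformly continuous on (0, 1): for every delta > 0
# there are points x, c in (0, 1) with |x - c| < delta but |1/x - 1/c| >= 2.

def witnesses(delta):
    c = min(delta, 0.5)  # the lecture's choice of c
    x = c / 2.0          # and x = c/2, so |x - c| = c/2 < delta
    return x, c

for delta in [0.5, 0.1, 0.01, 0.001]:
    x, c = witnesses(delta)
    input_gap = abs(x - c)
    output_gap = abs(1.0 / x - 1.0 / c)
    print(f"delta={delta:<6}  |x-c|={input_gap:.4f}  |1/x - 1/c|={output_gap:.1f}")
    assert input_gap < delta and output_gap >= 2.0
```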
So first off, x minus c will be delta over 2, which is less than delta. And now let's see if we can choose c so that x squared minus c squared is bigger than or equal to 1. So c is kind of the thing that we get to play with here. I mean, this is how it actually goes if you want to figure out the proof before you write it down, the scratch work to the side. So choose c so that this-- now, let's start off with the thing that we're trying to bound from below and start computing some stuff. This is equal to x plus c times x minus c--and c will be positive, so x will be positive too. Now x plus c will be 2c plus delta over 2; x minus c will be delta over 2. So far all of these things are equalities, and we get this is equal to 2c plus delta over 2, times delta over 2, and here we're going to choose c positive. So we're trying to show it's not uniformly continuous on R. So we just need to find a value of c and a value of x so that they're within delta distance of each other, but their outputs differ by at least 1. So far, if c is just something positive and x is c plus delta over 2, then they're close. Their difference, however, is going to be, just by this computation, equal to 2c plus delta over 2, times delta over 2, which is c times delta plus delta squared over 4. Now delta squared over 4 is always non-negative, so this is certainly bigger than or equal to delta times c. And now remember we wanted this thing to be bigger than or equal to 1. Right now we have it's bigger than or equal to delta c. So let's choose c to be 1 over delta, and let's see that that works. Choose c to be 1 over delta, x to be 1 over delta plus delta over 2--so c plus delta over 2. Then these two values are close: x minus c equals delta over 2, which is less than delta. And x squared minus c squared equals x plus c times x minus c. x plus c, as we did a minute ago, is going to be 2 over delta plus delta over 2. That's x plus c. x minus c is equal to delta over 2. So the product equals 1 plus delta squared over 4, which is bigger than or equal to 1, our bad epsilon. So f of x equals x squared is not uniformly continuous on R, even though it is uniformly continuous on this closed and bounded interval. So uniform continuity is really a relationship between a function and the domain on which it's defined. So this is kind of to wrap up our theme of continuous functions and how they behave on closed and bounded intervals. And we see here that uniform continuity is a much stronger notion than continuity, as I've said. I mean, you can easily prove from the definition that if a function is uniformly continuous on S, then it's continuous on S. And we've seen that in general the reverse implication does not hold. So let me just write this up as a remark. In general, f uniformly continuous implies that f is continuous. This just follows from the definition. This is not difficult to show. But the converse does not hold, as we've just seen. We have two examples here: f of x equals 1 over x on the open interval 0, 1, which is continuous but not uniformly continuous; and then we have the function f of x equals x squared, which is not uniformly continuous on R, but in fact is uniformly continuous on this closed and bounded interval 0, 1. This example here, though, is, in fact, the rule: if I'm looking at a function on a closed and bounded interval, then, in fact, these two notions are equivalent. So that's the following theorem. Let f be a function from a, b to R. So a closed and bounded interval now, not just an arbitrary set.
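Written out compactly, the witnesses for x squared on R read as follows--a cleaned-up version of the computation above:

```latex
\varepsilon_{0} = 1,\quad c = \tfrac{1}{\delta},\quad x = \tfrac{1}{\delta} + \tfrac{\delta}{2}:
\qquad
|x - c| = \tfrac{\delta}{2} < \delta,
\qquad
x^{2} - c^{2} = (x + c)(x - c)
= \Big(\tfrac{2}{\delta} + \tfrac{\delta}{2}\Big)\tfrac{\delta}{2}
= 1 + \tfrac{\delta^{2}}{4} \ \ge\ 1 .
```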
Then f is continuous on a, b, meaning it's continuous at every point, if and only if f is uniformly continuous on a, b. And the necessity of working on a closed and bounded interval is seen again by these examples we had here, of f of x equals 1 over x on 0, 1 and f of x equals x squared on R. But as long as we're looking on a closed and bounded interval, continuity is equivalent to uniform continuity. Now, this is also not the sharpest thing you can say. In fact, what you can say is that if I replace this with what's called a compact set, which I may put in the assignment, I may not, then the statement is still true. And that's in some sense the sharpest statement. But I think this is interesting enough to just work on a closed and bounded interval. So let's prove this. Uniform continuity implying continuity I leave to you as an exercise. And I will do the more difficult one, and we'll use kind of a philosophy like we did in the previous lecture, where we use this theorem about continuity in terms of sequences and also the Bolzano-Weierstrass theorem. Good. We still have the negation of the definition of uniform continuity, and that's what we'll need. So we're going to prove f continuous implies uniformly continuous, and we're going to do this by contradiction. So suppose f is continuous on a, b, but not uniformly continuous. And then we're going to break the universe some way, which shows that our assumption that f is not uniformly continuous does not hold. So we use this negation of the definition of uniform continuity, which says that there exists an epsilon 0. So let me write this out. I'm just going to rewrite this here: there exists an epsilon 0 positive, such that for all delta positive, there exist x, c in a, b, which depend on delta, such that x minus c is less than delta and f of x minus f of c is greater than or equal to epsilon 0. Now, this holds for every delta. You can find x and c so that x minus c is less than delta and f of x minus f of c is greater than or equal to epsilon 0. So let's choose delta to be 1 over n for each natural number n. Then for all natural numbers n, there exist x sub n, c sub n in a, b, such that--so this is the statement now with delta equals 1 over n--x sub n minus c sub n is less than 1 over n, and f of x sub n minus f of c sub n is bigger than or equal to epsilon 0. Now, I have two sequences here, and they're bounded because they're in this closed and bounded interval. So I can pass to a subsequence of one of them to start, by the Bolzano-Weierstrass theorem--not B-Z, Bolzano-Weierstrass; I think somewhere in this lecture or the previous one I wrote B-Z, but I meant B-W, Bolzano-Weierstrass. There exists a subsequence x sub n sub k of x sub n and an x in a, b, such that the limit as k goes to infinity of x sub n sub k equals x. So, Bolzano-Weierstrass: from any bounded sequence we can find a convergent subsequence. And because this subsequence is always between a and b, the limit x will be between a and b. Now, the subsequence c sub n sub k is bounded between a and b--so I have a subsequence x sub n sub k, and I then obtain a subsequence of the c sub n by just picking the same indices. And that subsequence of the c sub n is all coming from a, b. So it's bounded by a, b. So it's bounded.
So by Bolzano-Weierstrass applied now to this subsequence, there exists a subsequence c sub n sub k sub j of c sub n sub k, and an element c in a, b, such that the limit as j goes to infinity of c sub n sub k sub j equals c. So I found a subsequence of the x sub n's, and now I look at a corresponding subsequence of the c sub n's, where the indices n sub k sub j are the ones chosen for this further subsequence. That's still a bounded sequence, so by Bolzano-Weierstrass I can take a subsequence of that and an element c so that I have convergence. So now let me just summarize. In summary, the sequences x sub n sub k sub j--so now this is even a subsequence of x sub n sub k--and c sub n sub k sub j are subsequences of the original sequences x sub n and c sub n, and there are x and c in a, b such that the limit as j goes to infinity of x sub n sub k sub j equals x, and the limit as j goes to infinity of c sub n sub k sub j equals c. Now, I claim x equals c. Now, 0 is certainly less than or equal to the absolute value of x sub n sub k sub j minus c sub n sub k sub j, which, by how we've defined these sequences, is less than or equal to 1 over n sub k sub j. And, by the definition of subsequences, n sub k sub j is always bigger than or equal to j. So this is always less than or equal to 1 over j. This is just a consequence of how subsequences are defined: you always have to move to the right after you've made a choice of entry in the sequence that you're going to take as an element of your subsequence. So this is bounded below by 0 and bounded above by 1 over j, which converges to 0. So the middle thing has to converge to 0 by the squeeze theorem. But of these two subsequences, one converges to x and the other converges to c. And so the absolute value of x minus c equals 0, and therefore x equals c. Now, we're almost home free. The limit as j goes to infinity of x sub n sub k sub j equals c, and the limit as j goes to infinity of c sub n sub k sub j equals c. Now, where's the kicker? This. So these are subsequences of x sub n and c sub n, but each of these subsequences is converging to the number c. So by the theorem about continuity in terms of sequences, since f is continuous at c, this implies that 0 equals--let me write it this way--f of c minus f of c, which equals the limit as j goes to infinity of the absolute value of f of x sub n sub k sub j minus f of c sub n sub k sub j, which is bigger than or equal to epsilon 0. All of these terms are supposed to be bigger than or equal to some fixed epsilon 0, which is positive. But we've just shown the limit is equal to 0. And that's the contradiction. Thus, f is uniformly continuous on a, b. So that concludes what we'll say about continuity. And as you'll note, going forward we're going to cover topics faster. That's just because we have more machinery to work with. At the start it was very slow going, because, one, it's the start of your first proof-based class, so I'm going to go slow; and two, we didn't have much to work with. We just had that the real numbers are an ordered field with the least upper bound property. But now we've covered sequences and continuity, we know more, and it'll be easier to prove new things. Also, hopefully, familiarity with proofs is getting better, so that my arguments can be a little bit shorter and not so involved. So we're moving on to something new now. This is supposed to be a course about calculus. We've covered limits and continuity. The next big topic has to be differentiability.
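To recap the proof just finished in cleaner notation--a summary, not spoken in the lecture:

```latex
\text{From the negation: for each } n,\ \exists\, x_{n}, c_{n} \in [a,b] \text{ with } 
|x_{n} - c_{n}| < \tfrac{1}{n},\ \ |f(x_{n}) - f(c_{n})| \ge \varepsilon_{0}.
\\[4pt]
\text{Bolzano--Weierstrass (applied twice) gives } x_{n_{k_j}} \to x,\ c_{n_{k_j}} \to c \text{ in } [a,b],
\text{ and } |x - c| \le \lim_{j \to \infty} \tfrac{1}{n_{k_j}} = 0, \text{ so } x = c.
\\[4pt]
\text{Continuity at } c \text{ then forces }
0 = |f(c) - f(c)| = \lim_{j \to \infty} \big| f(x_{n_{k_j}}) - f(c_{n_{k_j}}) \big| \ \ge\ \varepsilon_{0} > 0,
\text{ a contradiction.}
```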
So this is a new chapter, about the derivative. So let me first give you the precise definition. So first off, I'll be saying: let I be an interval, f goes from I to R, and c an element of I. Now, when I say an interval, I don't mean it has to be closed and bounded like the ones we've been working on. I don't mean it has to be open. It doesn't have to be a bounded interval like from 0 to 1. It could be 0 to infinity. It can include the endpoints--except, of course, infinity. But I think you know what I mean, and I don't have to write out exactly what an interval is. So let I be an interval, f be a function from I to R, and c a point of I. So first off, convince yourselves of this, but I think it should be pretty clear: for an interval, every point in the interval is a cluster point of the interval. This is not too difficult to prove, and I will not require you to prove it, but I'm saying this because I'm about to define a limit at c. So c had better be a cluster point of the set that this limit is defined using, which is I take away c. And if you don't like that statement, just forget I even wrote it and we'll continue on. We say f is differentiable at c--so this is the new bit of terminology--if the limit as x goes to c of f of x minus f of c over x minus c exists. If this limit exists, we say the function is differentiable at c, and we use the notation f prime of c--f with an apostrophe--for this limit. If f is differentiable at every point of the interval I, we get a new function, c maps to f prime of c. That function is the derivative of f, which we denote f prime or df dx. So the simplest guys that are differentiable are, of course, polynomials. I think this is one of the first things you learn back in calculus. And I'm not going to go through what this number is supposed to represent. You know this from calculus: the best way to say what this number represents is the instantaneous rate of change of the function f at this point. How is f changing at that point, in the sense of increasing, decreasing, and so on? Or if you like, you interpret this as being the slope of the line that is perfectly tangent to the graph of the function f. But what does it mean to be tangent to the graph of a function? That takes a bit of explaining. So I'm not going to get into what the actual interpretation of this number is. I think you have a pretty good idea of what that is. We're just going to start proving some properties of it. So the simplest example of a function which is differentiable is that of a monomial, f of x equals x to the n. Let's even put a number in front. So let's write it this way. Let alpha be in R and n a natural number (or 0). Then the function f of x equals alpha times x to the n is differentiable. I should have said here as well: if a function is differentiable at every point where it's defined--at every point in its domain I--then we just say the function is differentiable. So this monomial is differentiable, and for all c, f prime of c equals n times alpha times c to the n minus 1. So let's give a proof of that, and we'll use this kind of simple formula. So let's take a look at x minus c times this kind of weird-looking sum: the sum from j equals 0 to n minus 1 of x to the n minus 1 minus j times c to the j. So multiplying the x through, I get the sum from j equals 0 to n minus 1 of x to the n minus j times c to the j, minus--and I'm not going to use j again in the sum when I carry this c through.
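The "weird-looking sum" the proof is about to use is the standard telescoping identity; here it is written out, in the notation just introduced:

```latex
(x - c)\sum_{j=0}^{n-1} x^{\,n-1-j} c^{\,j}
= \sum_{j=0}^{n-1} x^{\,n-j} c^{\,j} - \sum_{j=1}^{n} x^{\,n-j} c^{\,j}
= x^{n} - c^{n}.
```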
I'm going to use l: the sum from l equals 0 to n minus 1 of x to the n minus 1 minus l times c to the l plus 1. And now I'm going to make a change of variables in this second sum and let j equal l plus 1. So in that case, when l equals 0, j starts at 1. When l hits n minus 1, j equals n. So this equals the sum from j equals 0 to n minus 1 of x to the n minus j times c to the j, minus the sum from j equals 1 to n of x to the n minus j times c to the j. So it's the same terms in the sums, except starting and ending at different places. So what doesn't get killed here is the j equals 0 term of the first sum, because the second one starts at j equals 1; and the second one ends at n, while the first ends at n minus 1, so the j equals n term of the second sum also survives. So the only thing that survives from the first sum is the j equals 0 term, x to the n minus 0 times c to the 0, minus--and from the second sum everything gets killed except the j equals n term, x to the n minus n times c to the n. So we get x to the n minus c to the n. So this is therefore equal to that, and we use this to compute the limit. So then f prime of c is equal to the limit as x goes to c of alpha times x to the n minus alpha times c to the n, all over x minus c. Now, alpha is just a number; it can pop out. And x to the n minus c to the n divided by x minus c is this sum we just computed. So this is equal to alpha times the limit as x goes to c of the sum from j equals 0 to n minus 1 of x to the n minus 1 minus j times c to the j. Now, this is just a polynomial in x. Polynomials are continuous functions. So the limit as x goes to c is just to plug in c for x. And so this equals alpha times the sum from j equals 0 to n minus 1 of c to the n minus 1 minus j times c to the j. Now c to the n minus 1 minus j times c to the j gives me c to the n minus 1. I'm summing in j; there's no j left in that, so it pops out, and what's left is the sum from j equals 0 to n minus 1 of 1. There are n of those 1's, so that sum is just n, and we get n times alpha times c to the n minus 1. And that's the proof. So I'm sure you saw that at some point. Next time we'll give an example of a function which is not differentiable at a point. Of course, I think you probably already know this one: f of x equals the absolute value of x. If there's time at some point, I'll go over an example of a function which was very surprising to people at one point. So now that we have differentiability and continuity on the board: for a long time, people--long since dead--believed that if you have a continuous function, any continuous function, then there has to be at least one point where it's differentiable. And people tried to prove that for a long time, and they couldn't, because it's false. Weierstrass, who was, again, kind of the godfather of everything we're doing, came up with a whole class of examples of functions that are continuous but are differentiable nowhere. They're not differentiable at a single point. And perhaps, depending on time, we'll go over this example. And I think I'll stop there.
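For reference, the whole computation for the monomial in one line, together with a brief look ahead at the absolute value example mentioned above (the preview is an editor's addition, not worked in this lecture):

```latex
f'(c) = \lim_{x \to c} \frac{\alpha x^{n} - \alpha c^{n}}{x - c}
= \alpha \lim_{x \to c} \sum_{j=0}^{n-1} x^{\,n-1-j} c^{\,j}
= \alpha \sum_{j=0}^{n-1} c^{\,n-1}
= n\,\alpha\, c^{\,n-1}.
\\[4pt]
\text{Preview: for } f(x) = |x| \text{ at } c = 0,\ 
\frac{|x| - |0|}{x - 0} = \operatorname{sgn}(x) \in \{-1, +1\},
\text{ so the limit as } x \to 0 \text{ does not exist.}
```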
MIT_18100A_Real_Analysis_Fall_2020
Lecture_13_Limits_of_Functions.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: So, all right, so let's start talking about some math. So there was a last result I was going to prove for series, which is that if you have an absolutely convergent series, and you rearrange it, then that series is also absolutely convergent and converges to the same thing as the unarranged series. But I'm going to leave that to the lecture notes, and we're just going to start on a new topic. So the topic that we're now on is continuous functions. And so continuous functions, as we'll see, is a statement about how a function behaves near a point in relation to how it behaves at the point. Now, what it means for a function to behave near a point-- that's the notion of the limit of the function, which is what we'll first discuss. OK? Now, at what points are we going to be looking at? So this is where limits take place. And somehow, we want to look at a function near points where there's a set nearby to actually look at f at. So f is usually defined on some set S, and we want to look at points so that there's a lot of S nearby. And we kind of already dealt with these types of points when we encountered cluster points in the assignment. So this is exactly where we start looking at limits. So this is a definition that I'll recall from the assignment. Let S be a subset of R and x and R. We say x is a cluster point of S if, for all delta positive, the interval x minus delta, x plus delta intersect S take away x is non-empty, OK? So the new terminology-- not so new since we dealt with it in the assignment-- is that of a cluster point. And another way to say this is that-- so an equivalent way of stating this is that, for all delta positive, there exists, let's say, a y in S such that 0 is bigger than x minus y is less than delta, OK? So what this means is that, as far as the picture goes, x is a cluster point if, when I ever take any small interval about it, I can find some S in there other than possibly x. And this should be able to be done for every interval. So let's take a look at some examples. Let's say we take S to be 1/n in a natural number, OK? And so since now I actually have people in front of me, I can actually ask questions. So whoever first says it, what would be a cluster point of this set S? So what would be a point in the real numbers So that there's a lot of S near it? Feel free to blurt it out if it comes to you. Why don't you just take a guess? 1? So that's one possibility. So there has to be a lot of S near this proposed cluster point. So let me mark out the points that we have. There's a-- I'll put 0, and then 1/2 is there, and then 1/3, 1/4, 1/5, 1/6, so on, and so on. And so I see a guess of 0. So that's a good guess as well. So why would 1 maybe not be a cluster point? Well, it's because if I draw a little interval around 1, then there's only one point in S in here, and it's the proposed cluster point, 1, OK? So remember, what we should be able to do is find in every small interval [? sum ?] of S in the interval that's not equal to the point, OK? And we can't do that for the point 1. But for 0, if we draw any interval around 0, there's plenty of S in the interval not equal to 0, OK? So here, 0 is a cluster point of S. And let's give a quick proof of this. So we have to verify for every delta positive this interval, 0 minus delta, 0 plus delta intersect S take away 0 is non-empty. So the delta would be positive, and now here's the picture. Here's 0. Here's delta, minus delta. We have to show that there's some S in here. 
So we simply choose natural number n large enough so that 1/n is there, and we can always do this because of the Archimedean property of the real numbers, right? So let delta be positive. Choose natural number n so that 1/n is less than delta, and it's certainly bigger than 0. So then n is a natural number. Then 1/n is in x minus delta, x plus delta intersect S take away 0. So this should have been 0, which implies this set is non-empty because it has something in it, OK? All right, was that example OK? Did you all get that all right? OK, so feel free to ask questions as they come up. So let's do another example. This one, I will not give a full proof of, but we'll just kind of talk our way through it. Let's say S is now the open interval 0, 1. What is the set of cluster points of 0, 1 equal to? So what would be a reasonable guess or any guess? Yeah, so, certainly, this will contain 0, 1, right, since-- so 0, 1, everything between 0 and 1 should be a cluster point of 0 and 1 because there's always a lot of numbers. I mean, there's a lot of numbers between 0 and 1. So if I draw this 0, 1, and [? have ?] everything in between 0 and 1 is certainly a cluster point. What about the point 1? Is that a cluster point of this set? That's a good observation that, yeah, it's a sup of the set. So for every-- right, so for every delta positive, if I draw a little interval around 1, there's going to be a number less than 1 that's bigger than 1 minus delta, right? The set of cluster points of the open interval 0, 1 is the closed interval 0, 1. OK? Now, let's do another example. Let's say we look at now, instead of rational numbers-- so you should, again, think of cluster points as being the set of points where there is a lot of this set nearby, OK? Now, what is the set of cluster points of the rational numbers? OK, so perhaps it will contain the irrational numbers. So let's think about this for a little bit. So, for example, let's look at square root of 2, OK? Perfectly good rational number-- I mean, irrational number. If I draw a little interval around square root of 2, can I find a rational number in there that's not equal to a square root of 2? Right, because we have this theorem about the density of the rational numbers, right, that we have-- so remember that this was a theorem we proved at one point. For all x, y real numbers with x less than y, there exists a rational number r such that x is less than r is less than y, right? So we can always find a rational number here, or we could have taken another rational number here, yeah? So for every interval we draw around square root of 2, we can find a rational in there not equal to square root of 2, obviously. But we can find a rational in that interval, yeah? So this suggests that the set of cluster points contains the set of irrationals. What about rational numbers? Are those also cluster points? Let's say I take 0. Now I look at a small interval around 0. Can I find a rational number in there other than 0? I'm not asking like, actually give me one. I'm just saying, does there exist a rational number in this interval other than 0? AUDIENCE: Yeah. CASEY RODRIGUEZ: By this theorem, right? You can also use the theorem to find irrational in this interval as well. So no matter if it's square root of 2 or any irrational number, no matter if it's a rational number, everything is a cluster point. So the set of cluster points of rational numbers is equal to R, OK? Is that clear? OK, feel free to stop me if you have any questions. 
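Summarizing the three examples just discussed--same conclusions, just collected in one place:

```latex
S = \{\tfrac{1}{n} : n \in \mathbb{N}\}\ \Rightarrow\ \text{cluster points} = \{0\};
\qquad
S = (0,1)\ \Rightarrow\ \text{cluster points} = [0,1];
\qquad
S = \mathbb{Q}\ \Rightarrow\ \text{cluster points} = \mathbb{R}.
```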
So now let's look at, for example, S equal to just a single guy, say, 0, OK? Remember, cluster points of a set means there's a lot of that set nearby. Now, I claim that this guy has no cluster points. Intuitively, why is this? It's because, for something to be a cluster point, it has to have a lot of the set near it--a lot. And there's not a lot of set to begin with. There's just one element. So let's see why this is the case, OK? And this gives us a chance to negate the definition of cluster point, which is always a good idea. OK, so S equals just the singleton set, 0. This has no cluster points. And so the negation of the definition of a cluster point: x is not a cluster point of S means there is some bad delta--remember, the definition of cluster point is a for-all-delta statement, so the negation means there exists one bad delta 0 so that x minus delta 0, x plus delta 0 intersect S take away x equals the empty set, OK? So the picture that goes along with this is that x is not a cluster point if there's some interval where possibly the only S there is the point x, but nothing else, OK? All right, so let's use the negation of this definition to show that there are no cluster points, OK? And we have to deal with two cases. We want to show: if x is in R, then x is not a cluster point of S. And I'll just do the case x not equal to 0. x equal to 0 is much easier, so we'll just do x not equal to 0. Let's prove this. And what's the idea? We have to be able to find--so here's 0. This is all that S is, just this point, 0. Now we have a point x. What would be an interval containing x that contains none of S? So x here is just some nonzero number. S is what's highlighted. The interval around x that doesn't contain any S. So like x/2 to 3x over 2? Meaning delta equals x/2, at least for this picture, yeah? OK, and that's essentially the proof. So we choose delta 0--and we'll simplify this further and suppose x is positive. For x less than 0, you choose delta 0 to be the absolute value of x. But for this one, this is simple enough: choose delta 0 to be x/2, and then x minus delta 0, x plus delta 0 is equal to x/2, 3x over 2. And since 0 is not in this set, this interval intersect S take away x equals the empty set, OK? So we had to find a delta so that this interval around x doesn't contain anything from S, and we've done that by choosing delta 0 to be x/2. OK. So I did it for a set containing just one element, showed that this set has no cluster points. You can also show that if a set has finitely many points, it has no cluster points. And so therefore, any finite set has no cluster points, but being a cluster point has nothing to do with the cardinality. So if you see the example in the notes, which you can think about, and I'm not going to go over right here--but, for example, S equals Z, the set of integers: this has no cluster points, OK? So being a cluster point means, where is this set clustering? So it has nothing necessarily to do with the size of the set, but where the set is taking up space in the real number line. All right, so with the definition of cluster points, let me first state the theorem that, in fact, you proved in the assignment, which is this: let S be a subset of R and x in R. Then x is a cluster point of S if and only if there exists a sequence x sub n of elements of S take away x such that x sub n converges to x, OK? And that was in the assignment. You proved that basically using the squeeze theorem and the definition.
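In symbols, the theorem from the assignment reads:

```latex
\text{For } S \subset \mathbb{R},\ x \in \mathbb{R}:\quad
x \text{ is a cluster point of } S
\iff
\exists\, (x_{n}) \subset S \setminus \{x\} \text{ with } \lim_{n \to \infty} x_{n} = x .
```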
All right, so with the notion of a cluster point, these are the points in a set where we're going to start talking about how a function behaves near a point, which is the notion of a limit of a function is our next definition. Let S be a subset of R. And even though I was referring to cluster points earlier as x, we're now going to switch over to c. Let c be cluster point of S. And let f be a function from S to R, all right? So you say f of x converges to L as x goes to c. So this is the new terminology here. So here, L is a real number if, for every epsilon positive, there exists delta positive such that if x is in S, the absolute value of x minus c is bigger than 0 less than delta, then f of x minus L is less than epsilon. So let's try and compare this definition a little bit to what happens for a sequence. So for a sequence, the sequence is getting close to L as long as we go far enough out in the sequence, as long as we look at terms far enough out. Now, for limits of functions, that going far enough out along the sequence is replaced by, as long as we look close enough to the point c but not at the point c. So if you'd like, this says that if x is near c-- and so this is just an intuitive interpretation of what this definition says. If x is near c, then f of x is near L. And let me also throw in here that we're saying that x is near c but not equal to c. But for a limit, you don't look at what happens at f of c. The function doesn't even need to be defined there to be able to define what a limit is. We just care about what happens nearby, OK? So you could have-- the picture you should have in mind is that here's c, and maybe f is not even defined up to there, but there is f, the graph of the function f there. There's L, and as long as I get very close to c, I'll be very close to L, "very close" being measured within some strip here, OK? OK, so this is the notion of the limit of a function as we approach a cluster point of the set S where the function f is defined. Just a little bit of notation-- typically write like we did with sequences f of x converges to L as f of x arrow L. And then I'll write, as x goes to c. Maybe I won't write as x goes to c. We note this by limit of f of x equals L as well. OK, so first question about limits is, are they unique? And this is, in fact, why we require that c be a cluster point of S because if we didn't require c to be a cluster point of S in this definition, limits would not be unique. In fact, you could have functions converging to whatever you want because the definition would then be vacuous. But if you don't want to think about that, that's fine. It's not a big issue. But I'm just going to point out at one point, we're going to use the fact that c is a cluster point to prove the following theorem. So let c be a cluster point of S, which is a subset of R. And let f be a function from S to R. If f of x converges to L1 as x goes to c, and f of x converges to L2 as x goes to c, then L1 equals L2, OK? So this is what I mean by uniqueness, that a function can only have one limit as it approaches a point c. Let me give the proof. So we're going to play this game again, where instead of showing something as 0 or something is equal to something else directly, we'll give ourselves a little room and prove the following. We'll instead show that while epsilon positive, L1 minus L2 is less than epsilon. So L1 and L2, these are just two fixed real numbers. 
If their absolute value is less than epsilon for arbitrary epsilon, then this is a number which is smaller than any positive real number. That implies that that number is 0, so L1 equals L2, OK? Why we give ourselves a little room is because, in the definition, we have a small parameter there which we hope to use. So let's prove this. Let epsilon be positive. So since f of x converges to L1, there exists-- so just from the definition, given some small tolerance, I can find a little number delta so that if I look inside this interval, I get mapped to the interval L plus epsilon, L minus epsilon. So there exists a delta 1 positive such that if x is in S-- and I'm going to probably keep forgetting to write that, but that should be understood that x is in the domain of the function f. If x minus c is less than delta 1 and bigger than 0, then f of x minus L1 is less than epsilon/2, OK? And the same thing for L2. I mean, if f of x is converging to L2, then there exists a delta 2 so that I have the same statement. So instead, I'm just going to change this 1 to i, i, i, i. And here, i equals 1 too. So since f of x converges to each of these two numbers, L1 and L2, there exists a delta 1 and a delta 2 so that if I'm within this tolerance delta of 1 or delta 2, then f of x minus delta 1-- L1 or L2 is less than epsilon/2. Let delta be the minimum of these two numbers. Now, since c is a cluster point of S, in fact, there does exist an x in S such that 0 is bigger than x minus c is less than delta, all right? And I'm going to use this x to compare L1 and L2. Then since x is in this interval, or I should say, since it's-- so remember I had this, which means I have this inequality or i equals 1, 2. I get f of x, or I should say, L1 minus L2. This is equal to if I add and subtract f of x and now use the triangle inequality and use the inequality that's in green, the f of x minus L1 is less than epsilon/2. And same for L2, I get-- equals epsilon, OK? And that's the end because remember we started off with the absolute value of L1 minus L2. Are there any questions about that? OK, so not the most exciting theorem in the world that this thing that we've defined. A limit is a unique thing for function. But let's start looking at some examples. And at one point, we'll look at the negation of this definition. So let's suppose-- so let's look at the limit as x goes to c of the function ax plus b. And I claim this is equal to ac plus b, OK? So here, S is, although I didn't write it out, this function ax plus b, this is-- I'm looking at it defined on all of R, and c is a real number, OK? So we have to verify this definition. And since it's for every epsilon, there exists something. The beginning of every proof, I'll give you a few points if you can just say, let epsilon be positive. How will we choose delta? Well, you do similar computations as you would if you were doing epsilon m arguments for sequences. So what are after? We want to find delta so that x minus c less than delta and also bigger than 0 implies f of x, which is ax plus b, minus ac plus b is less than epsilon. So let's start off with this thing, play with it, and see if we can find how to choose delta. Remember, we don't solve this inequality. We want to find how to choose delta based on estimating this thing and trying to make it less than epsilon. So f of x minus ac plus b, absolute value-- this is equal to if you just plug in f of x equals ax plus b, this equals a times x minus c, which equals absolute value of a times x minus c. 
And now we're in business, because the thing that we do have control over is the delta, right? It controls how big x minus c is. So this is less than the absolute value of a times delta, if x minus c is less than delta. And remember, the thing that we want in the end is the thing in yellow, and the thing we get to control is the thing in green. So if I want the thing in yellow, and I have the thing in green, how do I choose delta? So in the end, I want f of x minus the quantity ac plus b to be less than epsilon. Right now, I'm at the absolute value of a times delta. Delta is the thing I get to play with. So how do I choose it? Delta equals epsilon over the absolute value of a, right? Because if I choose delta this way, then x minus c less than delta implies that, just by this computation here, I would get epsilon in the end, yeah? So that's kind of the scratch work that goes into it. Again, delta's the thing you get to choose, and you get to choose it based on epsilon. And so you look at f of x minus the quantity ac plus b. You estimate it to be no bigger than the absolute value of a times x minus c, which is less than the absolute value of a times delta. And so if you want that thing to be less than epsilon, you should choose, for example, delta to be epsilon over the absolute value of a. There's not a unique choice. I could have chosen delta to be epsilon over 2 times the absolute value of a, but this is a choice, OK? So let epsilon be positive. Choose delta to be epsilon over the absolute value of a. If x minus c is less than delta, then--and when you write the proof, it's essentially a rehashing of the best parts of your computation--f of x minus the quantity a times c plus b is equal to the absolute value of a times x minus c. So if we take f of x and subtract our proposed limit, ac plus b, we get the absolute value of a times the absolute value of x minus c, which is less than the absolute value of a times delta, which equals epsilon, since we chose delta to be epsilon over the absolute value of a, OK? That's the proof. Are there any questions? OK, so let's do another example. Let's look at the limit as x goes to c of the function square root of x. And I'm going to show this is equal to the square root of c. And here, although I'm not writing it, S will be, for us, the domain of the square root, which is the closed interval 0, infinity, not including infinity, of course. And for what I'm going to do, c is positive, OK? You can also do c equal to 0, but for this proof, I'm going to do c positive. All right, so I have to verify this definition, which means for every epsilon, I can find a delta. So let epsilon be positive. Let's go over to our box of scribbling and scratching. So remember, our goal is to find a delta which I can choose so that f of x minus square root of c, in this case, is less than epsilon. So if I look at f of x minus square root of c, this is square root of x minus square root of c. And perhaps you remember from your days in calculus that whenever you see the difference of two square roots, it's a good idea to multiply the top and bottom--or, I guess, multiply by 1 in a very special way--so that I get a difference of squares. So this is equal to the square root of x minus square root of c, times square root of x plus square root of c, all over square root of x plus square root of c, OK? And so now, on the top, I have the product a minus b times a plus b. So that's the difference of squares, which gives me x minus c on top, over square root of x plus square root of c, OK? And remember, this x minus c is less than the thing that we're going to choose in the end.
The absolute value of x minus c is less than delta, so this is less than delta over square root of x plus square root of c. Now, in the end, you can only choose delta depending on epsilon and maybe the point c, OK? You cannot choose it to be dependent on the point x, which is changing, right? What maybe you're tempted to do here is to then choose delta to be square root of x plus square root of c times epsilon. Don't do that, because delta, again, only depends on the point c and on epsilon. It cannot depend on the thing that's changing, which is x, right? This is changing. So you don't make this choice. Now, if we just do one more thing, then we'll get to something where we can choose delta independent of the thing that's changing, x. The square root of x on the bottom is only making things smaller, because that's non-negative. So this is less than or equal to delta over square root of c, OK? So now, in the end, I have something which is delta and depends on the point c, and I want f of x minus square root of c to be less than epsilon. So how should I choose delta? Shout it if you know it. Right, epsilon times the square root of c. So now this will be our choice. So choose delta to be epsilon times square root of c. Now we have to show this delta works, which is essentially, again, just rehashing our computation over on the right, which told us how to choose delta. Then, if the absolute value of x minus c is less than delta, f of x minus square root of c--and I'm not going to write f of x, I'm going to write the square root of x--is equal to basically what we did over here in the box, which is the absolute value of x minus c over square root of x plus square root of c, which is less than or equal to, if I take away the square root of x on the bottom like we did before, delta over square root of c, which equals epsilon square root of c over square root of c, which equals epsilon. So this delta works. OK, are there any questions about that example? OK. So let's do one more example, which really illustrates what I was getting at when I was talking about how the limit only cares about what the function is doing near a point, but not at the point. So let's say we look at the function f of x given by 1, if x equals 0, and 2, if x does not equal 0. So this is--here's the point 0. And where it's not equal to 0, it's equal to 2, and when x equals 0, it's equal to 1, all right? And I claim that the limit as x goes to 0 of f of x equals 2. And the thing to note is that this limit does not equal f of 0, which it did for the previous two cases, right? If we look back at these two examples that we did a minute ago, the limit as x goes to c in both of these examples was just plugging c into the function, right? But for this example, that's not the case. The limit is not equal to the function evaluated at the point, highlighting that limits don't care about the function evaluated at the point. They care about what's happening near the point, OK? And near the point, f of x is just equal to 2 identically. So I'll give you the quick proof, or you can completely ignore it and just believe that the limit as x goes to 0 of this function equals 2. So the point is that for x not equal to 0, f of x is just the constant 2. So let epsilon be positive. And you can choose delta to be whatever you like, because f of x when x is not equal to 0 is going to be equal to 2, the proposed limit.
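Collecting the epsilon-delta bookkeeping from the examples so far--a reconstruction of the board work; the remark about a equal to 0 is an added caveat, not raised in the lecture:

```latex
f(x) = ax + b,\ a \ne 0,\ \delta = \tfrac{\varepsilon}{|a|}:\qquad
|f(x) - (ac + b)| = |a|\,|x - c| < |a|\,\delta = \varepsilon
\quad (\text{if } a = 0,\ f \text{ is constant and any } \delta \text{ works}).
\\[4pt]
f(x) = \sqrt{x},\ c > 0,\ \delta = \varepsilon\sqrt{c}:\qquad
|\sqrt{x} - \sqrt{c}\,| = \frac{|x - c|}{\sqrt{x} + \sqrt{c}} \le \frac{|x - c|}{\sqrt{c}} < \frac{\delta}{\sqrt{c}} = \varepsilon .
\\[4pt]
f(x) = \begin{cases} 1, & x = 0 \\ 2, & x \ne 0 \end{cases}
\qquad \lim_{x \to 0} f(x) = 2 \ne 1 = f(0).
```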
Then if 0 is bigger than the absolute value of x minus 0, which is just x, is less than delta, then this implies x is not equal to 0. And therefore, if I look at f of x minus 2, this is just 2 minus 2 equals 0, which is less than epsilon, OK? Any questions about this example? No questions? All right, so what we'll talk about now is-- so before this section on continuous functions, or at least this section on limits of functions, we had the notion of limits of sequences. And so a natural question is, how do limits of functions and limits of sequences belong together, interact, relate to each other, I guess, is the best way to phrase that question. And the point is that to decide if a function has a limit it, suffices to stick in sequences that converge to that point that you're looking at. And that's the content of the next theorem. So let S be a subset of R, c cluster point of S. Let f be a function from S to R. Then the limit exists. Limit as x goes to c of f of x equals a number L. This is equivalent to the statement that for every sequence x sub n of elements of S take away c such that xn converges to c, we have f of xn converges to L, OK? So again, the statement of this theorem is that a function converges to L if and only if for every sequence I stick into the function which converges to c, f of x then converges to L. So to choose or to decide if a function converges, it suffices to look at convergence of certain sequences, OK? OK. And why is this theorem important, or at least very useful? It's because with this theorem, we now get basically every analog of theorems we proved for sequences for free, for example, limits of sums of functions, a squeeze theorem that can be stated for limits of functions, and so on. And I'll say a few comments about this after we proved the theorem. All right, so let's-- this is a two-way street. And I'm going to mark the second statement in green. That way, I don't have to rewrite it again. So first, assume that limit at x goes to c of f of x equals L, and now we want to prove the statement in green, OK? So here's the idea. So now we want to show-- so let xn be a sequence of elements in S take away c such that x sub n converges to c. And what we want to show is that f of xn, which is now a new sequence converges to L as n goes to infinity. OK, so here, you should write, as n goes to infinity, in here, OK? So let me move that, and let me draw the picture of why you should believe this. So remember, what it means for a sequence to converge to a number means, for every epsilon, there exists a capital M so that for all n bigger than or equal to capital M, the sequence is close to the number L within epsilon distance to it. So let's think about this a little bit. I'm not going to draw the graph. I'm just going to draw two copies of the number line. Here, we have the number L. Here's the number c and L plus epsilon, L minus epsilon. So let epsilon be positive. And the idea is we want to be able to find or say there exists a natural number capital M so that n bigger than or equal to capital M implies f of xn is inside this interval here. Now, since f of x converges to L as x converges to c, there exists a delta so that-- so this is the picture that goes along with it-- so that if I'm inside this interval here, then I get mapped into this interval here. So this is what I get from the assumption. So then how I should choose the natural number capital M is so that all of the x sub n's lie in this interval, c minus delta, c plus delta. 
And if I can arrange that, which I can because the x sub n's converge to c, then I will get that these guys get mapped into this interval, L plus epsilon, L minus epsilon, OK? And that's the whole idea of the proof. Now we just need to write it down. Since f of x converges to L, exists a delta positive such that if less than delta, then if x minus L is less than epsilon. That's what I drew in yellow there. Now, since the sequence x sub n equals c, this implies that there exists some natural number M sub 0 such that little n bigger than or equal to M sub 0-- since all of these x sub n's are not equal to c, by assumption, they're in the set S take away c. X sub n minus c, and absolute value is bigger than 0 and less than delta, OK? This follows from the definition of convergence of sequences to a point. Here, if you like, I'm choosing epsilon in that definition to be delta. And this delta is coming from the definition for convergence of functions. And so we'll choose capital M to be this integer M sub 0. And if n is bigger than or equal to M, I get that, by our choice of it being M sub 0, you get that x sub n minus c is less than delta. That means that it falls in this little yellow highlighted part in the first number line, which implies-- so here, what I'm using is what's highlighted in blue. So this is content of this implication-- implies that f of xn minus L is less than epsilon, all right? And that's what we wanted to do, right? We wanted to show that there exists some natural number M so that n bigger than or equal to M implies f of x minus L is less than epsilon in absolute value. And therefore, we've proved that the limit as n goes to infinity of the sequence f of xn equals L. OK, so that's one direction of this if-and-only-if statement. Let's prove the opposite direction. So suppose what's highlighted in green holds. OK? And this proof is going to be by contradiction. So let's suppose that our assumption, what's in green, holds. But the limit as x goes to c of f of x does not equal L. And now we're going to do the proof by contradiction, meaning suppose the outcome is false. OK? We're still assuming what's in green holds. That's our assumption. And for proof by contradiction, we assume that the outcome or the conclusion is false, which is that the limit as x goes to c of f of x does not equal L. Now, what does this mean? So now we get to negate the definition and get better acquainted with it. This means that-- so the definition of limit, if you go back to the notes is, for all epsilon, there exists a delta statement. So the negation is that there exists some bad epsilon, epsilon 0, such that for all delta positive, there exists an x such that x is close to c, but f of x is far away from L. OK? So the negation of the definition of limit of something being the limit is that there exists a bad epsilon so that no matter how close I get to c, I can find something very close to c that's far away from L if I stick it into f, OK? That's the intuitive way of viewing the negation of this definition. OK, so now I'm going to apply this with delta chosen to be 1/n for in a natural number, OK? And this says-- so this statement here is a for-all-delta statement. So that holds for every delta I pick. So for all n, a natural number, there exists an x sub n such that 0 is bigger than x sub n minus c is less than delta-- I'm going to pick to be 1/n-- and f of x sub n minus L is bigger than or equal to epsilon 0. So first off, I made a for-all-n statement. Let's think about this for just a second. 
So certainly, for delta equals 1, I can find a number x sub 1 satisfying 0 is less than x minus c is less than 1, and f of x minus f of x sub 1 minus L is bigger than or equal to epsilon 0. But delta can be anything I choose, so for delta equal 1/2, I can find an x sub 2 such that x sub 2 minus c is less than 1/2, and f of x minus L is bigger than or equal to epsilon 0. So I'm just choosing delta to be 1/n for each n, a natural number. Choosing delta to be 1/n, OK? Is this point clear? OK, please stop me if you have questions and something is not clear. OK, so I have this sequence of elements of S that are not equal to c because they are an absolute value bigger than 0, or the absolute value of x sub n minus c is bigger than 0. And they satisfy these two inequalities, OK? Then let me just write this inequality again. So then the inequality that I have is, for all n, the natural number 0 is less than x sub n minus c is less than 1/n. I claim that the x sub n's must then converge to c. What tool can I use to say that? Squeeze theorem, OK? Because this thing on the left, 0-- this, if you like, is a sequence, just a constant sequence. This converges to 0. And 1/n, something on the right, converges to 0 as n goes to infinity. So that implies the thing that's sandwiched in between gets squeezed to 0. So that implies limit as n goes to infinity of x sub n minus c equals 0, i.e., xn's converge to c. Now, I'm assuming what's in green holds, which is, if I take any sequence converging to c, f of xn has to converge to L, OK? So by what's in green has given me this implication-- 0 must be equal to limit as n goes to infinity of f of x sub n minus L. But What. I know is that, since I've assumed f of x does not converge to L, along this sequence, each of these guys is bigger than or equal to epsilon 0, which is positive. So now I've concluded that 0 is bigger than 0. That's a contradiction, OK? OK, so next time, we'll talk about the applications of this theorem. Do you all have any questions about this proof or anything that we've covered so far today? You mean how you get the choice of delta from doing-- AUDIENCE: Yeah. CASEY RODRIGUEZ: Right. So let me phrase this a-- Yeah, I mean, so-- and probably in most classes, you would just be given this proof that's over here on the left. And somehow it looks like magic that I chose this delta this way, and it ends up working, right? But the thinking that goes in behind it is-- so what's the thing you want? You want what's in yellow. So let me erase some of what's in yellow. You want to choose delta so that if you take f of x minus ac plus b, that thing in absolute value will be less than epsilon, OK? So start with that thing that you want to make small and that you want to ensure is small, f of x minus ac plus b, and start computing. And when you compute it, you get what we got here in the second bit of yellow. And we're assuming that the absolute value of x minus c is less than delta, right? We're trying to choose delta so that what's in green implies what's in yellow. So far, we haven't chosen delta. We just know that the absolute value of x minus c will be less than delta. And we want to choose delta so that that implies what's in yellow after the arrow there. So when we do this computation, and assuming that the absolute value of x minus c is less than delta, we get this thing right here, the absolute value of a time delta. This is just assuming that the absolute value of x minus c is less than delta, which is what will it be assuming. 
And we want what's in yellow, which is that thing to be less than epsilon. So what we've arrived at is this thing here, which we want less than epsilon, is less than this thing in red, OK? And therefore, we want to choose the delta so that the thing in red is less than epsilon or less than or equal to epsilon. That will ensure that f of x minus ac plus b in absolute value will be less than epsilon, OK? So I'm not solving an inequality. I'm not solving the inequality I want, which is f of x minus ac plus b. I'm starting with what's on the left, estimating it using my assumptions, and then, at the end, choosing delta so that I come up with epsilon in the end. So in the last step, I have absolute value of a times delta. And I want that to be less than epsilon. So I could have chosen delta to be less than or equal to epsilon/a. So anything, any delta like that would have worked. So I could have put delta to be epsilon over 2a. That still would have worked, or 3a, OK? Yes, [? what ?] [? they're ?] green. That's the thing we're doing, thing we get to use in this computation. Are there any other questions? This is a style thing, all right? So throughout this entire semester, for all epsilon, you have to be able to find the M so that something happens, right? So at some point, you always say, choose M to be something, right? Maybe it's the maximum of some M0, M1 that you had. Or choose M to be so that 1/M is less than epsilon or something like that. So I'm just sticking with that style of saying, choose M to be this. Otherwise, I could have said, there exists a capital number M so that for all n bigger than or equal to capital M, x sub n minus c is less than delta, and then move to the next part. But throughout the course, I've always, at least in the proofs I presented, you're making a choice of capital M, right? It's for all epsilon, there exists a capital M. So that means you have to tell me how to choose capital M. And it's the same thing with these epsilon delta proofs is that, for all epsilon, there exists a delta. So at some point, you have to tell me how to choose delta in the proof. So I've just kind of stuck with that style of saying, choose M to be something, although you're absolutely right that I could have just said, for all n bigger than or equal to capital M0, I have what I want. And therefore, implicitly, I'm saying that M equals M0 is the thing that works. Yeah, that's the same delta. Yeah, that's the same delta. Mhm.
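To close out the lecture, here is the main theorem proved above, stated once more in clean notation--an editor's summary of the sequential characterization of limits:

```latex
\text{Let } S \subset \mathbb{R},\ c \text{ a cluster point of } S,\ f : S \to \mathbb{R}. \text{ Then }
\lim_{x \to c} f(x) = L
\iff
\text{for every sequence } (x_{n}) \subset S \setminus \{c\} \text{ with } x_{n} \to c,\ \ f(x_{n}) \to L .
```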
MIT_18100A_Real_Analysis_Fall_2020
Lecture_9_Limsup_Liminf_and_the_BolzanoWeierstrass_Theorem.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: All right, so we proved these two theorems last time, and we used them for-- and we had a couple of applications of them. So the first theorem, simple theorem, was that a sequence converges to x if and only if the limit as n goes to infinity of the absolute value of xn minus x goes to 0. And then we also had the squeeze theorem, that if you have two-- if you have three sequences-- a sub n, b sub n, and x sub n-- so that x sub n is in between a sub n and b sub n, and the a sub n and b sub n converge to the same thing, then the sequence x sub n converges, and it converges to the [? common ?] limit of a sub n and b sub n. So x sub n gets squeezed in between a sub n and b sub n. Last time we used this to prove a kind of special limit that this equals 0 if the absolute value of c is less than 1. So I had proved it for c positive, and maybe c less than 1-- but same proof works with absolute values, just because the absolute value of c to the n equals the absolute value of c raised to the n-th power. So let's use this to do a few more special limits if you like. But first, I'm going to state the binomial theorem. I'm not going to prove it, but it's a simple exercise using induction. And the binomial theorem says that [? for all ?] [? N, the ?] natural numbers, x, y in R, x plus y raised to the n is equal to the sum from k equals 0 to n of n choose k, x to the n minus k y to the k. And here n choose k-- this is equal to n factorial over k factorial divided by n minus-- times n minus k factorial. OK. So again, you can prove this by inducting on n. All right. OK. We have a theorem, so we'll prove a few special limits. I have a real number that's positive. Then the limit as n goes to infinity of 1 over n the p equals 0. If p is positive, then limit as n goes to infinity of p to the 1 over n equals 1. And the third is-- it's just that a certain limit exists-- limit as n goes to infinity of n to the 1 over n equals 1. OK? Let me make a small comment here. So far in our discussion of the real numbers, we've only defined what it means to take a real number to an integer power, but-- and n-th roots. So we have n-th powers and n-th roots. So using that, we can then define how to take a real number to a rational power-- although there needs to be something that's checked to make sure this is well-defined, because you can always write a rational number not uniquely as one integer over another. So we can define a positive real number to a rational power, but using really only the elementary facts about the real numbers, the fact that it has the least upper bound property, and the fact that the rationales are somehow dense in the real numbers, we can then define what it means to take a positive real number to a positive real number power. OK? All of that is just to say that to define a positive real number to a positive real number or to a real number of power doesn't require the introduction of the exponential or logarithm-- although they, in the end too-- they both agree, once you do have the exponential and the logarithm. So all of that is just to say that what we've done up to now-- these things actually do make sense. You don't need the exponential and the logarithm to make sense of a positive real number to a real number of power. OK? 
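Before the proofs, here is a rough numerical illustration of the three limits just stated; tabulating a few terms proves nothing, and the particular value of p is arbitrary, but it shows the behavior the theorem claims.

    # Tabulate 1/n**p, p**(1/n), and n**(1/n) for growing n.
    p = 0.5   # any fixed p > 0
    for n in [10, 100, 10_000, 1_000_000]:
        print(n, 1 / n**p, p**(1 / n), n**(1 / n))
    # Expected behavior: 1/n**p -> 0, p**(1/n) -> 1, n**(1/n) -> 1.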
But we're just going to use the basic properties of exponents throughout all this, so we don't-- we haven't even talked about continuity, or derivatives, or anything like that, so we'll just use elementary means to be able to prove these statements. OK? So for the first one, we'll prove this actually just using the definition of the limit, which, remember, means for every epsilon positive there, we should be able to find a capital number M, such that if n is bigger than or equal to capital M, 1 over n to the p is less than epsilon. So let epsilon be positive. Choose M, a natural number, such that M is greater than or equal to-- is greater than 1 over epsilon to the 1 over p. OK? And if n is bigger than or equal to M, 1 over n to the p minus 0, which equals 1 over n to the p-- this is less than or equal to 1 over capital M to the p. So here, again, I'm using-- an elementary fact that we all know is that, if I have a positive power here, then-- and little n is bigger than or equal to capital M then into the p would be bigger than or equal to capital M to the p. We do know that for integer exponents, but just trust me that one can define non-integer exponents, and that inequality remains true as long as the power that you're using is positive. So I have 1 over n to the p is less than or equal to 1 over capital M to the p. And by our choice of capital M, this is less than epsilon. So that proves number one. So number two we'll do-- we really only need to do two cases. One is p. So p equals 1 is fine. This is clear, because then I just get 1 for the whole sequence. So that's one case. Now let me do p bigger than 1. OK. And if p is bigger than 1, the absolute value of p to the 1 over n minus 1-- this is just p to the 1 over n minus 1. And so I want to show that this quantity here goes to 0. OK? So we have an inequality which we proved actually in the second lecture, I think, using induction, but you can actually get from binomial theorem as well. So let me just recall this right here, that we had this inequality that, if x is bigger than or equal to minus 1, then 1 plus x raised to the n is bigger than or equal to 1 plus nx. OK? And so we use this inequality now with x equals p minus 1. So p-- this is equal to 1 plus p to the 1 over n minus 1 raised to the n-th power. I'm sorry. x is p to the 1 over n minus 1, not p minus 1. And now we use this inequality. This is bigger than or equal to this thing times n plus 1. So then I'll go over here. So now subtracting the 1 and dividing by n tells me that p to the 1 over n minus 1 is less than or equal to p minus 1 over n. And as we noted here, since p is bigger than 1, this is bigger than or equal to 0. OK? And now we apply the squeeze theorem. By squeeze theorem, this is just going to 0. p minus 1 divided by n-- that's just a number over n. This goes to 0 as n goes to infinity. And in, fact that's contained in number 1 for p equals 1, and also by our limit facts that we proved from last time, that the limit respects algebraic operations of the real numbers. So by the squeeze theorem, since this converges to 0 and this converges to 0, this converges to 0, which implies that. OK? So that deals with the case p is bigger than 1. To deal with p less than 1, we use the case p bigger than 1, and again, the fact that limits respect algebraic operations. Then we write the limit as p-- ugly looking p-- write that better-- the limit as n goes to infinity up to the 1/n. Now p is less than 1, so 1/p is bigger than 1. 
This is equal to the limit as n goes to infinity of 1 over 1 over p to the 1/n. And now 1/p is bigger than 1-- raised to the 1/n power converges to 1 by the case we did before. And so 1 over this converges to 1/1 equals 1. OK? OK, so let me just remark that, although we did prove this inequality for-- by induction, it actually follows from the binomial theorem. And we'll use the binomial theorem to get a little bit of a different inequality that we'll use for number 3. So for number 3, we want to prove limit as n goes to infinity of n to the 1 over n equals 1. So rather than keep writing n to the 1 over n minus 1, which I want to show converges to 0 now, I'm going to write x sub n. So let x sub n equal n to the 1/n minus 1, which we note is bigger than or equal to 0 for all n. OK? And so my goal-- what I want to show-- is limit as n goes to infinity of x sub n equals 0, because then that proves that this converges to 0. And since this is equal to its absolute value, that means that n to the 1/n converges to 1. And the way we're going to do that is using an inequality we get from the binomial theorem-- and using this trick here. Now, if we look at 1 plus x sub n raised to the n, this is just-- that's just n-- let me move it over a little bit-- 1 plus x sub n raised to the n. x sub n is n to the 1/n minus 1. Add 1, I get n to the 1/n-- raised to the n, I get n. Now, by the binomial theorem, this is equal to sum from k equals 0 to n, n choose k. And I have 1 raised to the n minus k power, x sub n raised to the k. OK? Now, this is a sum of non-negative things, because x sub n is always non-negative. And these coefficients are just quotients of factorials, so they're always non-negative as well. This sum is always bigger than or equal to one term from the sum. So it's bigger than or equal to the k equals 2 term, n choose 2-- you'll see why 2-- times x sub n squared. Now, n choose 2 times x sub n squared-- this is equal to n factorial over 2 factorial times n minus 2 factorial, times x sub n squared, which equals n times n minus 1 over 2, times x sub n squared. All right? Now, I started off with n and I proved it was bigger than or equal to this quantity here. So now I divide by what's in front of the x sub n squared and take square roots. So then that implies that, for n bigger than 1-- because I need to divide by n minus 1-- 0 is less than or equal to x sub n, which is less than or equal to the square root of 2 over n minus 1. OK? And now this is just 0, so it converges to 0. This right-hand side is the square root of 2 over n minus 1. Now, the limit as n goes to infinity of 2 over n minus 1 is 0. The square root of that also converges to 0. That's a fact we did from the end of last time. So this whole thing converges to 0. So by the squeeze theorem, limit as n goes to infinity of x sub n equals 0-- which, remember, x sub n is just n to the 1 over n minus 1-- which implies that n to the 1 over n converges to 1. OK? And that completes the proof. All right, so now we are going to study a couple of objects related to a bounded sequence. What's the underlying question we're going to try to answer? Whenever something's introduced, you should think of it in terms of, what's the question that was asked that this is trying to answer? So we're now moving on to the topic of limsup and liminf of a sequence. So here's the question. So we've seen sequences that don't necessarily converge, like minus 1 to the n, but-- and we know about subsequences now, where you just pick entries along the sequence. You pick one, move to the right, and pick the next one.
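Before going further, the two elementary bounds that drove the squeeze arguments above can also be spot-checked numerically: for p bigger than 1, 0 is less than or equal to p to the 1/n minus 1, which is less than or equal to p minus 1 over n; and for n bigger than or equal to 2, 0 is less than or equal to n to the 1/n minus 1, which is less than or equal to the square root of 2 over n minus 1. A quick sketch (the value of p and the cutoff are arbitrary, and the small tolerance only absorbs floating-point rounding):

    import math

    def check_squeeze_bounds(p=7.0, n_max=10_000):
        # Bernoulli-type bound for p**(1/n) with p > 1, and the binomial-theorem
        # bound for n**(1/n); both right-hand sides tend to 0, so squeeze applies.
        tol = 1e-12
        for n in range(2, n_max + 1):
            assert 0 <= p**(1 / n) - 1 <= (p - 1) / n + tol
            assert 0 <= n**(1 / n) - 1 <= math.sqrt(2 / (n - 1)) + tol
        return True

    print(check_squeeze_bounds())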
Now, if you look at minus 1 to the n, if I just look at the subsequence consisting of picking the odd entries-- the odd numbered entries in the sequence, then I'd just get minus 1 minus 1 minus 1 for my new subsequence. And this converges. It's just constant. Or if I chose the even ones, I'd get 1, 1, 1, 1, 1. And that converges. So this sequence, which is bounded, has a convergent subsequence. Now, not all sequences have convergent subsequences. For example, if you look at the sequence x sub n equals n-- so x sub n is just 1, 2, 3, 4, 5, 6, and so on-- that will not have any convergent subsequences, because any subsequence of that has to be unbounded. OK? That's pretty easy to show. And we know that a convergent sequence is bounded. OK? So what am I getting at? So the question that we're going to ask and try to answer is the following. Does every bounded sequence have a convergent subsequence? OK? And what I was just going on about right there is that we know this is true, for example, for the sequence minus 1 to the n. We also this is true for convergent sequences. Convergent sequences are bounded, so-- and they have convergent subsequences. Just take the subsequence to be the whole sequence to begin with. And we have to throw in this bounded part, because there are sequences that don't have any convergent subsequences. Like I just said, x sub n equals n is an example of an unbounded sequence that doesn't have any convergent subsequences. So the answer to this question, as we'll see, is yes, this is a very-- really a very powerful statement. And this is due to Bolzano and Weierstrass. And I'll restate it in a little bit when we get to the statement of that theorem-- namely, that every bounded sequence does have a convergent subsequence. There are several different ways to prove this theorem. We're going to prove it by introducing limsup and liminf, because these are also two important objects that arise in analysis. So let's get to the definition of these guys. So let xn be bounded sequence. And we define-- if they exist, there are going to be certain limits, so it's not clear that they exist at all to begin with. But we'll show that they always do exist. We define limsup x sub n. And sometimes I'll write n goes to infinity underneath. Sometimes I'll just write limsup or I'll just have an n underneath it. This is supposed to be a number, and this is equal to the limit as n goes to infinity of a new sequence obtained from the old sequence x sub n. What are the entries of this new sequence? This is sup of x of k, k bigger than or equal to n. OK? And the liminf is similar, except it's now with infs. OK? So for each natural number n, I take the supremum of the set of elements x sub k for k bigger than or equal to n. So this is a bounded set because the sequence is bounded. So this the supremum is well-defined-- same thing for the inf. OK? So I get a number, a new number for each n. OK? And I take the limit of those numbers and I define one to be the limsup, one to be the liminf, if they exist-- because limits don't always exist, so it's not even clear that these two limits actually are meaningful. And the first thing that we'll do is we'll prove that these limits do actually always exist. So rather than keep writing this, I'm going to give these-- write some symbols for these two things. So let a sub n to be-- so of course, if I forget to say this at least in what I'm talking about with respect to limsup and liminf, I'm always talking about a bound sequence. 
But let me continue to state that as one of my hypotheses. Let x of n be a bounded sequence, and let a sub n be the supremum of the set of all elements x sub k, where k is bigger than or equal to n, and b to be the infimum of x sub k, a bigger than or equal to n. OK? So then there's couple of statements. One is that the sequence a sub n-- so let me just put over here what these things have to do with the limsup and liminf. Then the limsup of x sub n-- this is defined to be the limit as n goes to infinity of the a sub n's. And the liminf of x sub n-- this is defined to be the limit as n goes to infinity of the b sub n's. OK? What we're going to show is that the limit of a sub n as n goes to infinity of a sub n exists, and the limit as n goes to infinity of b sub n exists. So how we're going to show that is we're going to show that a sub n is monotone decreasing and bounded. And so these are the conclusions. So [INAUDIBLE] should say then. The sequence b sub n is monotone increasing and bounded. So in particular, since we know that, if we have a monotone sequence which is bounded, it has to converge, that means these two limits exist. And then the second part of this term is the simple statement that the liminf of x sub n is less than or equal to the limsup [INAUDIBLE] x sub n. OK? OK, to prove one-- so in fact, before I prove this theorem, let me prove a small simple theorem first. Let's put the proof of this theorem on hold just for a second, and now prove a very simple theorem that, if A and B are subsets of real numbers, A, B, both not equal to the empty set, and is A subset of B-- so also need to be bounded. So if we take two non-empty subsets of real numbers, such that [INAUDIBLE] bounded and A is a subset of B, and the conclusion is that the inf of B is less than or equal to the inf of A. And this is always less than or equal to the sup of A, and this is less than or equal to the sup of B. So what this says is that, if I take a subset of B, then that increases the inf and decreases the sup. OK? So the sup of a smaller set is smaller than the sup of the bigger set. And that inequality reverses for infs. The inf of the smaller set is bigger than or equal to the inf of the bigger set. All right? And this just follows immediately from the definition of sup and inf, so I'll just prove the sup statement. So since sup of B is an upper bound for B, and A is a subset of B, this implies that sup B is an upper bound for A. Sup B sits above everything and B. A's a subset of B, so it sits above everything for A. Now, the supremum of A is supposed to be the least upper bound, and therefore, if I take any upper bound for A, that has to be bigger than or equal to the sup of A. So since sup B is an upper bound for A, this implies that sup A is less than or equal to sup B-- and similarly with the infs, so I'm not going to write-- so similar for infs. OK, so let's go back to the proof of this theorem here, that if I have a bounded sequence, then the limsup and the liminf exist. And we show that by proving that these two sequences have these monotonicity properties. So proof now of the theorem we were-- started off proving-- OK. Since the set of xk's, with k bigger than or equal to n plus 1, is a subset of xk's for k bigger than or equal to n-- because now I'm-- I have a set where k's starting at n plus 1. Here's a set with k starting at n, so this is clearly contained in here. 
This implies that a sub n plus 1, which is the sup of the left hand side, by this little theorem that I stated here, is less than or equal to the sup of the bigger set. And this is just a sub n. So we've proven, for all n, a sub n plus 1 is less than or equal to a sub n. OK. Therefore, this sequence is monotone decreasing. OK? And so this uses the sup part of this previous theorem, but if we use the inf part, then we also get the statement for the b sub n. So rather than write out the details, I'll leave it to you just to flip around the inequalities for the inf in your notes. So similarly, for all N, a natural number, b sub n is bigger than or equal to b sub n plus 1-- or other way. OK? Now, so that shows these two sequences are monotone. Now we show they're bounded. And this follows simply from the fact that the x sub n's are bounded. Since there exist a b bigger than or equal to 0, such that all N natural numbers, x sub n in absolute value is less than or equal to b, which is the same as saying b is-- x sub n is bounded between minus B and capital B. So taken as elements of each of these sets, if you like this x of k, k bigger than or equal to n, that means minus B is always a lower bound for these sets and B is always an upper bound for these sets. So this implies that minus B is always less than or equal to x sub k, k bigger than or equal to n. And this always sits below the supremum. Since all of these are bounded above by B, the supremum has to be less than or equal to B, which I'll state in terms of the a sub n's and b sub n's-- means minus B is less than or equal to b sub n is less than or equal to a sub n is less than or equal to b. So these sequences are bounded. OK? So in fact, these two implies that-- all right, so we've shown that these two sequences that define-- that we use to define the limsup and the liminf are monotone and bounded, so therefore, the limit of these two sequences-- the limits, which define the limsup and liminf, actually exist. So limsup and liminf is always a well-defined object. Now, this proves one. To prove two, this follows immediately from what we've proven right here. So by this part, we have b sub n is less than or equal to a sub n. So for all n, I have these two sequences, one sitting below the other. And last time we proved that taking the limit respects inequality, so limit as n goes to infinity of b sub n-- which is the liminf-- sits below limit as n goes to infinity of a sub n, which is the limsup. And that completes the proof. So I've shown that these two objects we defined-- the limsup and the liminf-- exist, and are well-defined for every bounded sequence. So this is kind of-- can be a little bit of a daunting couple of objects to come across when you-- especially in your first analysis class. So the best thing to do is look at examples. Whenever you come across something that you just don't quite understand, start writing down some actual things. So for example, let's again look at our favorite example of a bounded sequence which does not converge, x sub n equals minus 1 to the n. Then, if I am looking at this set x sub n, n bigger than or equal to k, and writing-- instead of x sub n, let me just write what it is, minus 1 to the n. So what is this set? This is just a set consisting of two elements, 1 and minus 1. Yeah? And therefore, the sup of this set, which is the sup of this set, is just 1. 
Oops-- so if I take-- which implies that the limsup of minus 1 to the n, which is the limit as n goes to infinity of sup minus 1 to the n, n bigger than or equal to the k-- as we just saw, this is just the sup of this set consisting of two elements, 1 and minus 1. This is equal to the limit as n goes to infinity of 1 equals 1. So the limsup of minus 1 to the n is 1. Now, if I change all these sups to infs, then the inf of this set is going to be the inf of this set, which is just minus 1. And therefore, we also get-- OK? So the limsup is 1. The liminf is minus 1 for this set. That's just supposed to be a squiggly line, not necessarily looking like sigma. OK, so there's one sequence. How about our next favorite sequence, x sub n equals 1/n? So the set of elements 1/n, such that n is bigger than or equal-- so 1/k I should write. Oh, so I was using some-- this should have been a k. Hope didn't make that mistake throughout. No, I kept writing x sub k. So that should be x sub k. OK, all of that is written correctly. This should have been minus 1 to the k-- minus 1 to the k. All right, very good. So we're looking at now the set 1 over k, where n is-- where k is bigger than or equal to n. So this is just a set 1/n, 1/n plus 1, 1/n plus 2, 1/n plus 3, and so on. OK? So as I move to the next entry, things are getting smaller and smaller. And in fact, this sequence just here now, written as-- thinking of this as a new-- so this is not a sequence-- this is a set. Taking entries to be these guys, it's easy to see that converges to 0. So anyways, let's say I take the supremum of this set, which is what I need to compute the limsup. So I'm now taking the supremum of this set. 1/n plus 1-- that's always smaller than 1/n, and so is 1/n plus 2, and so on and so on. And 1/n is an element of the set that's bigger than or equal to everything else in the set. And I think there's an exercise in the homework that-- as you just show, that if you have a set that contains an element which is an upper bound for the set, then that has to be the supremum. So the supremum of this set is simply 1/n. OK? So this supremum is equal to 1/n. So as n goes to infinity, the limit of the supremum here, which equals the limit as n goes to infinity of 1/n, equals 0. OK? So the limsup of 1/n equals 0. OK? But now let's say I look at infs of this set. So I had to take sups to look at the limsup. Now let's say I take infs. OK? Now, the inf of this set-- these are elements that are getting closer and closer and closer to 0. So there's 0, 1/n, and then they just keep getting smaller and smaller and smaller and smaller, converging to 0. And you can, in fact, prove this rigorously if you'd like, but I think it's easy to at least convince yourself that infimum of this set is equal to 0. The smallest thing-- so first off, 0 is a lower bound for this set. And if I take anything bigger than 0, that thing cannot be an upper bound simply because of, if you like the Archimedean property, I can always find something from the set less than that real-- positive real number. So 0 must be the least upper bound. So in summary, the liminf of 1/n, which is the inf-- the limit as n goes to infinity of the inf of this set, is just the limit as n goes to infinity of 0 equals 0, all right. So the limsup of 1/n equals 0. The liminf equals 0 as well. OK. So let's take a look at these two examples a little bit, and let me just make a few remarks. First off, what's to notice about this sequence is the fact that the limsup is 1. Liminf is equal to minus 1. 
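The two worked examples can also be looked at numerically, by approximating the tail sups and tail infs over a long but finite stretch of the sequence; this is only an approximation, since a finite truncation can never replace the actual sups and infs, and the horizon below is an arbitrary choice.

    def tail_sup_inf(x, n, horizon=100_000):
        # Approximate a_n = sup{x_k : k >= n} and b_n = inf{x_k : k >= n}
        # by looking only at k = n, ..., n + horizon - 1 (hence "approximate").
        tail = [x(k) for k in range(n, n + horizon)]
        return max(tail), min(tail)

    for name, x in [("(-1)^n", lambda k: (-1)**k),
                    ("1/n",    lambda k: 1.0 / k)]:
        for n in [1, 10, 1000]:
            a_n, b_n = tail_sup_inf(x, n)
            print(name, n, a_n, b_n)
    # (-1)^n: tail sups stay at 1, tail infs stay at -1 (limsup 1, liminf -1).
    # 1/n:    tail sups are 1/n and tail infs are near 0 (limsup 0 = liminf 0).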
So the limsup does not equal the liminf. However, in this example, the limsup, which is 0, equals the liminf, which is 0. Now, what's the difference between these two sequences? What's the property that one holds, but the-- one has, but the other doesn't? This is a convergent sequence and this one is not. And we'll see this is a general fact, that if we have a convergent sequence, then the limsup and the liminf equal each other and are equal to the limit of the original sequence, because this sequence converges to 0. But that's not just a one-way street. It's a two-way street that, in fact, we'll prove, if the limsup equals the liminf, then the original sequence converges. So the sequence converges if and only if the limsup equals the liminf. And we saw here that in-- on display, that the limsup and the liminf don't equal each other, and the original sequence, which we know-- or shown last time-- doesn't converge. And so one other thing I'd like to point out is that-- so the limsup of this guy is 1. The liminf of this guy is a minus 1. Now, we can also find a subsequence which converges to 1, which is the limsup. We just take the entries that we choose to be the even numbered entries, and that just gives-- produces a sequence 1. And that sequence converges to 1. If we take the odd entries of the sequence, that subsequence is just minus 1 and converges to minus 1, which is the liminf. And that's also a general fact will prove, that for any bounded sequence, there exists some sequences converging to the limsup and the liminf. And that will give us the proof of this Bolzano-Weierstrass theorem. All right. And I think that's all of the remarks I want to say. And let me make one other important comment. So this sequence, which we use to define the limsup and also the liminf-- in three out of these four cases, they were actual subsequences. So the sup of this set equals 1, which I can consider as a subsequence of the original guy-- and the same thing for the liminf. The inf of this set was minus 1, which I can consider as a subsequence of minus 1 to the n. And for the sup of this set, I got 1/n, which I can-- this is just equal to the original sequence. Definitely, I can consider it as a subsequence of the original sequence. But if I take the infimum of this set, which I need to define the liminf, I got 0 for every entry, which is not a subsequence of this original sequence. OK? So all of that is to say that the sequences I get through this process to define the limsup and liminf are not necessarily subsequences of the original sequence. OK? What I just said a minute ago about there actually being some sequences which do converge to the limsup and liminf-- this is a non-trivial fact, which we're going to prove. OK. So theorem-- and this will give us the Bolzano-Weierstrass essentially immediately right after it-- let xn be a bounded sequence. Then there exists subsequences xn sub k and xm sub k-- so they don't necessarily have to be the same subsequence-- such that limit as k goes to infinity of x sub n sub k equals the limsup-- so it's a convergent subsequence-- and the limit as k goes to infinity of xm sub k produces the liminf. OK. And so before I prove this, this immediately gives the Bolzano-Weierstrass theorem, which is that every bounded sequence has a convergence subsequence. OK. 
So again, this follows immediately from the previous theorem, because if I take a bounded sequence, by the previous theorem, I can find a subsequence which converges to the limsup, which always exists for a bounded sequence. OK? And then that's [INAUDIBLE]. In fact, we have something stronger, in that we have at least two subsequences which converge to these two numbers, which may or may not be the same. OK? So the reason this is so powerful and so strong is that it-- to get your hands on something, it doesn't require you to show something as strong as showing there is a sequence converging to that. So quite often, you can think in terms of variational problems, where you want to show that a minimum of something always exists or a maximum of something always exists. Well, what you can try and do is take a sequence of guys that you stick into your-- so this is a general nonsense that you stick into your machine or function that spits out output. And these outputs are approaching the maximum or approaching the minimum. And what you'd like to say is that there does, in fact, exist an element that you can stick into your machine and produce the maximum amount of output. Now, maybe it's not clear how to do that. So first, take a sequence approaching-- so that the values are approaching that-- the outputs are approaching the maximum. Maybe you could show that the inputs converge to something, but that's typically really hard. But you don't have to work that hard is what this theorem says. It says that what you really need to do, and which is much more straightforward, or simpler, or impossible really, is to show that that sequence of inputs that you put into your machine to get the outputs is a bounded sequence. OK? Then you could pass to a subsequence, which actually does converge to something by this theorem, and proceed in that way by showing that you do have some minimum or maximum-- some input that produces a maximum output or minimum output. So that's a bit small bit of rambling about why this theorem is so useful is that, again, it's-- to get your hands on something that typically you want to study, it's very difficult to show convergence to that thing you want to study, or that thing exists, because there's a sequence that you come up with ad hoc actually converges to that thing. But this theorem says you don't need to work that hard or try to do the impossible. And typically, it's much easier just to proceed by showing your sequence of inputs is bounded. OK, so that's enough of that bit of rambling about why this theorem is so useful. It's also useful in the study of PDEs, which is what I study. So I have a soft spot for it. Actually, one of its generalizations-- all right, so let's prove this theorem that there exists some sequences which converge to the limsup and liminf. So as before, I'm going to use-- rather than keep writing the supremum of this set and-- actually, I'm not even going to do this statement. I'm going to leave this as an exercise. I'm going to do this statement. I'm going to use this notation a sub n, as before-- a sub n b sup of x sub k's, k bigger than or equal to n. So I want to show that there's something converging to the limit as n goes to infinity of the a sub n's. And so what I'm going to do is I'm going to try up a subsequence of elements between a sub n and a sub n minus something which is converging to 0, and not quite a sub n. So it'll be along a subsequence as well. 
So we know that there exists an n sub 1 bigger than or equal to 1 simply by how this is defined as a supremum, and by the exercise from assignment 3-- there exists an n sub 1 bigger than or equal to 1 such that a sub 1 minus 1 is less than x sub n sub 1, which is less than or equal to a sub 1. OK? All right. a sub 1 minus 1 is not an upper bound for the set defining a sub 1, which is the x sub k's, where k is bigger than or equal to 1. Therefore, I should be able to find an element from this set strictly bigger than that. And of course, it's always less than or equal to the supremum, which is a sub 1. OK. So now, since a sub n sub 1 plus 1-- so a sub n sub 1 plus 1, not 1 plus 1-- since this thing equals the supremum of the x sub k's, such that k is bigger than or equal to n sub 1 plus 1, there exists an x sub n sub 2-- at least there exists an n sub 2 bigger than n sub 1-- in fact, it has to be bigger than or equal to n sub 1 plus 1-- such that a sub n sub 1 plus 1, minus 1/2, is less than x sub n sub 2, which is less than or equal to a sub n sub 1 plus 1. OK? So why the n sub 1 plus 1? It's because I wanted to obtain a new entry from the sequence that comes from farther along than the index n sub 1. OK? I'm trying to build a subsequence. And like I said, the idea is I want to somehow sandwich-- this should be x sub n-- I want to build up a subsequence which is sandwiched between things converging to the limsup, and use the squeeze theorem. All right? OK, so since this is the supremum of this set, and because a sub n sub 1 plus 1 minus 1/2 is not an upper bound for this set, there exists something from the set-- so some element with index k bigger than or equal to n sub 1 plus 1, which I'm going to call n sub 2, so that x sub n sub 2 is bigger than a sub n sub 1 plus 1 minus 1/2. And then I just keep doing this. Since a sub n sub 2 plus 1 equals the supremum of the x sub k's, such that k is bigger than or equal to n sub 2 plus 1, there exists an n sub 3 bigger than n sub 2, such that a sub n sub 2 plus 1, minus now a third, is less than x sub n sub 3, which is less than or equal to a sub n sub 2 plus 1. OK. But now we're essentially home free. We'll just continue in this manner. Let me write down. Now, strictly speaking, I need to state the construction of this sequence as an inductive argument, but for the purposes of this class, I'm not going to do that. I'm just going to say-- continuing in this manner. And that will be what I say for this part. So continuing in this manner, we obtain a sequence of natural numbers-- n sub 1 less than n sub 2 less than n sub 3, and so on-- such that the following holds-- such that, for all k, a natural number, a sub n sub k minus 1, plus 1, minus 1/k is less than or equal to x sub n sub k, which is less than or equal to a sub n sub k minus 1, plus 1. OK? And here, if you like, we didn't define what n sub 0 is, so-- so really, we're only interested in the n sub 1, n sub 2. But for the sake of this whole thing making sense for all integers k, take n sub 0 to be defined to be 0-- OK? That's just the first case that we dealt with. So we obtain this subsequence x sub n sub k that's sandwiched in between this subsequence of a's, minus 1 over k, and this sequence of a sub n sub k minus 1, plus 1. So since n sub 1 is less than n sub 2 and so on, this implies that-- write it this way-- n sub 1 plus 1 is less than n sub 2 plus 1 is less than n sub 3 plus 1. So this is a subsequence-- a sub n sub k minus 1, plus 1 is a subsequence of the a sub n's. Now, what do we know? We know the a sub n's converge to the limsup.
And we proved last time that every subsequence of a convergent sequence converges to the same thing. That's the limit as k goes to infinity of a sub n sub k minus 1 plus 1 equals the limit of the original sequence, which is, by definition, the limsup. OK? So now I have this subsequence. So now I have this subsequence of x's sandwiched in between two sequences-- this guy on the left, this guy on the right. This guy on the right converges to the limsup. This guy on the left converges to the limsup minus 0, because 1/k converges to 0. So by the squeeze theorem, we get that the limit as k goes to infinity of x sub n sub k equals the limit as k goes to infinity of this and this whole thing, which is the limsup. OK. And again, so I'll leave it to you to do the liminf part. But the point is that now, basically, this 1/k gets moved to over here, and this becomes a sub n sub k minus 1 plus 1 plus 1/k. And now I just have this on sitting below this guy. All right? That's really the only change for the infs to get a subsequence converging to the liminf. OK? Now, we've shown that there exists a subsequence converging to the limsup and the liminf, which gives us the Bolzano-Weierstrass theorem. And now let me come back to the statement I made about these two sequences here-- namely, that the limsup equals the liminf if and only if the original sequence converges. So let's prove that now. Let x sub n be a bounded sequence. Then xn converges if and only if limsup equals the liminf. And there's one more part. Moreover, if xn converges, then all these limits agree-- equals the limsup, equals liminf. OK? So the sequence converges if and only if the limsup equals the liminf-- and in the case that we do have this limit of the sequences given by this common value of the limsup and liminf. OK, so let's go in one direction. To do this direction, we'll use the squeeze theorem. So suppose L equals limsup x sub n equals liminf of x sub n. We're assuming that these two things equal each other. They're given by a common value L. And what we're going to end up showing is that this sequence converges to L. So that actually gives the second part of the statement of this theorem here. So suppose L is this common number, limsup and liminf. Then, for all N, [? a ?] natural number, we have that the inf of the x sub k's, which is for k bigger than or equal to n-- so x sub n is in this set, so it's certainly bigger than or equal to this inf. And it's less than or equal to the sup of that set, again, because x sub n sits in this set. All right? But as n goes to infinity, this sequence of numbers converges to the liminf, which is L. As n goes to infinity, this sequence of numbers converges to the limsup, which is L. So by the squeeze theorem, the thing in between, which is x sub n, converges to L. So for the second part, it's-- follows from what we've proven-- what we proved to obtain the Bolzano-Weierstrass and what we know about convergent sequences and their subsequences. So this is for this direction-- namely, that convergence implies the limsup equals the liminf. Let L be the limit as n goes to infinity of x sub n. So now we're assuming that the sequence converges to something which we call L-- doesn't have to be the same L from-- so it's just L I'm just using for this limit. By previous theorem, there exists a subsequence x sub n sub k, such that the limit as k goes to infinity of x sub n sub k gives me limsup of x sub n. 
But this is a subsequence of a convergent sequence, and a subsequence of a convergent sequence is convergent, and converges to the same thing. So this thing on the left is just equal to L. OK? Similarly, there exists a subsequence x sub n sub k, such that the limit as k goes to infinity of x sub n sub k equals liminf of x sub n. And again, this is a subsequence of a convergent sequence, so it's convergent and convergence of the same thing, L. So that implies that this thing is equal to L. All right. And therefore, the limsup and the liminf equal each other, and they also equal the limit of the original sequence. So that is the end of the proof of this theorem that the limsup and liminf-- when they coincide and tell you that the original sequence converges. So that's another way of thinking about limsups and liminfs is that they also somehow measure just how divergent your sequence is, or at least the difference between them. If that difference is 0, then your original sequence is convergent. OK? All right, so I think we'll stop there.
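As a computational postscript to this lecture, here is a rough sketch of the subsequence construction from the proof, applied to one concrete bounded sequence. The truncated tails, the number of steps, and the example sequence are all arbitrary choices for illustration, and the code only mimics finitely many steps of the argument rather than proving anything.

    def pick_subsequence(x, steps=8, tail=50_000):
        # Greedy version of the proof: after the previous index, pick n_k with
        # x(n_k) within 1/k of the sup over the (truncated) remaining tail.
        picks, prev = [], 0
        for k in range(1, steps + 1):
            window = range(prev + 1, prev + tail)
            tail_sup = max(x(n) for n in window)
            n_k = next(n for n in window if x(n) > tail_sup - 1.0 / k)
            picks.append((n_k, x(n_k)))
            prev = n_k
        return picks

    # Example: x_n = (-1)**n + 1/n is bounded, with limsup equal to 1.
    for n_k, value in pick_subsequence(lambda n: (-1)**n + 1.0 / n):
        print(n_k, value)

The printed values drift toward 1, the limsup, which is exactly what the sandwiching argument in the proof guarantees for the full construction.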
MIT_18100A_Real_Analysis_Fall_2020
Lecture_21_The_Riemann_Integral_of_a_Continuous_Function.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: OK. So we're going to continue our discussion of-- is it 1M or 2? It's 1. Of the Riemann integral, which, remember, from the discussion at the end of the last lecture, is a theory of area underneath the graph of a function. So what is that theory? The theory is built up as follows. Given a continuous function f-- so this is just a refresher on the definitions I introduced last time-- a partition is just points between-- just breaking up the interval from a to b into points, into little subintervals, with the norm of this partition being the length of the longest subinterval [INAUDIBLE] which we refer to as a tag. It's just a set of points with one lying in each of these subintervals-- so for me, let's take them all to be the right endpoint of these intervals-- and then associated to a tagged partition, which is a pair, x and [? xe. ?] We associated a Riemann sum, which is f evaluated at these points times the length of the interval. So if we draw the graph of a function f and-- say, in this picture-- this number this Riemann sum would then be equal to the area of these three boxes here. OK? And so our theory of Riemann integration, or our theory of the area underneath the curve, is built on the following goal, which is to show that, as these partitions get finer and finer, as the norm of these partitions goes to 0, these Riemann sums should converge to some number. And that number, which we call the Riemann integral of f, we interpret as the area underneath the graph of f. So the first goal-- the main goal, really, to start off with-- is to show that this is a reasonable-- that this is actually true that, these Riemann sums do, in fact, converge to some number as they become finer and finer for a given continuous function. So that is the main theorem, which I think is going to be the main thing that we prove today. So theorem of the Riemann integral, which is the following-- let f be a continuous function from a b to r-- which, remember, this is the notation we used from last time. Let me move my picture over here. Then there exists a unique real number, which we denote as the usual symbol, integral a b f of x dx with the following property-- for every sequence of partitions xr cr, such that the norm of these partitions is going to 0 as r goes to infinity-- so these partitions are getting finer and finer if the norm is going to 0. Remember, the norm is the length of the longest subinterval I have. So for all sequence of partitions, with norms converging to 0, we have that the limit of the Riemann sums exists and equals this number, which we refer to as the Riemann integral of f. OK? So there's a lot to unpack here. First off, what this number, integral from a b to f of x dx-- this property that it has is, no matter what sequence of partitions you take, as long as they're becoming finer and finer, this limit actually exist, and it equals the same number. It equals this number, integral a b f of x dx. So I could take two different sequences of partitions becoming finer and finer, look at the Riemann sums, and those two sequences of Riemann sums converge, and they converge to the same number-- again, which we denote by integral a b f of x dx, the Riemann integral of f. And so it's this number, which we interpret as the area underneath the graph of f. OK? All right, so our main goal for today is to prove this theorem. So first off, there's two parts of the statement here. 
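Since everything in this lecture is built out of Riemann sums over tagged partitions, here is a minimal sketch showing the sums settling down as the partitions get finer; the choice of uniform partitions, right-endpoint tags, and the particular function are all arbitrary, made only for the demonstration.

    def riemann_sum(f, points, tags):
        # S(f, X, xi) = sum over k of f(xi_k) * (x_k - x_{k-1})
        return sum(f(t) * (right - left)
                   for (left, right), t in zip(zip(points, points[1:]), tags))

    def uniform_tagged_partition(a, b, n):
        # n equal subintervals of [a, b], each tagged at its right endpoint
        points = [a + (b - a) * k / n for k in range(n + 1)]
        return points, points[1:]

    f = lambda x: x * x
    for n in [4, 16, 64, 256, 1024]:
        points, tags = uniform_tagged_partition(0.0, 1.0, n)
        print(n, riemann_sum(f, points, tags))   # approaches 1/3 as n grows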
There exists a unique real number, which we denoted by that with this property. Uniqueness is immediate. If I have two real numbers, such that, for all sequence of partitions with norm going to 0, the limit of the Riemann sums equals that limit, well, limits of sequences are unique, and therefore, those two numbers have to be the same. So uniqueness is clear. It's really the existence part that we have to prove, that there exists a real number so that, no matter what sequence of partitions we take, the corresponding Riemann sums converges to 0-- as long as they become-- the partitions become finer and finer. OK? OK. Now, we're not going to prove this theorem just yet. We need some facts that will be used in the proof, so we're going to put off proving this theorem for a few minutes and first prove some necessary facts. So let me first define a useful metric or number associated to-- or function associated to a continuous function called the modulus of continuity. So for f continuous function and eta positive, we define the modulus continuity omega f of eta-- this is equal to the supremum of f of x minus f of y [INAUDIBLE] absolute value, such that the absolute value of x minus y is less than or equal to [? eta. ?] OK? So what does this modulus of continuity measure? It measures, given-- if I take any two points less than or equal to eta in distance, and I look f of x minus f of y in absolute value, and take the sup over all that, that gives me the modulus of continuity. Now, for example, let's do the simplest example possible, f of x equals ax plus b. Then what do we get? f of x minus f of y-- this is equal to a times x minus y. So if x minus y is less than or equal to eta, we get that f of x minus f of y in absolute value is less than or equal to-- well, it's equal to a times x minus y is less than or equal to a times eta. Right? And therefore, absolute value of a times eta is an upper bound for this set for this function. And it's also achieved. If I take absolute value of x minus y equal to eta, then this will be equality. So what I'm saying is that, for this example, for f equals ax plus b, the modulus of continuity is equal to a absolute value of a times eta. OK. Now, something to note here is that, for this guy right here, as eta goes to 0, the modulus of continuity goes to 0 as well. And this is not special to this function. In fact, it's true for all continuous functions. So let me state the theorem and then give you an interpretation of it. So let me call this theorem 1. So for all f continuous on the closed and bounded interval in a, b, limit as eta goes to 0 of omega f of eta equals 0. OK. So let me make one more comment here. Maybe this will also-- just to connect to something we did earlier, which follows immediately from the definition of this guy, that for all x and y, this is true. OK? And therefore, you should think of, I'm trying to make this small, then the modulus of continuity is something that controls how close f of x and f of y are together. So somehow continuity is controlled by this function, the modulus of continuity. So if this goes to 0, then-- as this goes to 0, this goes to 0, then somehow I'm saying f of x and f of y are very close together. So again, the modulus of continuity is a way of measuring continuity of a function. That's one way to think of it. All right, so let's prove this. This is a limited statement, and what we're going to do is we'll just prove it the old-fashioned way. Let epsilon be positive. 
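Before continuing with the proof, here is a small grid-based sketch of the modulus of continuity; sampling only finitely many grid points makes this a slight underestimate of the true sup, and the two example functions and the grid size are arbitrary choices.

    def modulus_of_continuity(f, a, b, eta, grid=400):
        # Grid approximation of omega_f(eta) = sup{|f(x)-f(y)| : |x - y| <= eta};
        # restricting to grid points means this can only undershoot the sup.
        xs = [a + (b - a) * i / grid for i in range(grid + 1)]
        return max(abs(f(x) - f(y)) for x in xs for y in xs
                   if abs(x - y) <= eta + 1e-12)

    for eta in [0.5, 0.1, 0.01]:
        print(eta,
              modulus_of_continuity(lambda x: 3 * x - 1, 0.0, 1.0, eta),  # roughly 3*eta
              modulus_of_continuity(lambda x: x * x, 0.0, 1.0, eta))      # roughly 2*eta

Both columns shrink toward 0 as eta shrinks, which is the content of theorem 1.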
What do we know about functions which are continuous on a closed and bounded interval? While, they're also uniformly continuous. So since f is continuous on this closed and bounded interval, f is uniformly continuous. This means that there exists delta 0 positive, such that, for all x and y, x and y less than delta 0 implies that f of x minus f of y is less than-- let me put epsilon over 2. I'm going to give myself a little space here. OK? So just to recall, what we're trying to do here in terms of the definition of what this means-- we're trying to show that, for all epsilon positive, there exists delta positive, such that, for all eta-- eta's actually a positive number here, so-- implies omega of eta is less than epsilon. OK? OK. All right, so this is what we're trying to prove. We know that, for a continuous function, it's uniformly continuous, and therefore, there exists is delta 0 so that, when x minus y is less than delta 0, f of x minus f of y is less than epsilon over 2 for all x, y in the interval. OK? So basically, I'm going to choose delta to be this delta 0, the delta I need for this. And now let me show that it works. Suppose eta is less than delta, which, remember, is delta 0. Then, if x minus y is less than eta-- which, again, is less than-- less than or equal to eta, which is less than delta 0-- I get that f of x minus f of y is less than epsilon over 2. Basically, I just rewrote what I had right there. And therefore, this implies that epsilon over 2 is an upper bound for the set f of x minus f of y, x minus y, which is less than eta, which implies that the supremum of this set-- which is, by definition-- so I'm just going to put brackets here, and I hope that's clear that I'm referring to this set here when I put that here-- that the supremum of this set has to be less than or equal to this upper bound, which is less than epsilon. OK? So we've proven that, if eta is less than delta, then the modulus of continuity of f-- of eta is less than epsilon. And therefore, this is what we wanted to prove. OK. All right, so that's one fact that I need, that this modulus of continuity-- which, remember, is the supremum of the difference of f of x and f of y, as long as x and y are bounded by eta-- that this converges to 0 as eta converges to 0 for all continuous functions. OK? And this is key in showing that the Riemann integral exists for continuous functions. So theorem two is the following. So this is going to tell us how two Riemann sums are comparable if one partition is finer than the other. So if I have two tagged partitions of a, b, such that x prime is a subset of x-- so this means that x prime-- so this is a partition-- a breakup of a, b-- that contains all the points of x and more. OK? So it's a finer partition. And we refer to x prime as a refinement of the partition x. All right, so we have two partitions. One is finer than the other, and a continuous function f. Then we can estimate the difference in the Riemann sums. This is less than or equal to the modulus of continuity of f evaluated at the norm of the courser partition times the length of the interval. OK? OK, so this says, as long as-- one way to think about it is, if the partition x is very fine, and then I take a finer partition, that's not going to change the Riemann sum too much, because remember, this is converging to 0 as the norm of the partition-- how fine it is, or you can think how coarse it is-- is going to 0. OK, so the idea is very simple. It's just going to take me a little bit just to write down. 
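Here is a quick numerical check of the refinement estimate just stated, on one example. Since the absolute value of sin x minus sin y is at most the absolute value of x minus y, the modulus of continuity of sin at eta is at most eta, and that cruder bound is what the check below uses; the random partition and the midpoint refinement are arbitrary choices for illustration.

    import math, random

    def riemann_sum(f, points, tags):
        return sum(f(t) * (r - l)
                   for (l, r), t in zip(zip(points, points[1:]), tags))

    # A coarse partition X of [0, 2] with right-endpoint tags, and a refinement X'
    # obtained by inserting every midpoint (again tagged at right endpoints).
    a, b, f = 0.0, 2.0, math.sin
    X = sorted({a, b, *(random.uniform(a, b) for _ in range(6))})
    X_ref = sorted(set(X) | {(l + r) / 2 for l, r in zip(X, X[1:])})
    S, S_ref = riemann_sum(f, X, X[1:]), riemann_sum(f, X_ref, X_ref[1:])
    norm_X = max(r - l for l, r in zip(X, X[1:]))
    # Theorem 2 bound, using omega_sin(eta) <= eta since sin is 1-Lipschitz:
    print(abs(S - S_ref), "<=", norm_X * (b - a), abs(S - S_ref) <= norm_X * (b - a))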
Like I said, this partition being contained in the other means this partition has all the points in x and then some more. We're going to break these partitions up into-- or at least this one up into parts where it's points in x and then plus the extra one. So for k equals 1 to n-- so let me draw a picture first, and then I'll define the concepts that I have. So here we have the partition points of x. OK? And above I'm going to write now-- I used to write [? xe, ?] but I'm going to write the partition points corresponding to x prime. So this should be x prime sub l for some l, because remember, [INAUDIBLE] this one is contained in the other. So this partition point will be x prime sub l for some l. And then I'll have some other ones, and so on, until I reach the last partition point from x prime, which is contained in this subinterval from the partition x. OK? So I let y with a k upper-- this is equal to the partition x sub k minus 1 equals x prime l plus 1, until I get to some partition point x sub m, which equals x sub k. OK? So this is just a part of the partition x prime, contained in this subinterval, where these are partition points in x. And then I'm going to write eta k. This will be the tags that come with these points. OK. So what I want to draw attention to is that the tagged partition x, prime c prime-- this is equal to let me write it this way. Since these are sets what I mean by-- let me write it as the union of the tagged partitions. And also, this eta has nothing to do with the eta from before. All right? I'm just using this as notation. So the full partition of x prime and [INAUDIBLE] prime tagged partition is equal to the union of these smaller partitions of xk minus 1 and xk as k runs from 1 to k. OK? I hope that's clear. All right. All right, so now let's compare the part of the agreement Riemann for x [? xe ?] on this interval. So remember, somewhere sitting in here is the tag corresponding to the k-th tag for this guy. Let's compare it to the part of the Riemann sum for the x prime [? xe ?] prime also coming from this interval. OK. In fact, let me make one more remark that, since this tagged partition is equal to the union of these tagged partitions, this implies that the Riemann sum corresponding to this tagged partition is equal to the sum of the Riemann sums corresponding to these partitions of these intervals, xk minus 1 and xk. All right? So these are not partitions of a, b, but they are partitions of xk minus 1 and xk. All right. So I hope all this is clear. Again, I'm just breaking this Riemann sum up into parts where now I'm looking at what's happening between each of the partition points x sub k minus 1 and x sub k coming from the original courser partition x. All right? Now we compute. So this is a part of the Riemann sum of x [? xe ?] from this interval minus the part of the agreement some for x prime [? xe ?] prime coming from this interval. Now, this is equal to f of [? xe ?] k. And so let me just rewrite this sum here. x sub k minus x sub k minus 1-- this is equal to sum from k equals, let's say, j equals l plus 1 to m x prime sub j minus [? x of ?] [INAUDIBLE] minus 1. So of the smaller ones that are in there, this is just a telescoping sum. So all I pick up is x sub j prime sub m, which is x sub k, minus the first one, which is x sub-- x prime sub l, which is x sub k minus 1, minus-- and then this is, by definition, equal to j equals l plus 1 [INAUDIBLE] f of prime j, x prime j minus x minus 1-- and then a absolute value sign on the outside. 
And then I combine these two, and-- this is equal to sum from j equals l plus 1 to k of k minus j times x prime j minus x prime j minus 1-- absolute value. Now, by the triangle inequality, the absolute value of the sum is less than or equal to the sum of the absolute value. So save myself writing-- I'm going to bring the absolute values inside. OK. Now, [? xe ?] sub k and xe sub j-- they're both in the interval x sub k, x sub k minus 1. And by what I wrote up there, another way of-- or what follows simply from the definition of the modulus of continuity-- this is less than or equal to omega f of x sub k [? x sub ?] k minus 1 times x times j minus x prime j minus 1. And this equals-- now, this doesn't depend on j, so this is, again just a telescoping sum. So I just pick up times x-- so that's for one piece, and now I just add them up. Then I look at-- and so like I said, this is equal to the sum of these terms that look like this from k equals 1 to n. This is equal to the sum of terms like this from k equals 1 to n. That's what I wrote up there. And so if I apply the triangle inequality, this is less than or equal to sum from k equals 1 to n of f of [? xe ?] k times xk minus xk minus 1 minus s [? sub ?] f eta, which-- I have this estimate right here. I've already estimated-- so started off with this. And I showed it was less than or equal to this. So now let me stick that estimate into here-- less than or equal to sum from k equals 1 to n omega f of xk minus xk minus 1 times xk minus xk minus 1. So another thing which you should notice about the modulus of continuity is, if I have something here and then something bigger, this set gets bigger, and therefore, this sup gets bigger. So another property of the modulus of continuity is that it's increasing. This just follows from looking at the definition. There's more points to take a sup over f this thing gets bigger. So implies-- OK? So all of these lengths of these subintervals, the distance between partition points, those are all bounded by the norm of the partition, the biggest one. So this is less than or equal to sum from k equals 1 to n omega f of norm of x times xk minus xk minus 1. And now, this doesn't depend on k, so I just pick up the sum from k equals 1 to n of this, which is a telescoping sum. All I pick up is x sub n minus x sub 0, which is just b minus a, which is the estimate we wanted to prove. OK? OK, so now I'm going to do something that's sacrilegious, as far as board work goes. And I'm going to erase this board and prove the next theorem on it, because I want to keep what I prove-- or at least a statement of what I prove around for when we finally tackled the proof of the existence of the Riemann integral. So this is in permanent form on-- in digital form, and also in the notes. Also, I should mention, we're not following the textbook now. So you have to read the lecture notes or watch the lecture. All right, so this theorem here, theorem two, allowed us to compare two Riemann sums, as long as one partition is finer than the other. But there's a simple trick which we can now use to prove how to compare any two Riemann sums for any two partitions. So we have the following, theorem three. If xc and-- now they're any tagged partitions-- and f is a continuous function, then I look at the Riemann sum of this guy and then compare it to the Riemann sum of the other guy. 
This is less than or equal to the modulus of continuity evaluated at the first partition plus modulus of continuity applied to the second partition times the length of the interval. OK? What does this say? This says that, if I take two very fine partitions, so that the norms are very small, and therefore, the modulus of continuity evaluated at these norms a small-- remember, the modulus of continuity converges to 0 as the argument goes to 0. So if I take two very fine partitions, then the Riemann sums are very close together. The Riemann sums barely change. And so now we have some hope to see that sequences of Riemann sum do, in fact, converge. OK, so how do we prove this? It's a simple trick using the previous theorem to find a third partition to be the common refinement of these two. So take this partition and union it with the second one, and take a new set of tags to be the union of the two tags. So all the partition points of x and x prime-- I throw them together to get a new partition, x double prime-- and then the tags as well. Then x-- backwards-- this new partition is finer than x. It's finer than x prime. And therefore, by theorem two, we have this estimate which we can use. So if I look at the difference of the Riemann sums, and I add and subtract the Riemann sum corresponding to x double prime and [? xe ?] double prime, and then I use the triangle inequality-- so I'm doing two steps here-- so add and subtract the Riemann sum corresponding to-- and then apply the triangle inequality. For both of these, I can now insert this estimate here from theorem two, where-- what's the modulus of continuity being evaluated at? It's being evaluated at the courser partition. So x double prime is contained in x, so this is less than or equal to the modulus of continuity evaluated at x prime times b minus a. And then, for this one, x prime is coarser than x double prime. This one is contained in x prime-- so plus the modulus of continuity evaluated at x prime times b minus a. And that's what we wanted to prove. OK. All right, so now we're in a position to prove this theorem of the Riemann integral. And so what we're going to do is we're first going to come up with a candidate for what the integral could be. And then we're going to show that's the actual-- that that candidate satisfies the conclusion-- or this property that, no matter what partition I take, with norms converging to 0, the Riemann sums converge to this number. OK? So first we have to come up with a candidate number. All right. So first, take any-- so let's fix some partition of-- for the Greek letter aficionados out there, this is zeta. Let this be a partition of a, b with norm converting to 0 as r goes to infinity. You can always come up with some partition. So this [INAUDIBLE] tagged. You can always come up with one, at least one partition-- or sequence of partitions. So this is [? be a ?] sequence of tagged partitions. OK, now this is finally right. So you can always come up with a sequence of tagged partitions of a, b with the norms converging to 0. Take the first one to be just the whole interval-- so just left endpoint, right endpoint. Next one add-- the midpoint. That's now three partition points. Now add the midpoint of the previous two intervals you had. Now add the midpoint of the intervals you had before, and so on, and that'll build up a sequence of partitions with norm converging to 0. So I have this fixed sequence of tag partitions, and I claim that the Riemann sums converge for this guy. All right? 
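The particular sequence of partitions just described-- keep inserting all the midpoints-- is easy to write down explicitly. Here is a sketch, again with right-endpoint tags chosen purely for illustration, showing the corresponding Riemann sums settling toward a single value and consecutive differences shrinking, which is the Cauchy behavior the proof is about.

    import math

    def riemann_sum(f, points, tags):
        return sum(f(t) * (r - l)
                   for (l, r), t in zip(zip(points, points[1:]), tags))

    def bisected_partitions(a, b, levels):
        # The r-th partition has 2**r equal subintervals: each step inserts every
        # midpoint, so the norms (b - a) / 2**r go to 0 as r grows.
        points = [a, b]
        for _ in range(levels):
            yield points, points[1:]     # tag each subinterval at its right endpoint
            mids = [(l + r) / 2 for l, r in zip(points, points[1:])]
            points = sorted(points + mids)

    prev = None
    for points, tags in bisected_partitions(0.0, math.pi, 12):
        s = riemann_sum(math.sin, points, tags)
        print(len(points) - 1, s, None if prev is None else abs(s - prev))
        prev = s
    # The sums approach one number (the integral of sin over [0, pi], which is 2),
    # and consecutive differences shrink.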
So claim one-- the sequence of sums-- this is a sequence of numbers now-- OK. So it converges to some number. We're going to prove this by showing its Cauchy. Remember, Cauchy sequences of real numbers always converge. That's this completeness property of the real numbers. OK. So there's no other way to do this. Then, using the definition-- the epsilon [? m ?] definition-- so let epsilon be positive. So by theorem one, there exists a delta positive, such that if eta is less than delta-- I should say-- so we know that the modulus of continuity for f converges to 0 as eta goes to 0. So for all eta less than delta, omega f of eta is less than epsilon over 2 times B minus a. OK? Let me put a star by this guy. Now, since the norms of these partitions is-- are converging to 0, there exists natural number m0 such that, for all r bigger than or equal m0, the norm is less than delta. OK. And thus, for all r bigger than or equal to m0, if I look at the modulus of continuity evaluated at this norm of this partition-- so the norm of this partition is less than delta. And if I stick that into the modulus of continuity, anything I stick into the modulus of continuity which is less than delta-- which this is-- I should be less than epsilon over 2 times b minus a. So really, this is the one I want to star, maybe not this one. I am using this one to get it, but this is the key point here. OK. So here we're just using the fact that modulus of continuity converges to 0 as the argument converges to 0, and that the norms are converging to 0. OK? OK. So [? to show something's ?] Cauchy, we have to now choose m. I'm going to choose m to be m0. [INAUDIBLE] r, r prime bigger than or equal to m-- which, remember, we've chosen to m0. We have that the absolute value of s of f xr-- r, minus the Riemann sum, now with r prime-- absolute value. This is by theorem three, which we proved over there-- is bounded by the sum of the modulus of continuities evaluated at the norms times b minus a. And now r and r prime are bigger than or equal to m0, and so by star-- so this line here. This is by theorem three. But now, by star, by our choice of m0, this is less than epsilon over 2 times b minus a plus epsilon over 2 b minus a times b minus a. This is now by star, which equals epsilon. And therefore, this sequence of Riemann sums with respect to this one sequence converges. OK? And so we've proven the claim. Let me call the limit of this sequence something. I'm going to call it L. In fact, let's call it I for integral. Let I be limit as r goes to infinity of-- oh, I messed up my notation earlier. Sorry about that. The y's, and, x's and [? xe's ?] should have been y's and zetas. So that should have been y, zeta, y, zeta, and then y here, y here. OK? Now that's right. So I is defined to be the limit of these guys. So now we have to do one last thing in order to prove the theorem. I have this number I, and I claim now that I satisfies the properties of the theorem-- namely, that if I take any sequence of Riemann sums-- or any sequence of tagged partitions and take the Riemann sums, that converges to I. We've just shown that there is one that converges to. That's how I is defined. We took one partition-- sequence of tagged partitions and showed that the Riemann sums converge to some number I. So this number I depends on the partition, which I chose in the beginning. Now we want to show that this I, in fact, has that property, that no matter what partition-- sequence of partitions I take the Riemann sums converge to I. 
So that's the second and last thing I need to do to prove this theorem. So claim-- let x be now any sequence of tagged partitions, which are becoming finer and finer-- so with the norm converging to 0. Then I want to show that the limit as r goes to infinity of the Riemann sums corresponding to this sequence of partitions exists, and equals this number I, which I obtained from this one sequence of partitions. So once I've proven this, then I'm done. OK? This number I therefore satisfies that property that, no matter what sequence of partitions I take, the sequence of Riemann sums with respect to these partitions converges to that number. Of course, here I'm calling it I, but we denote it by the integral from a to b of f of x dx. So to prove this is not too difficult using what we have on the board. So remember, I is the limit with respect to this one sequence of partitions, and now we have an arbitrary sequence of partitions, which we want to show the Riemann sums converge to this number as well. So with y sub r and zeta sub r the partitions from before, we have-- by the triangle inequality, if I look at the Riemann sum with respect to this arbitrary partition now, minus I-- I want to show this converges to 0, so I'm just going to bound this by something that converges to 0. And then, by the usual argument with the squeeze theorem, that implies this thing converges to 0. So adding and subtracting the Riemann sum associated to the partition y sub r with tags zeta sub r, this is less than or equal to, by the triangle inequality, the absolute value of S sub f of x sub r, xi sub r minus S sub f of y sub r, zeta sub r, plus the absolute value of S sub f of y sub r, zeta sub r minus I. And so I'm almost there. For the first term, I use theorem three. So this is less than or equal to the modulus of continuity evaluated at the norm of x sub r plus the modulus of continuity evaluated at the norm of y sub r, times b minus a, plus the absolute value of S sub f of y sub r, zeta sub r minus I. OK? The norm of x sub r is converging to 0. So since the modulus of continuity goes to 0 as the argument goes to 0, this goes to 0, plus-- remember, the same thing for this guy-- this also converges to 0, plus-- how did we define I? It was defined as the limit of the Riemann sums corresponding to this first fixed sequence of partitions. And therefore, this last thing in absolute value converges to 0 as r goes to infinity. All right, so the absolute value of this thing, this Riemann sum, minus I is bounded by something converging to 0 as r goes to infinity. And therefore, by the squeeze theorem, we conclude that the limit as r goes to infinity of the Riemann sums is I-- so in other words, no matter what sequence of partitions we take, with norms converging to 0, the Riemann sums converge to this same number I. And that's the end of the proof. OK. It took a lot of work to do this. This is the work of Riemann, who was one of the greatest mathematicians of all time. Weierstrass was definitely one of the greatest analysts of all time. Riemann, just as a mathematician in general, was one of the greatest of all time. He not only did analysis-- he applied analysis to number theory. This is the content of the famous Riemann hypothesis, which maybe you've heard about, which is worth a million dollars-- says something about the zeros of a certain function, which then gives you information about prime numbers. And he also had deep contributions to the foundations of geometry-- or I should say the foundations of differential geometry-- but truly some deep stuff going on. All right, so now we have this notion of a Riemann integral. It's this number, the integral of f of x, which has this property.
No matter what sequence of partitions I take, with norms converging to 0, the partial sums converge to this number. So I'm often going to denote-- so if you like, this is some alternative notation I'll use. Instead of writing f of x dx, I may just write integral a, b, f. So this definition of the Riemann integral looks terrifying if you want to actually try and compute it. And it is. So that's the miracle of the fundamental theorem of calculus is that it gives us a way to compute it. And that's why I'm not doing any examples of computing it right now. But we can still learn about some properties of the Riemann integral without that to start. So now we're moving on to some properties of the integral. And the first is that it-- so this is a limiting process, but this limiting process is linear in f. If I look at Riemann sums of, let's say, f plus g, that's equal to the Riemann sum of f plus the Riemann sum of g. And therefore, since integration is a limit of this-- of Riemann sums, then integration should also be linear as well. And that's the first theorem is the linearity of the Riemann sum, Riemann integration-- f and g are a, b. And alpha is in R. So first off, alpha f plus g-- that's going to be-- so that should not be in a, b. This should be a continuous-- it's the end of a long day, OK? So sorry if I'm losing a little steam here towards the end. Anyways, alpha f plus g, if f and g are continuous-- this is a continuous function as well, so its integral is meaningful. And its integral is equal to alpha times the integral f plus the integral of g. OK? So what's the proof? The proof is basically what I said right before I stated the theorem, that Riemann sums are linear in the function. And then we just take a limit. Let's take a sequence of tagged partitions with norms converting to 0 as r goes to infinity. Then, if I look at the Riemann sum associated to alpha f plus g, it is easy to see from the definition, right? This is a sum from k equals 1 to [? in-- ?] so this is easy to see that this is equal to alpha times the Riemann some associated to f plus that guy. And now I just take a limit-- so take the limit as r goes to infinity. The left-hand side converges to the Riemann integral of alpha f plus g. The limit as r goes to infinity converges to-- because we know limits respect linear operations, the limit of the right-hand side is going to be alpha times the limit plus the limit. And therefore, this is equal to alpha times integral a, b, f plus integral a, b, g. OK. So like differentiation, it's also what one would call a linear operator. It takes a function and spits something out, but does it in a linear way. So for us, it takes a continuous function and spits out a real number in a linear way. The integral of alpha f plus g is equal to alpha times f plus g. Just like differentiation, the derivative of alpha f plus g if everything's differentiable, is equal to alpha times the derivative of f plus the derivative of g. But Riemann integration, the Riemann integral, or integration in general, is in some sense-- I don't want to say it's not miraculous. It still is a bit of a miracle that it exists for all continuous functions, but it's not as destructive an operator as taking the derivative, if you take as your baseline continuous functions. Last lecture we constructed a continuous function which is differentiable nowhere. So with differentiation, you can take a continuous function and not be able to take its derivative anywhere. 
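Before moving on, here is a minimal numerical sketch of the linearity property just proved: for one fixed tagged partition, the Riemann sum of alpha f plus g is exactly alpha times the Riemann sum of f plus the Riemann sum of g, so the limiting integrals inherit the same identity. The choices f = exp, g = cos, alpha = 3, and midpoint tags are arbitrary.

```python
import numpy as np

def riemann_sum(f, pts, tags):
    pts = np.asarray(pts)
    return sum(f(t) * (pts[k + 1] - pts[k]) for k, t in enumerate(tags))

a, b, n = 0.0, 1.0, 1000
pts = np.linspace(a, b, n + 1)
tags = 0.5 * (pts[:-1] + pts[1:])       # midpoint tags, an arbitrary choice

f, g, alpha = np.exp, np.cos, 3.0
h = lambda x: alpha * f(x) + g(x)

lhs = riemann_sum(h, pts, tags)
rhs = alpha * riemann_sum(f, pts, tags) + riemann_sum(g, pts, tags)
print(lhs, rhs, abs(lhs - rhs))         # equal up to floating-point rounding
```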
Well, with integration, if you start off with a continuous function, you can always take its integral. So somehow integration is a much smoother process than differentiation. All right, so I think we'll stop there.
MIT_18100A_Real_Analysis_Fall_2020
Lecture_23_Pointwise_and_Uniform_Convergence_of_Sequences_of_Functions.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: All right. So last time, we proved the fundamental theorem of calculus. And as a consequence, integration by parts of formula and the change of variables formula or use substitution. So let me just recall, for integration by parts, it was that if I have two continuously differentiable functions, then I can shift the burden of taking a derivative. F prime times g integrate is equal to f of b, g of b, minus f of a, g of a, minus the interval from a to b of g prime f. So I can shift that derivative over from f to g. And this is apart from the triangle inequality, probably one of the most useful theorems that comes out of calculus other than the fundamental theorem of calculus. In fact, for those of you who've taken quantum mechanics, which I mean, you don't have to. But or you heard of something called the Heisenberg uncertainty principle, which says something like something to the effect that you cannot measure the position of a particle and its momentum to arbitrarily good degree, you're bound by if you can measure the position of a particle very well, then your measurement of the momentum is going to be not so great and vice versa. And that's based on an inequality, and how do you prove that inequality? Integration by parts. So integration by parts is, in fact, responsible for one of the great head-scratchers from quantum mechanics. So just to back up my claim a little bit. Now, I'm not going to give that as an application. I'm going to give a different application related to Fourier series. So the Fourier series, what are these? So suppose we have a function. And I'm not going to say what type. Suppose f from minus pi to pi to R is 2 pi periodic. And so, the question that arose due to Fourier in his study of heat transfer-- so this is Fourier, I don't know, something like 200 years ago. He made the following claim, that the function f of x can be expanded in terms of simpler building blocks, in terms of simpler functions. So we haven't talked about Taylor series. We will in just a minute, or power series, which you've come into contact with, which is a way of, if you like, expanding a function in terms of polynomials. And now-- or monomials. Now, Fourier suggested that f of x can be expanded as a superposition of functions which are 2 pi periodic and kind of the most basic 2 pi periodic functions. Now, what is so special about sine x and cosine x? Well, this is a little bit deeper, the fact that they satisfy certain second order differential equation. And they are all of the solutions to the second order differential equation that are 2 pi periodic. So you should think of these as kind of being building blocks. Another way to think of this is analogous to for if you have a vector, so now this is not a partition. This is a vector, x1, x2, xn, then you can expand this vector as a sum of coefficients a sub n, a sub j, j equals 1 to n. You know what? Let me make this M. Let's make this N. Let's make this M, so it looks a little-- a sub n, e sub, where now, e sub n this is the basis vector given by 0 1 0 where this is in the end spot. So you can think of this expansion in terms of sines and cosines as being analogous to expanding a vector in terms of basis elements. Or you can think of it as a different way to expand a function other than Taylor series or power series. But these components arose in a natural way if one were to study the problem of heat transfer, which is governed by the equation dtu equals dx squared u. 
And then, along with an initial condition, that u of 0x, so at time 0, equals f of x. So now, just like for how we expand a vector into basis elements, there's a formula for computing these coefficients. So they should be a sub n should be x sub n over here. But what's a different way of obtaining these coefficients? So this is a vector in RM. And so, how do you obtain the coefficients a sub n? Well, if I take the inner product of both sides, the dot product, say of e x dot, let's say, e sub n prime, let's say-- so I've used Mn, let's say l. This is equal to the sum from n equals 1 to the M of a sub n, e sub n dot e sub l. Now, I wrote these basis vectors this way, because that's kind of a standard choice for RM. And what makes them standard is, they have unit length. And they're orthogonal to each other. So they form in orthonormal basis. So when I take the product of e sub n with e sub l, I pick up what is usually referred to as delta in l. Where here, delta in l, this is 1 if n equals l, and 0 if n does not equal l. And therefore, this just reduces to a l. So we see that a sub l is equal to x dot e sub l. And so, all of this discussion was in the setting of a finite dimensional vector in RM. And expanding in terms of the standard basis here. But it didn't have to be. It could have been as long as it's an orthonormal basis, then I get this relation, that the coefficient that appears in front of that vector is equal to the thing I'm interested in dotted with that vector, which is written here. So let's say we try and do the same thing now with f of x, except now and say-- so these are functions. So instead of taking dot products, which is a sum of components, let's take an integral. So if I take f of x and, if you like, dot it with sine of x in sum, which is-- you can think of as I said that the integral is, you should think of as maybe a continuous sum. What do we get assuming that this expansion holds, this is equal to the sum from n equals 0. So let me make this l. This is the sum from n equals 0 to infinity of a sub n sine n, x times sine. So let me-- forgetting to write the integrals here. Skipping a point I want to make as well. Sum, and just remember the sum is starting from 0 to infinity. I don't want to keep writing it. a n sine n x sine l x plus bn cosine in x sine n l x, sine x. And then, n equals 0 to infinity. I'll just write it. Stop being lazy. Now, assuming I can do what I'm about to do, and that's actually going to be a lot of the motivation for what we're going to discuss in our final chapter, assuming I can take this infinite sum and interchange it with this integral, this is the interchanging of two limits. The sum is the infinite limit. An integration is a limit. So assuming I can switch these two limiting processes, then I pick up a sub n minus pi to pi, sine in x, sine in lx, sine lx, sorry, plus b sub n integral minus pi to pi, cosine in x, sine lx dx. Now, you can actually sit down and compute this based on trigonometric identities. And what you get is that this is always equal to 0. And that this here equals pi times delta in l. So this equals the sum from n equals 0 to infinity a n pi delta in l, which equals pi times al. So we get this quantity here is assuming everything we've done is kosher equal to pi times a sub l. And then, to pick up the b sub l is the same, except now you integrate against cosine of lx. So similarly, pi times b sub l is equal to the integral from minus pi to pi of f of x cosine lx dx. 
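Here is a small numerical check of those orthogonality relations and of the coefficient-extraction idea, using scipy's quadrature purely for illustration; the test function 5 sin(3x) + 2 cos(7x) is an arbitrary choice, and, following the convention above, a_n is the sine coefficient and b_n the cosine coefficient:

```python
import numpy as np
from scipy.integrate import quad

def inner(u, v):
    """Approximate integral over [-pi, pi] of u(x) * v(x) dx."""
    val, _ = quad(lambda x: u(x) * v(x), -np.pi, np.pi)
    return val

# Orthogonality relations (for n, l >= 1):
print(inner(lambda x: np.sin(2 * x), np.sin))                    # ~0   (n=2, l=1)
print(inner(lambda x: np.sin(3 * x), lambda x: np.sin(3 * x)))   # ~pi  (n=l=3)
print(inner(lambda x: np.cos(4 * x), lambda x: np.sin(4 * x)))   # ~0

# Recovering coefficients of f(x) = 5 sin(3x) + 2 cos(7x):
f = lambda x: 5 * np.sin(3 * x) + 2 * np.cos(7 * x)
a3 = inner(f, lambda x: np.sin(3 * x)) / np.pi
b7 = inner(f, lambda x: np.cos(7 * x)) / np.pi
print(a3, b7)   # ~5 and ~2
```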
So the b sub l's and a sub l's are referred to as a Fourier coefficients of the function f. So if f from minus pi to pi to R is continuous, and 2 pi periodic, the numbers a sub n equals 1 over pi, integral from minus pi to pi of f of x sine nx, dx, b sub n equals 1 over pi, integral from minus pi to pi, f of x cosine in x, dx are referred to as the Fourier coefficients of f. And so, just using integration by parts, so what's the first question one should ask if it's even possible, or in what sense does f of x equal this infinite sum? Well, we haven't even gotten into that. But one question you can ask is, do these coefficients that come in front of these basic building blocks, sine nx and cosine nx, do they converge to 0? I mean, if we expect f of x to be equal to the sum of these basic parts, then the contributions from each should be getting smaller and smaller. So does a n and bn tend to 0 as n goes to infinity? And this is the content of what's usually referred to as the Riemann-Lebesque lemma. But it's usually stated in a different way. I'm just going to state it this way right now. And which is the following. If f from ab to R is continuously differentiable, then limit as n goes to infinity of a sub n equals the limit as n goes to infinity of b sub n equals 0. Now, the actual way the Riemann-Lebesque lemma is typically stated is, in fact, I don't need it to be continuously differentiable. I just need it to be continuous. This is still true. But we haven't done-- or won't do in this class approximation theorems for continuous functions. Which says that if you can do this for continuously differentiable functions, then basically you can do it for continuous functions. But this will suffice. So what this says is that the contributions coming from these building blocks is getting smaller, at least in the sense that the coefficients are getting smaller. But it says nothing about if that sum up there with the a n's and bn's defined this way actually converges to f. I do want to emphasize that, in fact, trying to straighten out this question, in what sense this series converges to f is really the motivation for a lot of analysis developed past in the first part of the last century and the last part of the century before that and forms the basis of what's called harmonic analysis. Which is a really beautiful subject and still an active area of ongoing research. So how do we prove this? Well, I stated the integration by parts formula earlier. So in fact, it'll follow pretty easily from that. Let's prove that the limit as n goes to infinity of b sub n equals 0. The one for a sub n is similar. There's just an extra piece. But I'm going to be a little bit lazy and do the easier one. We'll show-- so let's look at b sub n. This is equal to the integral from minus pi to pi of f of x. And in fact, let me write cosine nx times f of x, dx, dx. Why am I writing dx, dx? And now what I do is, cosine nx, I can write as the derivative of something. 1 over n times sine nx. If I take the derivative of that with respect to x, I get cosine nx. I didn't actually prove that. But you can look back in your calculus textbooks. We've proven enough to be able to make that precise. So by integration by parts, I can now shift the blame, or shift the burden of this derivative onto f. But look what I've gained. I've gained 1 over n here. So now, this is equal to 1 over n sine n pie, f of pi minus sine of n minus pi times f of pi-- minus pi, sorry-- minus 1 over n sine nx minus pi to pi, f prime of x, dx. 
And really, what this competition is showing is-- illustrating is the oscillatory nature of what's going on. Cosine of nx is oscillating as n gets very large between minus 1 and 1 and equal footing. So on average, you're getting the same amount of positive f as minus-- as negative f. So or you're weighting f in such a way that it's both positive and negative in equal amounts. Now, sine of n pi, no matter what n is, I get 0. Sine of n of minus pi, I get 0. So this first part drops off. So this is equal to minus 1 over n, minus pi, pi, sine in x, times f prime of x, dx. And therefore, if I take the absolute value of b sub n, this is less than or equal to 1 over n minus pi over pi, sine in x f prime of x, dx. If I bring the absolute value inside, so I can bring the absolute value inside and still get this. So in fact, before when I had the absolute value outside, it's the equality. But now it's a less than or equality-- I mean, an inequality. So now, sign of nx is always bounded by 1. So this is less than or equal to the integral of f prime of x, dx. So this is equal to 1 over n integral a, b, f. Now, this is just f prime. This is just a fixed number times 1 over n. So this converges to 0 as n goes to infinity. And b sub n, an absolute value, of course, it's always bigger than or equal to 0. And it's bounded by something converging to 0 as n goes to infinity. So by the squeeze theorem, we conclude b sub n converges to 0. And that's the proof. The proof for the a sub n's is similar, except now you can't throw away necessarily the endpoints. But it's still not a very big deal. And in fact, if we have time, I'll show you how much-- so in fact, one can prove-- and this is proven in classes on-- courses on harmonic analysis, that in fact, for a function which is continuously differentiable, this-- and 2 pi periodic-- this series actually does converge uniformly to f on this interval. And I haven't even said what uniform convergence means. But actually does converge to the function f of x. So this is the case for continuously differentiable functions. I'll give a proof later that, in fact, this series converges if f is twice continuously differentiable. We can actually do that using the fundamental-- the integration by parts again, essentially. But so, but there are a few things here that are behind the scene that are kind of swept away. First off, when we computed these formulas-- formulae, I guess-- we interchanged summation, infinite summation of functions with integration. When can we do that? In what sense does if this Fourier series converges, in what sense does it converge to f? For convergence of real numbers, there was just one sense of the convergence of real numbers. Now, when we have a sequence of functions, which is now what we're going to turn to, we'll have different notions of convergence to another function. And depending on in what sense that conversion-- that convergence takes place, some of these limiting operations may not interchange. So now, we're going to move on to the final chapter of this class. And I know it seems like we're kind of hitting a lot of different things now towards the end of the class. And we took it slow during the first part of class. But that's, like I said, that's-- I think I even said this at the start of class. We didn't have very much to go off of. We built things from the ground up. And the more technology you have, the more things you can prove, the more interesting questions you can ask. So now, we're going to go on to sequences of functions. 
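As a quick numerical illustration of the Riemann-Lebesgue statement, and of the 1 over n bound that the integration by parts argument produces, take f(x) = x squared on [-pi, pi] as an arbitrary smooth example (scipy is used here only for the quadrature):

```python
import numpy as np
from scipy.integrate import quad

f  = lambda x: x**2       # an arbitrary continuously differentiable choice
df = lambda x: 2 * x

int_abs_fprime = quad(lambda x: abs(df(x)), -np.pi, np.pi)[0]

for n in (1, 2, 5, 10, 50):
    b_n = quad(lambda x: f(x) * np.cos(n * x), -np.pi, np.pi)[0] / np.pi
    bound = int_abs_fprime / (n * np.pi)   # the proof's bound, with the 1/pi in b_n
    print(n, abs(b_n), bound)
    # |b_n| decays (here like 1/n^2), always staying under the 1/n bound
```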
And you could also put sequences and series. Because a series is just a special type of sequence of functions. So I motivated a little bit of why we would be interested in functions converging to other functions or sequences of functions converging to a function. But we could look at something much more basic. So let's take a step back and look at power series. And this should be thought of as motivation for what's to come, just like our discussion about Fourier series. And again, I'm not going to ask any questions about Fourier series on the homework or on the exam. So a lot of this is just-- this discussion was to motivate this theorem here. But now, I'm going to make a kind of a more precise motivation, I guess, for what's to come. So although we've had series forever, I never brought up power series. And it's for a reason. It's because I didn't think they belonged anywhere until we got to sequences of functions. So a power series about a point x0 is a series of the form sum from j equals 0 to infinity of a sub j, x minus x0 to the j. So the x0 is given. And the things that could change are the coefficients or this number x here. So theorem, which immediately follows from essentially the root test. Suppose, this number R, which is the limit as j goes to infinity of a sub j, 1 over j exists. So it's a finite number, positive, not negative number. And define rho to be 1 over R if R is bigger than 0, and infinity if R equals 0. Then we have the following conclusion, that this power series a sub j converges absolutely if x minus x0 is less than rho, and diverges if x minus x0 in absolute value is bigger than rho. And this number rho, we refer to as radius of convergence. So again, the proof follows immediately from the root test because if we take a limit as j goes to infinity of a sub j, absolute value x minus x0 j, 1 over j, this is equal to x minus x0. This kills that j. This is just a fixed number. So this pops out of the limit. And this limit exists. So this equals x minus x0 times R. And we have two things happening. This is less than 1 if x minus x0 is less than rho, bigger than 1 if x minus x0 is bigger than rho-- this number here. And therefore, by the root test, the theorem holds. So we see that this series, where the, if you like, what's given are the coefficients a sub j and x0, and what could change is x, that this series converges as long as x minus x0 is less than rho. So as long as we stay-- [SNEEZES] excuse me. As long as we stay in that interval, a symmetric interval about x0, then this series converges, absolutely. So we can define a function where if I take x in this interval, stick it into this series, I get out a number. So define function f now going from this interval. So x0 minus rho, x0 plus rho to R by f of x equals the number that gets spat out by this power series. [INAUDIBLE] j there. So for example, what is f of x? Let's say I take all of the coefficients to be-- let's say, x0 is 0, and all the coefficients are 1. So let's say I look at sum of x sub j. Then f of x, so we've already computed for a geometric series. This is equal to 1 over 1 minus x for x in minus 1, 1. So in the simple setting, 1 over 1 minus x is equal x to the j. Another example is, that if I take a sub j equals 1 over j factorial, x0 equals 0, then you've done this in an exercise, that this series here converges absolutely for all x. And meaning rho equals infinity. And this is how we define the exponential function. Exponential function, exponential of x to be what comes out of this series. 
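Numerically, the dichotomy in the theorem is easy to see for the geometric series, whose radius of convergence is 1, and for the exponential series, whose radius is infinite; the evaluation points below are arbitrary:

```python
import math

def geometric_partial(x, n):
    return sum(x**j for j in range(n + 1))

# Inside the radius of convergence (|x| < 1) the partial sums approach 1/(1-x):
x = 0.5
for n in (5, 10, 20, 40):
    print(n, geometric_partial(x, n), 1 / (1 - x))

# Outside (|x| > 1) the partial sums blow up:
print(geometric_partial(1.5, 40))

# The exponential series converges for every x (rho = infinity):
x = 3.0
print(sum(x**j / math.factorial(j) for j in range(60)), math.exp(3.0))
```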
And then, simply from this definition, you can show things that an exponential function should satisfy, x e to the n is equal to n e to the e of 1 to the n-th power, and so on, and so on. So that this really does obey what you believe an exponential function should look like. And also, it grows faster than any power of x as x goes to infinity, goes to 0, those types of things. But this is how the exponential function is defined. So we have this function that's defined by whatever this power series spits out, for x inside this interval of convergence. So then, I could write f of x as the limit as n goes to infinity of a sequence of functions. Because this is just how it's defined, where fn of x is the partial sum. It's just a polynomial. So for power series, the limiting function, you can write it as the limit of the partial sums, which are just polynomials. And so, I should say, for all x in this interval, we have this. So now, some questions arise. So what this function is equal to, the limit of these maybe simpler functions. These simpler functions are just polynomials for the case of power series. So like I said, 1 over 1 minus x is equal to the limit of these polynomials. Some questions should arise. I mean, analysis is about limits. You can think of that as half the story. First off, what is the limit? What are important limiting processes that we consider? The second question is, how do different limits interact? So let's pose that as a question now for a three-parted question for power series. And this will motivate-- and this is, again, motivating all of what we're going to be doing now. So is the function that I get as this limit of polynomials as the output from a power series, is it continuous? The individual pieces that I take a limit to get f of x, these polynomials or the partial sums are certainly continuous. They're just polynomials. So is the limiting thing continuous? Now, if so, is f differentiable? And in particular, since f is equal to the limit as n goes to infinity of the fn's, is f prime equal to the limit as n goes to infinity of the fn prime? So the derivative is a limiting process. So I'm taking the derivative of the limit. So I'm asking, can I take that derivative inside the limit? Can I swap the two processes? And the same with integration. If one, does the integral of f equal limit as n goes to infinity of the integrals of the fn's? So again, this is a limiting process that we're asking us to flip, because f is equal to the limit as n goes to infinity of fn. And what I'm asking is, can I take this integral inside that limit? Now, you can ask these questions not just for power series, but in a more general setting, which is what we're going to turn to now. But this should be in the back of your mind as the motivation for what we're doing. And apart from being just an academic question, it's also somehow giving you information over whether the formal manipulations that you're doing with Fourier series that are actually somehow modeling some physical phenomenon, are these formal manipulations even meaningful? So these are the three questions that motivate what we're going to do going forward. But we don't have to just stick to the setting of being in power series. This should be a very important example of a sequence of functions converging to a function, a limiting function. And then, we can ask these questions. But we don't have to just stick to power series. So let me move on to a more general setting in which we'll answer these three questions. 
And two modes of convergence for limits of functions or sequences of functions. So first definition, this is, in fact, what we showed or what we were talking about before for N natural number, let fn be a function from S to R. S is some non-empty subset of the real numbers. And let f be from S to R. We say, fn-- so the sequence of functions fn converges point-wise to the function f if for all x in S, by sticking x into S, so for each fixed x in S, if I stick this into fn of x, I get a limit. And this limit is f of x. So for example, going back to power series, if we defined f of x-- so let me just rewrite that example that I had up there. If I define f of x equals 1 over 1 minus x, fn of x to be in sum from j equals 0 n end of xj, then for all xn minus 1 to 1, limit as n goes to infinity of fn of x equals f of x. I.e. this, the sequence of partial sums corresponding to this power series, converges to 1 over 1 minus x point-wise on minus 1, 1. So I said, whenever you come across a definition, you should negate it. But the negation of this definition is not too difficult. A sequence of functions does not converge to another function point-wise if there exists some point, so that when I stick them into fn of x, fn of x does not converge to f of x. So let's look at another example, which is not a power series. Let's say, we take fn of x to be x to the n, where x is in the closed interval 0, 1. So what's happening here, as n gets very large, there's 1, 1, there's, I don't know, f 5 of x. And then, as n gets very large, these guys are dropping down even more and more. And what is-- are we picking up something in the limit? Well, let's look. Well, if x equals 1, and it's pretty clear that the limit as n goes to infinity of fn of 1, this equals 1. If I stick in 1 here, I get 1 for all n. And therefore, the limit as n goes to infinity of fn1 is 1. Now, if x is in 0, 1, then, I mean, we've done this limit before. The limit as n goes to infinity of fn of x, this is equal to the limit as n goes to infinity of x to the n. Now, x is less than 1. So x is being raised to a higher and higher power. This equals 0. Thus, what do we conclude? For all x in 0, 1, this sequence of functions x to the n converges point-wise to the function f of x, which is equal to 0 if x is in 0, 1, and 1 if x equals 1. So I draw another picture here of what the limit looks like. And you can start to see this, as m gets large, again, this is becoming more vertical there. But then going to 0. So for any fixed n, it's converging to this picture on the right. And so, we can already pick up something, or at least answer one of these questions, if we take them as a question about convergence or functions in the point-wise sense. So we could have asked this question now. After having this definition, suppose fn's are continuous converging point-wise to a function f. Is the function f continuous? And what this example shows is that, no, that's not the case. x to the n is always continuous. Yet, the limit as n goes to infinity, the point-wise limit is given by this function, which is 0 from 0 to 1, and 1 at x equals 1, which is not a continuous function. So already, we're kind of seeing that point-wise convergence, which is this first weakest mode of convergence that we as of now can say about power series and is not good enough to ensure that the limit is even continuous. Because this example shows that we have a sequence of continuous functions whose point-wise limit is not continuous. 
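A few numbers make the pointwise picture for f_n(x) = x to the n concrete: each fixed x below 1 is eventually pushed to 0, more slowly the closer x is to 1, while x = 1 stays at 1.

```python
xs = [0.0, 0.5, 0.9, 0.99, 1.0]
for n in (1, 10, 100, 1000, 10000):
    print(n, [x**n for x in xs])
# Every column with x < 1 tends to 0, while the x = 1 column is constantly 1,
# which is exactly the discontinuous pointwise limit described above.
```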
So as another example, you should always-- I like this kind of last chapter, because you can draw a lot of pictures. So I'm going to draw pictures of fn. So it's piece-wise linear. So I can write down what the function is, but I don't want to. I'm just going to draw a picture. So fn of x from 0, 1 to R. This is how it looks. So there's 1, 1. And what I do is, I go to the point, 1 over n. And the function fn of x is 0 up until then. And then, it's a linear function connecting 1 over n to 2n here. And then it's connects this point to n0, so or I should say, 1 over 2n, 2n, connects that to the origin. So that's f sub n of x. It's just piece-wise linear. So for example, if I want to draw f1, f1 would look like there's 1, 0, 2, 1/2. And let's say I wanted to draw f 100. What does that look like? So maybe I should make this one a little bit bigger. There's 1, 1 over 100, and then that should be 1 over 200. And then if I go up to 200, is this piece-wise linear function, which is getting-- it's 0 from 1 up to 1 over 100. So it's 0 most of the time. And it's 0 at the origin. But in between, it's very tall and very slim. And so, my claim is that for all x in 0, 1, limit as n goes to infinity of fn of x equals 0. So this sequence of functions converges point-wise to 0. So why is this? Well, let's just give a full proof of this rather than me talking it out. I mean, I'll talk it out and give a full proof. So let's look at the easiest spot first. And I don't even need the formula for these guys. I just need to know that they have this basic characteristic that their point-wise linear, I mean, that they're piece-wise linear connecting 0 to 1 over 2n and 2n here, and then down to 1 over n 0. And then, there's 0 between that and 1. So first off, if x equals 0, then all these functions are 0 at the origin. So they equals 0. So that equals 0. So that's fine. So now suppose x is in 0, 1. And so, what do we want to show? I want to show limit as n goes to infinity of fn of x equals 0. And here, so what's the point here? Now, there's 1, there's x. So I have to give a-- well, I'm not even going to do an epsilon delta epsilon M argument. I'm just going to show you what happens. So there's x between 0 and 1. Now, let's choose a very large integer so that 1 over M is less than x. So here's 1 over M to strictly to the left of x. Now, what is the graph of fn of x look like for n bigger than or equal to M? It's 0 from x equals 1 to x equal 1 over M, and then shoots up, and then comes back down over here to 0. But here's the point. It's 0 all the way from 1 to 1 over M. So in particular, at x, fn of x is 0. So if I look at this sequence, fn of x, which I'm trying to show converges to 0, it is 0-- no, it is a f1 of x, f2 of x, up to fn minus 1 of x. And then, at fM of x, so at the-- now, so this is fM of x spot, it's 0. And this point is only going to the left. So for all n bigger than or equal to M, now this will be 1 over n will be to the left of x. And therefore, fn of x will be 0. So this is 0, 0, 0, 0, and so on. So I have the sequence is eventually 0 for all n bigger than or equal to capital M. So and therefore, the limit, I mean, it's pretty easy to take a limit of a constant sequence. And which proves that this sequence of functions converges point-wise to 0. Now, I didn't come up with this fancy example for just any old reason. It'll come back in a minute when we start answering some of these questions, or asking them, again, within the context of these two convergence-- ways of converging. 
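Here is a small sketch of that spike example in code; the formulas for the two linear pieces are reconstructed from the description above (rising to height 2n at 1 over 2n and falling back to 0 at 1 over n), and the sample point x = 0.05 is an arbitrary choice:

```python
def f(n, x):
    """Piecewise-linear spike: 0 at x = 0, height 2n at x = 1/(2n),
    back to 0 at x = 1/n, and identically 0 on [1/n, 1]."""
    if x <= 0 or x >= 1 / n:
        return 0.0
    if x <= 1 / (2 * n):
        return 4 * n * n * x        # rising edge
    return 4 * n * (1 - n * x)      # falling edge

x = 0.05                            # any fixed point in (0, 1]
for n in (1, 5, 10, 20, 100):
    print(n, f(n, x))
# The values first grow as the spike passes over x, then lock at 0 once
# 1/n <= x (here, for every n >= 20), so pointwise the limit is 0 even
# though the maximum of f_n, namely 2n, blows up.
```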
So, so far, I've only given you one definition of convergence-- point-wise convergence of a sequence of functions. And now, I'm going to give a slightly-- I'm going to give a stronger-- it's not slightly, it's much stronger-- definition of convergence of a sequence of functions. So we have a sequence of functions and a given function from S to R. S is a non-empty subset of R. Then we say the sequence fn converges point-wise or uniformly to 0-- uniformly to 0-- uniformly to-- it's the end of the day. If I start mixing up some of my words, I'm always going to correct it. But the first word out of my mouth may not be the correct one-- converges uniformly to f of x to f if-- now we have an epsilon in statement. For all epsilon positive, there exists an M natural number such that for all n bigger than or equal to M, for all x in S, fn of x minus f of x is less than epsilon. Now, I want to make a brief comment. This looks suspiciously like point-wise convergence, if you just wrote down what it means for the limit as n goes to infinity of fn of x goes to f of x. Except, there's a very subtle and important point. And that is, where does for all x and S appear? For point-wise convergence, you can state point-wise convergence as this being at the start of the line, for all x in S, for all epsilon positive, blah, blah, blah. Here, it appears at the end of the line of the quantifiers. And this makes a very important difference between point-wise convergence and uniform convergence. Point-wise convergence means, I take a point x, I stick it into fn of x. That gives me a sequence of numbers. And point-wise convergence says that sequence of numbers converges to f of x. For each x, I get a sequence of numbers, which converges to f of x. Now, uniform convergence is actually saying something stronger. And I'll say that. So let me, in fact, let me draw a picture that goes with this definition. Let's make S to be an interval. Let's say my limiting function is f is given by this graph. And so, what I'm going to do is basically shift the graph up and down by epsilon, meaning this length is epsilon, and so is this length, all the way across. So let me shade this in and re-outline-- oh. So I get this little, if you like, shaded area around my function f. So this is f, f of x. That's the graph. And the shaded part, this is the set of all x and y, such that f of x minus y is less than epsilon. So I get a little tube snaking with f. Now, what uniform convergence says, is that for all n bigger than or equal to some M, so given epsilon, for all n bigger than or equal to M, if I were to draw the graph of fn of x, it better fall inside this tube across all of a, b. See, this tube is defined for all x between a, b. So it's making a statement about how close fn of x is to f of x across the entire set. Point-wise convergence just says, if I put an x into fn of x, then eventually those numbers are getting close to f of x. Uniform convergence is a global property. It's saying, across the entire set, as n is getting large, the graph of this fn is getting very close to the graph of f of x. Not just if I fix an x, the fn of x at that point are converging to f of x. So I've said a couple of times that uniform convergence is stronger than point-wise convergence. Let me actually prove this now. So let me prove that following theorem. So if I have a sequence of functions from S to R, and fn, rather than write converges to f point-wise or uniformly, I'm going to put an arrow. 
And then, with the description afterwards, uniformly on S. And then this implies fn converges to f point-wise on S. So it's very simple. Again, what is the picture that's going on for uniform convergence is that fn is getting close to f across the entire set that we're looking at. So certainly at one point, which is all you need for point-wise convergence. Each fixed point we should be getting close. So let epsilon be positive. So first off, let's fix a number in the set S. So now we want to prove that the limit as n goes to infinity of fn of c equals f of c. Let epsilon be positive. Since fn converges to f uniformly, there exists a natural number M0 such that for all n bigger than or equal to M0, for all x in S, fn of x minus f of x is less than epsilon. So choose M to be this M0, the M that is for this epsilon. Then for all n bigger than or equal to M, let's call this equation star, star with at the single point x equals at c, implies fn of c minus f of c is less than epsilon. And thus, the limit as n goes to infinity of f of c, fn of c equals f of c. So I don't think I have enough time to do the example that I want to do. So I'm just going to leave you on the edge of your seat by stating the following theorem. That in fact, so this is a one-way street. Meaning point-wise convergence does not imply uniform convergence. So we just proved uniform implies point-wise. But the converse does not hold. And what we'll prove next time is for in the setting of this simple example of x to the n. So-- and so, what we'll prove next time is the following. If I take any b between 0 and 1, then fn convergence to f uniformly on the set 0, b. So these functions are defined on 0, 1. So they're certainly defined on 0, b for b less than 1. But however, this sequence of functions does not converge uniformly to f on 0,1. So here, this second part, since the fn's converge to point-wise on 0, 1, the second part says that this is a one-way street. This is not a two-way street. These two modes of convergence-- uniform convergence and point-wise are not equivalent. All right. I think we'll stop there.
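As a preview of that claim, here is a rough numerical sketch with f_n(x) = x to the n: on [0, 0.9] the largest deviation from the pointwise limit is 0.9 to the n, which dies out quickly, while on [0, 1) there are always points where x to the n is as close to 1 as you like. The grids below are arbitrary finite samples, so they can only hint at the true supremum over [0, 1):

```python
import numpy as np

grid_b = np.linspace(0, 0.9, 1001)      # a sample of [0, 0.9]
grid_1 = np.linspace(0, 0.999, 1000)    # a finite sample of [0, 1)

for n in (5, 20, 100):
    print(n, np.max(grid_b ** n), np.max(grid_1 ** n))
# max over [0, 0.9] is 0.9**n, which tends to 0: uniform convergence there.
# The true supremum over [0, 1) is 1 for every n (take x close enough to 1),
# so the convergence is not uniform on [0, 1); the grid only hints at this,
# since its largest point is 0.999.
```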
MIT_18100A_Real_Analysis_Fall_2020
Lecture_20_Taylors_Theorem_and_the_Definition_of_Riemann_Sums.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: --differentiable as many times as you like. And the derivative at 0 equals 0 for all n. Why do I bring this up? Because then the Taylor polynomial for this function that I've written here at 0-- so this is the Taylor polynomial at 0-- just equals, again, the sum of the derivatives evaluated at 0 times x minus 0 to the k. But all of the derivatives are 0. And thus, the function is, in fact, equal to the remainder term near 0. So you see, the remainder term carries all of f, so it doesn't necessarily need to be small. And what I'm trying to say is that in general, you can't just throw away the remainder term and expect that to be even near the point x, some sort of faithful representation of the function just by the Taylor polynomial. Because as we see for this function, the Taylor polynomial is identically 0. If I throw away the remainder term, I would be saying f is 0, but it most certainly is not near x equals 0. So that was the point of that discussion. Now, let's give the proof. And as you see, we're just going to kind of apply the mean value theorem repeatedly to higher derivatives of f, but not necessarily of f, but of a function we cook up out of f. Let's take two points, x0 and x, not equal to each other. Of course, if they're equal to each other, we can take C to be whatever we want, because then, f of-- because then what we pick up is f of x equals f of x on the right-hand side. So we can just consider the case that x0 does not equal x. And let M be-- this is just a number depending on x and f0 over x minus x0 to the n plus 1. P sub n This is, again, the Taylor polynomial of degree n. f of x minus P sub n of x over x minus x sub 0. So this is just a number depending on x and x0. Then just rewriting this, this means that f of x is equal to P sub n of x, the Taylor polynomial of n at x, plus Mx times-- Mx x0 times x minus x sub 0 over n to the n plus 1. Now, the goal is to show that, in fact, this number can be written as the n plus 1 derivative evaluated at some point over n plus 1 factorial. Now, the goal-- show there exists a C and a B such that Mx equals f n plus 1 over C over n plus 1 factorial. Now, what is this defining characteristic of this Taylor polynomial at-- this n-th order Taylor polynomial? Evaluate with respect to x0. Well, the point is that this Taylor polynomial agrees with f at x0 up to n-th order, up to n derivatives. In other words, if I take the k-th derivative of f and evaluate it at 0, this is the same as taking the k-th derivative of the Taylor polynomial and evaluating it at 0. So the Taylor polynomial agrees with f up to n derivatives at the point x0. Again, this is the whole point of Taylor polynomials, is that they, at least at the point, agree with f up to n-th order. Does it mean they agree with f, or even are a good representation of f, away from x0 like we just saw? But at least at x0, they agree with f. Now, I'm going to define a new function, which I'm going to start applying the mean value theorem to, and hopefully come up with this C. It's g of s equals f of s minus P sub n of s minus this number from earlier times s minus 0 to the n plus 1. And something to note is that this function here, this whole function, so g-- first off, f is n plus 1 times differentiable. This is a polynomial, so it's n plus 1 times differentiable. And this is just a polynomial, also. And s. So it's n plus 1 times differentiable. So ns. Let me draw a picture. We have x0 and x. 
At least in the picture, x is bigger than x0, but that doesn't really matter. What do we know about g of x0? Well, this is equal to f. And now, when I stick in x0 here, I get 0. And now f of x0 minus P n of x0, again, by this first thing here for k equals 0. This is equal to 0. And now, what do I know about g evaluated at x? This is equal to f of x minus P n of x. Remember, the variable that I'm changing-- or at least, the free variable there, is s. So if I stick in x, I get f of x minus P n of x minus M, this constant from earlier, which I chose depending on x and x0. But using this relation here, this is 0. I have that-- the function f at x0 and at x is 0. By the mean value theorem, or Rolle's theorem-- so by mean value theorem, there exists a point x1 between x0, x such that g prime of x1 equals 0. Yeah? Now, remember, at x0-- or g prime of x1 equals 0. Now, at x0, we have that-- OK. So at g prime of x1, at x1, g prime is 0. But also, if I look at the derivative of g at x0, this is equal to f prime of x0 minus P n prime of x0 minus-- now, here, I'm working under the assumption, just for illustration purposes, I'm assuming n is, say, bigger than or equal to 2, at least from what I'm writing down right now. But if I take the derivative of this and plug in x0, then I will also get 0 here. So I just get f prime of x0 minus P n prime of x0. So this equals 0. So I have g prime of x1 equals 0, g prime of x0 equals 0, and therefore, by the mean value theorem applied again, there exists a point x2 between x and x0 such that the second derivative of g evaluated at x2 equals 0. And now, I just iterate this. Because I still know, at x0, the second derivative of g is also 0 as long as n is bigger than or equal to 2. And then I'll get that there's x1 here-- let me write here, at this point, we know g equals 0, g prime equals 0, and so on. At this point, g prime equals 0. And then at x2-- then the fact that g double prime here at this point is 0. And here, we apply the mean value theorem again. And we get a point, x3, in between them where, now, the third derivative equals 0. And we can keep going on, up until a certain point. And what point is that? That's when I've taken away n derivatives here, and all that I have left here is s minus x0. Let me summarize. Continuing in this way, we see there exists-- we see for all k between 0 and n, there exists an x sub k between x0 and x such that the k-th derivative at x sub k equals 0. In particular, at the k equals n stage, what do I have? x0, x, and this is x sub n. I'm just now going to repeat this argument one last time, and we'll see where that leads us. Since g-- the n-th derivative of g evaluated at x sub 0 equals 0-- again, this is coming from this relation here. Let me, in fact, write that again. This is equal to f of x0 minus P n of x0. And I'll even write out, this is equal to Mx x0 n plus 1 factorial times x0 minus x0. This is what happens when I take n derivatives of ns, of this monomial here. This, of course, is 0 equals 0. Since we have that and we have, it's equals 0 at this other point, there exists, by the mean value theorem, now applied to g-- the n-th derivative of g. When I write mean value theorem here, I'm not applying it to the function g. I'm applying it to the derivative. Here, I was applying it just to g. Here, I was applying the mean value theorem to the derivative of g. And, here I'm now applying the mean value theorem to the n-th derivative of g. Let me make that perfectly clear. 
There exists a number, C, between x and x0 such that the n plus 1-- the derivative of g, the derivative of the n-th order derivative of g, so the n plus 1st derivative of g-- of C equals 0. But what does this mean? Now, if I take n plus 1 derivatives with respect to s of this over here, I get f. Now, if I take n plus 1 derivatives of an n-th degree polynomial, I get 0. If I take two derivatives of a degree 1 polynomial, which is just x, I get 0. If I take three derivatives of a degree 2 polynomial, I get 0. So I get 0 for when I differentiate n plus 1 times an n-th degree polynomial minus this constant again times n plus 1 derivatives of this monomial here in s. Remember, all of these derivatives I'm writing down here, these are all in terms of s. Times n plus 1 factorial. And this equals 0. This is just this here expressed here. And it should be C, I'm sorry. Because we're plugging in C. But that means precisely that, which is what I wanted to show existed. At this point C, this constant from earlier, which, remember, was defined in this way, is actually equal to the n plus 1 derivative of f evaluated at some C. And therefore, f of x is equal to P n of x plus-- x C times-- where C is between x and x0. Again, Taylor's theorem says a couple of things, but it doesn't say certain things. The mean value theorem, as it's written, says there exists some point in between so that the secant line from f of b to f of a is equal to the derivative of the function, the tangent to the graph, at some point in between. But it doesn't tell you that the function near a point can necessarily-- what am I trying to say? What Taylor's theorem does say is that you can iterate the mean value theorem for higher-order derivatives. But what it doesn't say is that this polynomial that you get over here, which you interpret kind of as an approximation of the function f near x0, it doesn't say that approximation is necessarily good. Because we just saw from this example that that remainder term may end up being the entire function. But still, that doesn't make it any less useful in applications. Let's give a simple application of Taylor's theorem, which perhaps you had endless homework problems or exam problems on back when you first took calculus and were finding critical points and trying to characterize them as relative minimums or relative maximums. We have the second derivative test, which says the following-- which states that, suppose I have a function from the open interval a, b to R. And suppose this has two continuous derivatives on this open interval a, b. If, at a point in a, b, the derivative equals 0, and the second derivative of f, you evaluated it at 0, x sub 0 is positive, then f has a relative min at x0. And I should say that this is a strict relative min. What's the difference between a strict relative min and just a relative min? A strict relative min, I mean that if I'm at any point other than x0 and I'm nearby, then f of x is bigger than f of x0. Let me just write that here. That means near x0-- OK. In fact, let's just briefly recall what the definition of relative min is. And this will allow me to state what it means to be a strict relative min. This means there exists a delta positive such that for all x, n-- such that for all x, x minus x0 implies f of x is bigger than f of x0. This is the definition of strict relative min. A strict relative min is a relative min because, what's the only thing missing from the definition of relative min is, what happens if I evaluate at x0? 
And then at x0, we get f of x0 equals f of x0. so a strict relative min is a relative min, but it's a little bit stronger. Because it's saying that as long as x is not equal to x0, meaning this thing is bigger than 0, f of x is bigger than f of x0, not bigger than or equal to. I hate doing this, but the theorem's stated over there, and now we need to go across the room to do the proof. f has two continuous derivatives on a, b. And therefore, the second derivative is continuous at 0. Since the second derivative is continuous, we get that the limit, as x goes to x0-- or, let me put-- instead of x, say C. This equals f double prime of x0, which is positive, by assumption. That's what we're assuming. And therefore, by an exercise in one of the assignments, since this limit, this implies that there exists a delta 0 positive such that for all 0 bigger than C, bigger than x0-- and in fact, we can include-- let me see. There exists a delta 0 positive such that, for all C, satisfying-- we get that f prime of C is positive. All I'm saying is we have this point, x0. x0 plus delta 0. x0 minus delta 0. And then on this interval, f double prime of C is positive. You proved that, in fact, in an assignment. If the limit of a function as I approach a point equals l, which is positive, then near the point, the function has to be positive. Now, I have to verify that-- what am I trying to do? I'm trying to verify that I have a strict relative minimum so that there exists a delta-- I have this delta 0, which ensures that the second derivative is positive on this interval. So I say choose delta to be this delta 0. And now, I have to show that this delta works, meaning for all x satisfying that inequality, I have f of x is bigger than f of x0. So take an x between delta-- I mean, within delta distance to x0. Here's x, say. Then by Taylor's theorem, there exists a C between x and x0-- so here's x. There's this point C between x and x0, which I can always choose strictly in between them, such that I have that f of x equals f of x0 plus f prime of x0 times C minus x0 plus-- no, that should be x, sorry-- plus f double prime of C over 2 times x minus m 0 squared. Now, at x0, the derivative is assumed to be 0. We're assuming the derivative vanishes at x0 and the second derivative is positive there. So this equals f of x0 plus f prime of C over 2 times x minus x0 squared. Now, on this whole interval, which is where I'm looking at, f double prime of C is positive. So this thing here is positive. And as long as x minus x0 is not 0-- as long as x is not equal to x0-- this thing is positive. This is a square. So this is strictly bigger than f of x0, which is what I wanted to prove. So I have proven that f of x is bigger than f of x0 on this interval here. Of course, the picture that goes along with this is something like, let's say, the point x0, 0, at least near this point, the derivative is 0. The second derivative is positive. So this is how the function should look. CASEY RODRIGUEZ: So that concludes what we're going to say about differentiation. I have put in the assignment the most useful version of L'Hopital's rule, which is kind of the only other main thing we're missing right now from just the theory of differentiation. But remember, differentiation is a bit of a miracle, as I've said before, because there exist continuous functions that never have a derivative. Integration, which is what we're moving on to now, is not so much of a miracle. 
Integration is less of a miracle because, as we'll show, every continuous function has a Riemann integral, which is obtained by a different limiting process. So all of these notions we're talking about-- continuity is a notion that involves limits, differentiation is a process involving limits, and integration is a process involving limits. But somehow, integration is not as harsh a process as differentiation. We're moving on now to Riemann-- I should say the Riemann integral, but I'll say Riemann integration. What is Riemann integration? You were told this in calculus, but maybe in not so careful a way. This is a theory of-- or rather, this is a number that we associate to a function that you interpret as the area underneath the curve. It is not, as maybe you were told, somehow magically equal to the area underneath the curve. There is no pre-existing notion of area underneath the curve. The Riemann integral is a number which you interpret as the area underneath the curve because it agrees with what you think the area underneath the curve should be for simple examples-- for example, a half-circle or just a box. These two notions agree. And therefore, you interpret the Riemann integral, which is a number obtained by a limiting process, as the area underneath the curve. It is not that somewhere out in the universe there is this notion of area underneath the curve, and the Riemann integral magically coincides with that notion. No. It is a theory, if you like, of assigning a number that we interpret as the area underneath the curve. And it's very good-- especially once we get to the real miracle of calculus, the fundamental theorem of calculus, which connects the derivative to integration-- it's fantastic in its ease of computing. Hopefully, at some point, you go on to learn about Lebesgue integration, which is a much more versatile notion of area underneath the curve, and a little bit more robust: we have better theorems that you can then prove and use. But Riemann integration is a place to start. And in fact, in some treatments, Lebesgue integration is treated as the completion of Riemann integration, just as the real numbers are the completion, in some sense, of the rational numbers. Let's set up some definitions and notions that we'll need. I'm just going to be talking about Riemann integration of continuous functions. This is the simplest way to go. Why not for more general functions? Because in general, a function does not have a Riemann integral. So you could ask, can you characterize which functions do have a Riemann integral? And the answer is: the functions which are continuous, in a sense, almost everywhere. "Almost everywhere," though, we don't have the machinery to describe-- that's a measure theory course. Because we don't have the machinery to state a precise "if and only if" statement about when a function is Riemann integrable, I'm just going to do the Riemann integral for continuous functions, which is nice and simple enough-- and still pretty. Let me just introduce, first, some notation that I'll be using a lot: C of a, b. This is going to be the set of all continuous functions from a, b to R-- so f from a, b to R, f continuous. Now, as I said, we're going to associate to an interval and a function a number, which we will later interpret as a notion of area underneath the curve.
This process is a limiting process where we're going to be taking the domain and cutting it up into smaller and smaller pieces, and somehow writing down a number that we think approximates the area, is a good approximate area underneath the curve. I'm going to assign some words to this breaking down process. Partition of the interval a, b. This is just a finite set x underline, which I'll write in this way. It's a finite set, which I write x0, x1, x2, up to xn, with the property that x0 is equal to a is less than x1, less than x2. The norm of a partition, which I denote with these two vertical lines on either side of x underline, is by definition, the max of the differences between these partition points. I refer to these points that are in the partition as partition points. This is x1 minus x0, and so on, xn minus xn minus 1. A tag for partition x bar is a finite set xi. Get used to some Greek letters in your life. Xi equals C1 up to Cn. As before, in the partition, we started off with a 0 here. We started off with 1. Such that each of these xis lie between partition points. In other words, x0 is less than xi 1 is less than x1. And the pair is referred to as a tagged partition. Although maybe it looks a little bit fancy, it's not. A partition is, you take your interval a, b, and you cut it up into pieces, with your first point always being a and your last point always being b. So x1, x2, and because I can't draw n points, I'm going to draw four points. x3, x4. There's a partition of a, b, into-- think of these points as being the endpoints of little intervals that I've broken up the bigger interval into. And the xis are just points in each of these little intervals. C1 has to land there. xi 2 could land in the next one. It could actually be the endpoint if we like. xi 3. Let's say it's the midpoint. We'll say xi 4 is there as well. The tagged points are just lying in these smaller intervals. And at least in this picture, the biggest separation between partition points would be something like x3 minus x2 here. The norm of a partition is the length of the largest subinterval. I drew kind of something abstract here. Let's make this more concrete. Let's say I'm looking at-- just to write down a few examples, let's say my interval is 1, 3, and then my partition are the points 1, 3/2, 2, 3. And then my set of tags are 5/4-- just a midpoint-- 7/4, 5/2. So my partition is 1. There's 2. 3/2. Those are my partition points. And meanwhile, my tags are the midpoint. And then the norm of this partition is the maximum of the lengths of these smaller subintervals here-- not the ones using C, but the one with the partition points. So max of 3/2 minus 1, 2 minus 3/2, 3 minus 2, and is 1-- the length of this subinterval. Now, given a tagged partition, we're going to associate a number to this tagged partition, which we interpret as an approximate area. Let f be a continuous function, xi, a tagged partition the Riemann sum associated to-- I should say, of f-- associated to the tag partition xi is the number s sub f of x bar-- I mean, x underline, xi underline, which is the sum from k equals 0 to k equals 1 to n of f of xi k times xk minus xk minus 1. Again, what we interpret this number as-- how do we interpret this number? We interpret it as somehow-- we give meaning to this number as an approximate area. If this is a, b, and there's a function f, and let's say those are the partition points. So x1, x2, x3, x4, x0. And let's say the tags are just the right endpoints of each of these smaller intervals. 
Then what is this number, at least in terms of this picture? That's a little off, but anyways. What I've shaded in, this area, this equals this Riemann sum of f associated to this tag here. And let me go over, again, the graph of f. This number here which we've come up with we interpret as somehow being an approximate area. Again, I don't like saying area underneath the curve, because that presupposes that there is a notion of the area underneath the curve independent of what we're doing here. But that's not the case. We are, in fact, giving a theory-- a mathematical theory-- of area underneath the curve. We are prescribing a number which we interpret as the area underneath the curve. These Riemann sums we interpret as being approximate areas. What we would like to do is somehow take a limit as the lengths of these subintervals get smaller and smaller, as the norm of the partitions go to 0. And what we would like to say is that these approximate areas-- these are just numbers-- converge to some limiting number-- a, say. That number we refer to as the Riemann integral of f, and we interpret as the area underneath the graph of f. Now, for this to work, we have to show that as we take partitions with smaller and smaller norm, where the intervals get smaller and smaller, these approximate areas actually do converge to some number. And that's going to be the content of the next lecture, in which we'll prove the existence of the Riemann integral and do some properties about. So the take-home point is, again, there is no definition of area underneath the curve independent of what we're doing. It's not like, out there in the universe, there's a notion of the area underneath the curve, and when we compute the Riemann integral, magically, those two things-- those two numbers-- coincide. No. We are giving-- we are constructing a theory of the area underneath the curve, which, for example, for a half-circle, or square, or ellipse, do, in fact, coincide with stuff you know from ordinary geometry. And therefore, it gives a good theory of area underneath the curve. But in order for us to construct that theory, or how we're constructing that theory, is for a continuous function, we define a number associated to a partition, which we interpret as approximate areas. We would like to say these approximate areas converge to some number as the partitions get finer and finer, or as the norm gets smaller and smaller. And that limiting number we interpret as the area underneath the curve. That will be what we do next time, is actually show the existence of this limiting number, which is the Riemann integral of f. And we'll stop there.
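As a small numerical companion to these definitions, the following Python sketch computes the Riemann sum of a tagged partition and then watches the sums settle down as the norm shrinks. The integrand f(x) = x squared on [1, 3] is just an illustrative choice; the partition 1, 3/2, 2, 3 with midpoint tags is the example worked out above.

```python
# Riemann sums for a tagged partition, following the definitions above:
# S_f(x_, xi_) = sum over k of f(xi_k) * (x_k - x_{k-1}).

def riemann_sum(f, points, tags):
    """Riemann sum of f for the tagged partition (points, tags)."""
    assert len(tags) == len(points) - 1
    return sum(f(tags[k]) * (points[k + 1] - points[k]) for k in range(len(tags)))

def norm(points):
    """Norm of a partition: the length of its largest subinterval."""
    return max(points[k + 1] - points[k] for k in range(len(points) - 1))

f = lambda x: x ** 2          # illustrative integrand, chosen just for this sketch

# The example partition of [1, 3] from above, with midpoint tags.
points = [1, 1.5, 2, 3]
tags = [1.25, 1.75, 2.5]
print(norm(points))                      # 1, the length of the subinterval [2, 3]
print(riemann_sum(f, points, tags))      # one approximate "area"

# Refine: uniform partitions with midpoint tags; the sums approach 26/3 = 8.666...
for n in (4, 16, 64, 256):
    pts = [1 + 2 * k / n for k in range(n + 1)]
    mids = [(pts[k] + pts[k + 1]) / 2 for k in range(n)]
    print(n, riemann_sum(f, pts, mids))
```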
MIT_18100A_Real_Analysis_Fall_2020
Lecture_16_The_MinMax_Theorem_and_Bolzanos_Intermediate_Value_Theorem.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: OK, so let's continue with our discussion about continuous functions, no pun intended. First, let me recall a few facts. Well, this is not a definition, but this was a theorem we proved last time, so I won't recall the definition of continuity. But we proved this theorem characterizing continuity: if we have a subset S of R, an element C in S, and a function f going from S to R, then f is continuous at C if and only if for every sequence x sub n of elements of S such that x sub n converges to C, we have the limit as n goes to infinity of f of x sub n equals f of C. So this was a theorem we proved last time. And let me recall an older theorem, which I said was very powerful but never really gave you any powerful application-- we'll start giving some powerful applications now. This was the Bolzano-Weierstrass theorem, which states that every bounded sequence of real numbers has a convergent subsequence. And we're going to use these two theorems today and in the next lecture to prove that a continuous function on a closed and bounded interval is very well-behaved. So what we're going to show, let's call it the theme, is that if f from a closed and bounded interval is continuous, meaning it's continuous at every point, then it is well-behaved. And for example, the first thing that we're going to prove, which will be a combination of two theorems, is that the image of a closed and bounded interval by a continuous function is another closed and bounded interval. And the two endpoints of that image could be equal, so the image could just be a single point-- for example, if f is a constant function, then the image would just be a single point. So this is the theme of this lecture and the next lecture. And so, let's start. Basically, we're going to prove two theorems. One is called the min-max theorem, and the other is called Bolzano's intermediate value theorem. The min-max theorem will tell you that f of an interval is always contained in an interval of this type, and then the intermediate value theorem will tell you everything in between two certain bounds is attained by f. So let's start off with showing that f of a closed and bounded interval is a bounded set. So let me just recall what it means for a function to be bounded. We say that a function f from a subset S of R to R is bounded if there exists a non-negative B such that for all x in S, the absolute value of f of x is less than or equal to B. So for example, if I take f of x to be 3x plus 1, and here S is the closed and bounded interval 0, 1, then f is bounded. So pictorially, what does this mean? Well, f of x equals 3x plus 1, so at 1, it's going to be 4. And pictorially, a function is bounded if the graph is always bounded between two real numbers. And for us, f of x is always bounded between 0 and 4, in terms of this absolute value. So f of x equals 3x plus 1 in absolute value-- this is less than or equal to, by the triangle inequality, 3 times the absolute value of x plus 1. x is between 0 and 1, so its absolute value is bounded by 1. So this function is bounded. And so, how about a function which is not bounded? What does that mean? Remember, whenever we see a definition, we should try to negate it to understand it better. So f is unbounded if for all B bigger than or equal to 0, there exists some bad x such that f of x in absolute value is bigger than or equal to B.
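For reference, here are the recalled characterization of continuity and the boundedness definition written out in symbols; this is the same content as above, just in display form.

```latex
% Sequential characterization of continuity (proved last lecture):
% for S a subset of R, c in S, and f : S -> R,
\[
  f \text{ is continuous at } c
  \iff
  \forall (x_n) \subseteq S:\;
  \lim_{n\to\infty} x_n = c \;\Longrightarrow\; \lim_{n\to\infty} f(x_n) = f(c).
\]
% Bolzano-Weierstrass: every bounded sequence of real numbers has a
% convergent subsequence.
% Boundedness of a function f : S -> R:
\[
  f \text{ is bounded}
  \iff
  \exists\, B \ge 0 \;\; \forall x \in S:\;\; |f(x)| \le B.
\]
```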
So if a bounded function is supposed to have a graph bounded between two real numbers, an unbounded one is one in which at least part of the graph is going off to plus infinity or minus infinity. So most basic example we could think of is, let's say if we wanted to go from 0, 1 to R. Again, we could take f of x to be well, 0 at x equals 0, 1 over x if x is not equal to 0. So this function looks like 1, 1, and goes up to infinity as you approach x equals 0. So why is this unbounded? We have to verify the negation of the definition. So claim f is unbounded. Let B be bigger than or equal to 0. We now have to find an x in 0, 1, so that 1 over x-- or so that f of x is bigger than B. And this is pretty easy. So let x be in R such that 1 over x is bigger than B. So if B is equal to 0, we take x to be anything in 0, 1. That's fair enough-- i.e. x [? less than ?] 1 over B. So if we like, we could make this explicit. So I'll just keep it here. And so, now if we take x here and look at the graph, it's going to be bigger than B. And that's it. So several of these proofs will have the same flavor of what we're going to try and prove. And this one is a nice one to start off with, and necessary. So my first theorem that I want to prove about continuous functions on a bounded interval-- closed and bounded interval-- is that they're bounded. So if I have a function from a, b to R is continuous, meaning it's continuous at every point in a, b, then f is bounded. So the proof of this theorem, and several of these theorems, will be by contradiction. And the fact that we're on the closed and bounded interval is-- or that we're in this setting is what allows us to use Bolzano-Weierstrass, this powerful tool. So this is-- so assume and, so like I said, this proof is by contradiction, that the conclusion is false. Which the precise meaning is right here. For all b, there exists an x and S such that that. So for every b, I can find an x and S so that the absolute value of f of x is bigger than or equal to b. So I could take b to be N, for example, where N as a natural number. Then for all N, a natural number, so I'm taking b to be N, there exists x sub n in a, b, such that f of x sub n in absolute value is bigger than or equal to n. So now I have this sequence, which is bounded because it's in this closed and bounded interval. Then xn is bounded, because it's in this closed and bounded interval a, b. So by Bolzano-Weierstrass, there exists a subsequence x in sub k, of the sequence x sub n and x in R, such that limit as k goes to infinity of x of n sub k equals x. Now, I claim that this x is actually in the interval a, b. Since x of n sub k is a subsequence of x sub n, which are in a sub b, or a comma b, so I have for all k x sub n sub k is between a and b. This implies by what we know about limits that limits respect inequalities. And therefore, a is less than or equal to limit as k goes to infinity of x sub n sub k, which is less than or equal to b. And this is x. So i.e. x is between a and b. Another way, instead of going through this is, I believe we did in the assignment, that an interval of this form is closed. And you proved that for a closed set-- so when I say an interval of this form is closed, I mean it's a closed set. And you proved on the assignment that for a closed set, if I have a convergent sequence, then the limit of that sequence has to belong to the set. So this is really a consequence of the fact that this interval is a closed set. So I have this subsequence x sub n sub k. 
It's converging to x, which is in a, b. And I know the function. I haven't used anything about f yet. And I'm assuming it's continuous. So let's look at f of x. Since f is continuous, and the x sub n sub k is converged to x as k goes to infinity, I know by the theorem up top over there, that f of x equals the limit of f of x sub n sub k. And therefore, the absolute value of x is equal to the limit as k goes to infinity of the absolute values. But now, f of x sub n is always bigger than or equal to n. That's how the x of n's were chosen. We assume that f is unbounded, and therefore, all of these x sub n's in a, b were chosen so that f of x sub n is bigger than or equal to n. So this is bigger than or equal to-- so each of these is bigger than or equal to n sub k. Let's see, how to write this without-- OK. So we'll write it this way. I don't want to write the limit of something equals infinity, because we haven't said what it means for the limit of something to equal infinity. So then, the limit as k goes to infinity of f of x sub n sub k exists, which implies that the sequence is bounded. A convergent sequence is always bounded. And since n sub k is always less than or equal to f of x sub k, and this is a bounded sequence, this tells me that the sequence n sub k is bounded. But this is impossible. Remember, to form a subsequence, the n sub k's are increasing integers, which is a contradiction. And this is always because n sub 1, is always less than n sub 2, is less than n 3, and so on. So these are always getting bigger without bound. And therefore, we have our contradiction. So again, here's the structure of the proof. We want to assume some property of f. So we assume not. We get this kind of bad sequence of numbers in this interval. And using Bolzano-Weierstrass, we get to pass to some limit x. And the continuity of f at x essentially breaks the badness of this sequence x sub n. And we'll see another argument where it's kind of the same flavor. So we'll soon state that f always achieves a maximum value and minimum value. So let me first on the closed and bounded intervals-- so let me precisely define what these absolute mins and absolute maxes are. So let f be a function from S to R. S is a non-empty subset of R. We say, f achieves an absolute max. Let me write it this way-- an absolute min at c in S if f of c sits below f of x for all x in S. If for all x in S, f of c is less than or equal to f of x. So this is an absolute min. f achieves absolute max at d and S if f of d sits above everything. Every x you stick into f, if for all x in S, f of x is less than or equal to f of d. So for example. So what's the picture that goes with this? Let's imagine we're on a closed and bounded interval. Then d-- so f achieves absolute max at d. The graph of f sits below the value f of d. And sits above the value f of c. Now, absolute max and mins at points are so that the function is always sitting above f evaluated at that point. Just a quick Warning. Let's say, we're looking at-- all right, so this is not to scale. But let's say our function looks like this. So this is one, two, three halves. Then, this function does not have an absolute max, does not achieve an absolute max, or achieve an absolute min on the set 1, 2. What you would like to say is that f achieves an absolute min at 1. But the graph does not sit above f evaluated at 1. Just like over here, the graph of f does not sit below f evaluated at 2. 
So absolute f achieves an absolute min at a point if the whole graph sits above f evaluated at that point, not necessarily some number, it sits below the whole graph. That's what boundedness means. So I just want to clarify that distinction between the graph of a function being bounded below by a number and f achieving a min at a certain point. So for example, the graph of this function is bounded below by 1. But f does not achieve an absolute min at c equals 1. So our next theorem is the min-max theorem for continuous functions, which is the following. Let f be a function from a, b to R. If f is continuous, meaning it's continuous at every point in a,b, then f achieves absolute min and absolute max on a, b. So another way of stating this, so let's just make this a remark, it says that they there exists c, d, and a, b, such that f of a, b is contained in f of c, f of d. So we'll do the proof of an absolute max. And the proof of the absolute min, I'll leave to you. It's a simple change in the argument that's not too difficult. And again, we're going to use this powerful tool, Bolzano-Weierstrass, that allows us to go from an arbitrary sequence in a, b, to a convergent subsequence. So this equals f from a to b-- let me write it this way. If this function is continuous, then by what we've proven-- where? Over there. This implies f is bounded. Then the set E given by the range of f, so f of x, where x is in a, b, this set is bounded above. If the absolute value of f of x is always less than or equal to some b, then f of x is always bounded above by b for all x and a, b. So this set is bounded above. Now, let L be the supremum of E, which exists, because the real numbers have the least upper bound property. Whenever we have a non-empty subset which is bounded above, we can always find a supremum. Now, since L is a supremum of this set, we did this, I think, in an assignment. Yes, I definitely did an assignment. Then there exists a sequence of elements of this set E converging to L. And to express that, that means there exists some sequence of elements. So there exists a sequence of the form f of x sub n such that limit as n goes to infinity of f of x sub n equals L. Now, we would like to show that L is equal to f of d for some number d. And how we're going to do that is, well, this is-- the x sub n's are just some sequence in a, b. So we can pass to a subsequence, which converges to some number, call it d. And then we need to show that f of d equals L. And that's where we use continuity. And then, that's the whole proof. Which we saw a little bit over here. We passed a subsequence. And then we get to say that the limit of this subsequence has the same property as the original sequence. So by Bolzano-Weierstrass, there exists a subsequence x sub n sub k. So this is a subsequence of the sequence x sub n. And as before, the same argument and limit d, but as before, d will be in the set a, b, such that limit as k goes to infinity of x of n sub k equals d. So we use Bolzano-Weierstrass to pass to a subsequence of the x sub n's, and a convergent subsequence. And yeah, so, all of these x sub n sub k's are between a and b. So their limit d will be between a and b. But now, since f is continuous at d, it's continuous at every point in a, b. So it's certainly continuous at d-- f of d, this is equal to the limit as k goes to infinity of f of x of n sub k, since the x of n sub k's are converging to d. And now, f of x sub n sub k, this is also a subsequence of the sequence f of x sub n. 
And f of x sub n is a convergent sequence. So any subsequence has to converge to the same thing. Since f of x sub n converges to L, and f of n sub k, k is a subsequence of f of x sub n. And remember, what was L? L was the sup of V. And therefore, for all x in a, b, f of x is less than or equal to f of d, which means f achieves an absolute max at d. And the absolute min is similar. And I'll leave it to you. So just rerunning through the proof. We did use the fact that a continuous function is bounded. And we extracted the sup, if you like, of the range of f. And so, which is by this exercise we did in one of the assignments, means that this supremum is equal to the limit of f of x sub n's. We would like to show L is equal to f of d for some d. So we passed a subsequence of the x sub n's. That does converge to something in the interval a, b. And we can show using the continuity of f. And how this original sequence was chosen, that f at that point is actually equal to the supremum of that set. And therefore, f achieves an absolute max at that set. As beginning students of math, one of the things we should be curious about is what hypotheses are needed and what are not in the statements of theorems. So our main hypothesis-- so we had two hypotheses really in the theorem, that f is going from this closed and bounded interval to R, and that f is continuous. So what happens if we drop one of those hypotheses? Is the conclusion still true, that f achieves an absolute max and absolute min? So this example that I drew right here shows that we need to have the function be continuous in order for it to achieve an absolute max and absolute min on the closed and bounded interval. But it also needs to be on a closed and bounded interval. So we have some continuous function from S to R. Can we drop the assumption that S is equal to this closed and bounded interval? Meaning could it be, say, an open interval? Does the same conclusion hold? I.e. let's say f is from an open interval. And the answer is, no. Simple example is, if I take the function f of x equals, for example, 1 over x, minus 1 over 1 minus x. And on the open interval 0, 1, so what does this guy look like? Here's 0. Let me draw a dotted line there. Here's 1. And the graph shoots up to positive infinity as you approach 0 and as you approach 1. So where does it equal 0? I guess there's 1/2. So this function does not achieve a absolute min or absolute max. But it is continuous on this interval. But because the interval is not closed and bounded, this function does not achieve an absolute min or absolute max. So what I'm trying to say is that the assumptions that two-- so there's two assumptions here, that we're working on a closed and bounded interval, and that f is continuous. These two assumptions are necessary for the theorem to be true. Now, they are not the most precise way of stating this theorem. You could replace a closed and bounded interval with what's called a compact set, which maybe I'll put in the assignment or on the midterm, depending on where there's room. But just in the setting of intervals, the interval has to be closed and bounded. And the function has to be continuous for this theorem to hold true. So that's what I wanted to get at. So what we've proven, as I said in this remark here, is that there exists two numbers c and d in a, b, so that f of a, b is contained in f of c, f of d. c would be where f achieves an absolute min. d would be where if f achieves an absolute max. And this is absolute max. 
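Here is a small numerical illustration, in Python, of why both hypotheses matter. Sampling the counterexample f(x) = 1/x - 1/(1 - x) on the open interval (0, 1) shows its values running off without bound in both directions, so no point of (0, 1) can give an absolute max or an absolute min; sampling the earlier bounded example 3x + 1 on the closed interval [0, 1] shows its extreme values actually being attained. This is only an illustrative sketch.

```python
# Continuous on (0, 1), but the interval is open, so the min-max theorem
# does not apply. As written, f(x) = 1/x - 1/(1 - x) is unbounded above near 0
# and unbounded below near 1, so it attains neither an absolute max nor an
# absolute min on (0, 1).
f = lambda x: 1.0 / x - 1.0 / (1.0 - x)

for x in (0.1, 0.01, 0.001, 0.0001):
    print(f"f({x}) = {f(x):.1f}")           # values grow without bound
for x in (0.9, 0.99, 0.999, 0.9999):
    print(f"f({x}) = {f(x):.1f}")           # values decrease without bound

# On a closed and bounded interval, a continuous function does attain its
# extreme values (the min-max theorem); dense sampling hints at this.
g = lambda x: 3 * x + 1                      # the bounded example from earlier
samples = [k / 10000 for k in range(10001)]  # points of [0, 1], endpoints included
values = [g(x) for x in samples]
print(min(values), max(values))              # 1.0 and 4.0, attained at 0 and 1
```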
So now the question becomes, do I hit everything in between? Does this inclusion become equality? And I gave the game away at the beginning of the lecture by saying, yes, it will. But that's actually a theorem. So this was the min-max theorem, which I didn't call it that. I should have. It's actually called that in the notes. But now we're going to move on to the intermediate value theorem. And first we're going to do what looks like a special case, and then we'll prove the general case. So this theorem, which I'm actually-- it's not called this in the textbook. But I'm going to call it this, which I'll call the bisection method, is the following. Let f be a function from a, b to R if f of a is less than 0-- and so, I need one more-- be continuous. So we're always in the continuous setting for this section. So if f of a is less than 0 and f of b is bigger than 0, then there exists c and E in the interval a, b such that f of c equals 0. So the picture that goes on with goes along with this is, here's a, b. Here's f of a. f of b is positive. And therefore, if it's continuous, and you believe the definition of a continuity is the fact that is that I don't have to pick up my piece of chalk. It pains me to say this. But that's, again, not the official definition, but one you keep in your head. Since somebody probably told you that at some point, if I don't have to pick up my piece of chalk, then eventually, I have to cross the x-axis. And therefore, at this point c, f of c will be 0. Now, why do I call this theorem the bisection method? As we'll see, the way you determine this c is by what in calculus books is called the bisection method. So for the proof, let-- we're going to define a sequence of numbers a in b and with special properties. So I'm first going to tell you what a1 and b1 are. a1 is just going to be a, b1 is going to be b. And so, now I'm going to tell you how to choose the next element in the sequence knowing the element in the sequence before. So we're going to-- first to find two sequences a n and bn. So let me tell you how to do that. And the way we're going to choose this is so that f of a sub n is always less than 0, and f of b sub n is always bigger than or equal to 0. And there obtained from the previous two guys by taking the midpoint, so bisecting the ones before. So to get this started, a1 will be just a, b1 will be b. Now, for in a natural number, we're going to define-- so we know a1. So now, I'm going to tell you how to define a2 based on now you know a1. But what I'm about to write down will also tell you how to define a3, since you now know how to do a2. So I'm going to write it this way. For in a natural number, knowing a sub n, b sub n, we define a sub n plus 1 and b sub n plus 1 as follows. So if f of a sub n plus b sub n-- so the midpoint between the two guys that I already know-- so if you like, take n to be 1 for when you first read how to define a sub 2, say. But what I'm writing down applies to every in. So we take f of a sub n plus b sub n over 2 to be, if this is bigger than or equal to 0, then we define a sub n plus 1 to be a sub n, and b sub n plus 1 to be the midpoint. And if of the midpoint is less than 0, then a sub n plus 1 is going to be the midpoint, and b sub n plus 1 will be the previous point. So let me-- in fact, so this is how you define the sequence in general. Let's walk through what this means just for n equals 2 so that you see-- you get the idea. Maybe I should have done this first. 
So let's draw a picture to go-- in fact, I don't need this axis. I'll just-- b. So and this is a sub 1, this is b sub 1. And then, we look at the midpoint. So we know that f of a sub 1 is less than 0. f of b sub 1 is bigger than 0, which is certainly bigger than or equal to 0 if it's bigger than 0. Now we look at the midpoint. And based on the sign of this guy will be how we define a2 and b2. So for the sake of me going through this, Let's suppose that f of this thing is less than 0. Then I take, a2 will be a1 plus b1 over 2. And b2 will be b1. And now, I look at the midpoint of these two guys. So now, I'm at this stage, f of a2 is less than 0, f of b2 is bigger than 0. So I'm in the picture before, except now I'm at half the distance between the two endpoints. And so, I look at this point now, a2 plus V2 over 2. And I look at the sign of that. And let's suppose f of this thing is bigger than or equal to 0. Then I will take this point to be a 3, and this point will now be b3. And what I have is f of a3 is less than 0, f of b3 is bigger than or equal to 0. So the point is that the left endpoint is always negative when I stick it into f. The right endpoint is always non-negative when I stick it into f. And that the distance between the two midpoints is always getting cut in half by the previous distance before. But we're always in the picture, but kind of in the setting we were in the step before. Now, we have three properties of this sequence of a n's and bn's So for all n, they're always bounded between the original two endpoints. Not only that, the a sub n's are always moving to the right. Remember, a sub n is always getting replaced by the midpoint between a sub n and b sub n, possibly, or staying the same. So a sub n is always less than or equal to a sub n plus 1. So in fact, let me write it this way. We always have a sub n is less than or equal to b sub n. We always have a is less than or equal to a n plus 1, which is less than or equal to a n. And then, we also have-- so for all in b. I'm making a mess of what I'm writing down. So let's start this over. So the first property I want to write, for all N natural numbers, we have b, a sub n is always less than or equal to a sub n plus 1. And b sub n plus 1, remember, whenever-- if we're going to change b, it's always to change it to the midpoint between the previous guy and the a from the previous step. So this is always less than or equal to b sub n. And now, two, if I look at the difference, this is always equal to 1/2 of the previous distance. Because either one of these are getting changed to the midpoint. So if it's a sub n plus 1, then b sub n plus 1 minus a sub n plus 1, is b sub n minus a sub n plus b sub n over 2, which gives me this. Three, for all n, a natural number-- and this is just based on how we are choosing-- we are doing this construction, f of a sub n is less than 0, and f of b sub n is bigger than or equal to 0. So I hope this is clear. All of these, if you like, you could prove by induction. For equals 1, this is certainly true. Assume it holds for n equals m, and then prove all of these statements here for n equals m plus 1. But I'm just stating it as clear from the construction, and hopefully it is. So what does this give us? Well, by one, the sequence is a sub n and b sub n are bounded, because they're always bounded between a and b monotone sequences. a sub n is increasing, b sub n is decreasing. Thus, there exists limits. So I'm going to call them c and d. 
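Before passing to those limits c and d, it may help to see the three properties of the construction, together with the halving identity they give, collected in one display; this is only a restatement of what was just described.

```latex
% Properties of the bisection construction: for all n,
\[
  a_n \le a_{n+1} \le b_{n+1} \le b_n,
  \qquad
  b_{n+1} - a_{n+1} = \tfrac{1}{2}\,(b_n - a_n),
  \qquad
  f(a_n) < 0 \le f(b_n).
\]
% Iterating the halving identity, with a_1 = a and b_1 = b, gives
\[
  b_n - a_n = \frac{1}{2^{\,n-1}}\,(b_1 - a_1) = \frac{b-a}{2^{\,n-1}}
  \;\longrightarrow\; 0 \quad\text{as } n \to \infty,
\]
% so the two monotone, bounded sequences (a_n) and (b_n) converge to a
% common limit.
```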
But it has nothing to do with the c and d from over here. Thus these sequences converge. I.e, there exists element c and d in a, b, such that limit as n goes to infinity of a sub n equals c-- again, the a sub n's and b sub n's are between a and b. So their limits will also be between a and b-- b sub n equals d. Now, I claim c equals d. Why should this not come as a surprise? Well, the a n's and bn's are getting very close. They're always-- the distance between any two of them is always getting halved. Now, a sub m minus b sub n, this equals a sub n minus 1, minus b sub n minus 2 over 2, which equals 1 over 2 squared, a sub n minus 2, minus b sub n minus 2, dot, dot, dot. And therefore, this equals 1 over 2n minus 1 b sub 1 minus a sub 1, equals 1 over 2 to the n minus 1, b minus a. And therefore, c minus d, which is equal to the limit as n goes to infinity of a sub n minus b sub n equals the limit as n goes to infinity of 1 over 2 to the n minus 1, b minus a. 1 over 2 to the n minus 1 is converging to 0, times this fixed number, b minus a, equals 0. And therefore, these two numbers, these two limits c and d equal each other. So this step here is by the second property here. And now, we're essentially done. Using the third property, by three and continuity, if I look at f of c, since the a sub n's are converging to c, this is equal to-- and f of a sub n is always less than 0. So its limit is less than or equal to 0. And now, remember, c is also equal to the limit of the b sub n's. And therefore, by continuity, f of c is equal to f of b sub n. And the f of b sub n's are always non-negative. So their limit is also bigger than or equal to 0. So I've shown that this number f of c is less than or equal to 0, and that it's also bigger than or equal to 0. And therefore, f of c equals 0. So we use the-- why I put the bisection method here is because we used a bisection method to prove this theorem, which looks like a kind of a special case of a certain intermediate value property. Meaning that I can always find something, that if I have-- that I can always find something in the interval that achieves something in between f of a and f of b. In this case, it's just 0. But I can in fact upgrade this. This is the following theorem due to Bolzano so Bolzano's intermediate value theorem. Let f from a to b, R be continuous. If y is between f of b-- so meaning, let's say, so we're in one of two cases. Let's say f of a is less than f of b. If y is between f of a and f of b, then there exists a c in a, b, such that f of c equals y. If f of a is, say, bigger than f of b, and we take something in between, the same conclusion holds. So in short, if I take anything between the value of the function evaluated at the two endpoints a and b, then the function achieves that value at some point. So if I take any y between here, f has to cross this horizontal line. And it's achieved at some point c. And we'll deduce this from-- so this looks like a special case of this intermediate value theorem, if f of a is less than 0, f of b is bigger than 0, and y is equal to 0. But in fact, you can reduce this general statement to that special case just by a simple trick. So proof, suppose f of a is less than f of b. So we're in this first case. And y is in f of a, f of b, let g of x be f of x minus y. This is not a function of x and y. y is a fixed number between f of a and f of b. And this is a function going from f to b to R, and which is continuous. 
So the sum of two continuous functions is continuous: f is a continuous function by assumption, and minus y is just a constant function, so their sum g is also continuous. But now, if we look at g of a, this is equal to f of a minus y. Now, y is bigger than f of a, since y is in the interval from f of a to f of b. So this is less than 0. And g of b is equal to f of b minus y, which is bigger than 0. And therefore, by the previous theorem, there exists a c in a, b such that g of c equals 0, which means f of c equals y. And the proof assuming f of a is bigger than f of b is similar-- you now define g of x to be y minus f of x. So let me just make that remark: the other case is similar, looking at g of x equals y minus f of x. So Bolzano's intermediate value theorem tells us that if I take any value between the function evaluated at the two endpoints, then that value is achieved by something in between a and b. And therefore, this will, in fact, give us that f of a, b is equal to f of c, f of d for some c and d. So here is a simple theorem that follows from this: if f from a, b to R is continuous, and f achieves an absolute min at c and an absolute max at d, then the image by f of a, b is equal to the closed interval from f of c to f of d. And what's the proof? Well, apply Bolzano's intermediate value theorem to f now going from the possibly smaller interval with endpoints c and d. It could be d, c or c, d, depending on whether the point I have to stick into f to get the min or the point I have to stick into f to get the max is smaller than the other. So apply Bolzano's intermediate value theorem to this function restricted to the smaller closed interval-- it's still continuous. And that's the proof. So maybe I was a little too fast with that, so let me just say a couple more words about this. This tells me that everything between the two values f of c and f of d is achieved by f on this smaller interval. But this is a possibly smaller interval than a, b. So the interval from f of c to f of d is contained in the range of f on the larger interval a, b. And since we already know that f of a, b is contained in f of c, f of d, and now we've proven the reverse inclusion, this implies that f of c, f of d equals f of a, b. And that's the proof. So I think we'll stop there. Next time, we'll finish with a few remarks about an application of the intermediate value theorem. And then we'll move on to uniform continuity.
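The halving construction used in the proof is also a practical algorithm. The following Python sketch runs it on g(x) = f(x) - y, the same reduction used in the proof of the intermediate value theorem, to locate a point c with f(c) close to y. The choices f(x) = x squared, y = 2, and the interval [1, 2] are just an illustrative example, for which c comes out near the square root of 2.

```python
# Bisection, mirroring the proof: keep g(a_n) < 0 <= g(b_n) and halve the interval.
def bisect(g, a, b, steps=50):
    """Return a point c with g(c) ~ 0, assuming g is continuous, g(a) < 0 < g(b)."""
    assert g(a) < 0 < g(b)
    for _ in range(steps):
        mid = (a + b) / 2
        if g(mid) >= 0:
            b = mid          # b_{n+1} = midpoint, a_{n+1} = a_n
        else:
            a = mid          # a_{n+1} = midpoint, b_{n+1} = b_n
    return (a + b) / 2       # a_n and b_n squeeze down to the common limit c

# Intermediate value theorem in action: find c in [1, 2] with f(c) = y
# by applying bisection to g(x) = f(x) - y.
f = lambda x: x ** 2         # illustrative choice of continuous function
y = 2.0                      # f(1) = 1 < y < 4 = f(2)
c = bisect(lambda x: f(x) - y, 1.0, 2.0)
print(c, f(c))               # c ~ 1.41421356..., f(c) ~ 2.0
```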
MIT_18100A_Real_Analysis_Fall_2020
Lecture_3_Cantors_Remarkable_Theorem_and_the_Rationals_Lack_of_the_Least_Upper_Bound_Property.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: So here we are again. So I'm going to finish real quick proof of a theorem that I stated last time due to Cantor. So let me recall the setting. So last time, we were finishing up what I had to say about cardinality, which remember, is a notion of size of sets. And at the end, for a given set A, then we defined the power set of A to be the set of all subsets of A. And last time, for example, we did several of them or looked at a few different power sets. And simplest example is that the power set of the empty set is the set containing the empty set. In particular, the power has one more element than the empty set. The empty set has no elements. The power set of the empty set has one element, namely the empty set. And we are looking at this to answer a question which I posed at the end of last class, namely, all the sets we saw from the integers to even numbers to rational numbers, which is on the assignment, all have the same cardinality as the natural numbers. And that's what we call the countably infinite. And so a question would be, is there any set that is bigger in cardinality then the natural numbers? Is there any set that's uncountable? And so this theorem due to Cantor answers that and more. And it says the following-- if A is any set, so if A is a set, then the cardinality of A is strictly smaller than the cardinality of the power set of A. And as a consequence is that the natural numbers have smaller size than the power set of the natural numbers, which has smaller size than the power set of the power set of the natural numbers, which is smaller in size than-- and so on. So there are an infinity of infinities. There is an infinity of infinite sizes. So let's prove this theorem, and it's extremely clever and a bit mind boggling. So first, let me prove that A has cardinality less than or equal to the cardinality of the power set of A. So let A be a set. First, we can show that the cardinality of A is less than or equal to the cardinality of the power set of A. So we need to find an injection, a one-to-one map from A into the power set of A. And the simplest one to choose is one that takes an element of A to the set containing just that element. So define f from A into the power set of A by the function that takes x. And this should spit out a subset of A. So this will be the subset that consists solely of x. And this is clearly one to one. I'll prove this right now. Then we need to prove that if f of x equals f of y, then x equals y, but this is clear from the definition. Then if f of x equals f of y, this means the set containing x-- this is by the definition of how we've defined little f-- means that the set containing x is equal to the set containing y. But this means x equals y. They both contain one element. Two sets are equal if and only if one side is an element of the other. That just means x equals y in the set. Thus, f is one to one, which since we found an injective map from A to the power set of A this means-- So now we want to show that they cannot have the same cardinality. We now show that A does not have the same cardinality as the power set of A. And these two statements are what is meant when we write down, recall the definition, of the cardinality of A or size of A being smaller than the cardinality of the power set of A. So we're going to do this by a proof by contradiction. So that means we're going to assume that this does not hold and arrive at a false statement. 
So I assume that they do have the same cardinality. So this is our initial assumption. We're going to derive a false statement from this assumption. And the only way to arrive at a false statement from a given assumption in a logically consistent fashion is that the original statement, namely this, is false. In other words, the thing we want to show is true. So let's assume they have the same cardinality. What does that mean? Then there exists a bijective function g going from A to the power set of A. Remember, a bijective function means that G is one to one and onto. One to one meaning different things of A gets mapped to different things of the power set. Onto means everything in the power set gets mapped onto from A. So really, the fact that it's surjective is the only thing I'm going to use. And I'm going to define a weird set. And we're going to look at this set. So define a set B. And this is a subset of A. B is a set of all x's in A such that x is not in f of x. This should be g. So remember, g maps from A to the power set. So for any given x in A, g of x is a subset of A. So the condition to be in B is that this element x is not in the image of itself by G. And B could be empty. There could be no x's in A that satisfy this. But it's just a subset of A, simply by definition. We're just looking at elements in A that satisfy a certain condition. So since it's a subset of A, this means that it's a element of the power set of A. This is just how the power set is defined. Remember, it's all subsets of A So B is an element of the power set of A. Therefore, something gets mapped to B under this map. So since g is surjective, there exists b in capital A such that g of little b equals capital B. So this just follows from the fact that g is a surjection, meaning everything in here must get mapped to by some element in A. But now, let's take a look at this B guy. There are two cases-- either b is in g of b or b is not in g of b. So b is in g of b. If b is in g of b, remember this is equal to capital B, by definition. And to be in capital B means that x is not in g of x. So let me just write this again. This means b is in B, which means b is not in g of b. We started off with b and g of b, and ended with b is not in g of b. So we arrive at a false statement in this case. Then there's one other case-- b is not in g of b. Then if b is not in g of b, this immediately implies, by the definition of little b, meaning g of b is in capital B. So let me write it this way. So if b is not in g of b, then from the definition of capital B, this implies that b is in capital B. That's just from the definition of capital B. But now capital B is equal to g of little b, which is another contradiction. Because we started with this and ended with this. So what I've really proven is-- so let's ignore this contradiction mark here-- thus, we have shown that b in g of b implies b not in g of b. b not in g of b implies b is in g of b. So we've proven the statement that b is in g of b if and only if b is not in g of b. And this is a very false statement-- cannot have some object being in the set if and only if it's not in the set. So we've arrived at a false statement. And therefore, our initial assumption which led us there, namely that the cardinality of the set A has the same cardinality as the power set of A, must have been false. So it feels almost like you're being hoodwinked a little bit with this proof. 
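One way to make the diagonal set feel less like a trick is to run the same construction on a small finite set, where every map g from A into its power set can be enumerated and the set B = {x in A : x is not in g(x)} is seen to be missed every time. The following Python sketch is purely illustrative.

```python
from itertools import product

def power_set(A):
    """All subsets of A, as frozensets."""
    A = list(A)
    subsets = []
    for bits in product([0, 1], repeat=len(A)):
        subsets.append(frozenset(a for a, bit in zip(A, bits) if bit))
    return subsets

A = {0, 1, 2}
P = power_set(A)                      # 2^3 = 8 subsets, so |A| < |P(A)| here

# Enumerate every function g : A -> P(A) and check that none is surjective:
# Cantor's set B is never in the image of g.
elements = sorted(A)
missed_every_time = True
for images in product(P, repeat=len(elements)):
    g = dict(zip(elements, images))
    B = frozenset(x for x in A if x not in g[x])   # the diagonal set from the proof
    if B in g.values():
        missed_every_time = False                  # never happens
print(len(P), missed_every_time)                   # 8 True
```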
But to give you some sort of cling or connection back to reality, what's underlying this argument or what's one way of understanding this argument is to think about-- so in some sense, what this argument does is make you talk about being in B while having to also reference B itself. And maybe this seems a bit wild because this is a math course, but you can do the statement just in the English language. If I am to tell you that I am a liar, then if that statement is true, then what I just said, namely that I am a liar, is false, which means I'm not a liar. Therefore, "I am a liar" implies I'm not a liar. And vice versa-- if I say, "I am a liar," and that statement is false, meaning I'm not a liar, then that statement is true, which implies I am a liar. So of course, I would not lie to you. I would just give you alternative facts. But that statement is kind of what's underlying this argument. So I'm going to put that there, true or false. That's the connection to these two things, loosely, or at least the logic is contained in trying to verify this statement. So that's all I'm going to say about cardinality. We're going to move on to the real numbers now. And so this is really, like I said in the previous lecture, our first real goal of the class is to describe R. namely, what exactly is the set of real numbers? What characterizes the set of real numbers? And so let me state this as a theorem just completely ahead of time, so that it's there for us. And this is our goal, although we're not going to prove it. Our goal is just to understand what this theorem says. So this is a complete description of what R is. There exists a unique ordered field containing Q with the least upper bound property, which we denote by R, instead of real numbers. So as you sit there and listen to this, you shouldn't have-- maybe you do, but you really shouldn't expect to have any idea about what this theorem says, what all these words mean. So our goal for this lecture is to make sense of these words. So the rest you should understand. So our goal for this lecture is to make sense of these words, "ordered field," "least upper bound property." Because these are the two defining characteristics of R. And I'm not going to prove this theorem. We're just going to take it as a given. We could prove this theorem but it takes quite a bit of time. And I'd rather start studying properties of R than building R. This is not so uncommon in math, that one is not especially interested in the proof of actually why certain things exist, but we are definitely interested in the properties of that thing once we know it exists. So let's get started on making sense of what this theorem says. Let's start off with ordered stuff. This has ordered field, least upper bound property. Let's start off with what do I mean by order. We've seen that just a little bit already when we talked about the natural numbers and this well-ordered property of the natural numbers. But I was just using words to label a certain property of the natural numbers. I didn't say they meant anything specifically. It was just a label for that. But when I say "order" now, this will definitely mean something. So ordered sets and fields definition-- an ordered set is a set S with a relation, which we label with this less than symbol . And this relation satisfies two properties. One is you can always check whether two elements are bigger than each other or equal. So for all x, y in S, either x equals y, x is less than y, or that's just restating, y is less than x. 
And if x is less than y, and y is less than z, then x is less than z. So an ordered set is just a set with some relation which has two properties, namely that for any two elements in the set, I can compare the two. That's basically what this says. And I have this transitive property that if x is less than y, and y is less than z, then x is less than z. And so as I said before, whenever you have some sort of mildly interesting definition, you should definitely try to come up with examples and non-examples. So what's the simplest example of an ordered set? Well, the natural numbers, which we discussed earlier, but also the integers, where I have to define what this relation means. We say m is less than n, this m is less than n, if n minus m is a natural number. So this is how we define our order. So n is bigger than m if n minus m is a natural number. And you can check that this is just the usual ordering on z. And that satisfies these two properties. The standard order on Q, namely that we'll say q is less than r if there exist natural numbers, m, n, such that r minus q equals m over n. So these are just the usual orders of orderings on the integers and rational numbers that you're used to and that you know. I'm just writing out exactly how one would define this order. And you can check just using this definition of these orders that these orders satisfy 1 and 2. Now, a simple non-example, which I guess you could call it a relation which satisfies 2 but not 1, is the following-- so let's take our set S to be-- and create the power set of the natural numbers. And we define a relation A less than B by A is less than B if A is a subset of B. So I'm just defining a relation on the power set-- A less than B if A is a subset of B. And maybe I shouldn't even use less than, because this makes you think that it's automatically an order. So let's make it a script-looking less than. This relation is not an order, though. I keep saying "order," so we usually refer to this relation as an order. So it's clear that it satisfies 2. Why? Because A is a subset of B. B is a subset of C. Then A is a subset of C, i.e., A is less than C. It just follows from the definition of what it means to be a subset of the other. If A is a subset of B, that means every element of A is an element of B. If B's an element of C, that means every element of B is an element of C. And therefore, every element of A is an element of C. So this relation satisfies the second transitive property, but it doesn't satisfy the first. I cannot always measure if one thing is bigger than the other. Why is this? For example, the set containing 0 does not equal the set containing 1. But neither of these things hold. So for something to be an order, or a set to be an ordered set, it has to have this property that I can always compare two elements of the set. And for this relation, which seems like it could be an order-- it satisfies the second property-- it does not satisfy the first property because I cannot always compare two subsets of the natural numbers by saying one is a subset of the other. So we just saw that. We have two sets which are not equal, but one is not bigger than the other. And so one more example, which again, I will leave for you to check that it does satisfy conditions 1 and 2 just based on how it's defined. For example, this is the dictionary ordering of Q Cartesian product Q. I think I wrote this down. I mean, you should know what the Cartesian product is of two things. But let me just recall that I have two sets. 
The Cartesian product of A and B, this is the set of all ordered pairs of elements from those sets. So the dictionary ordering of Q is what? So I need to define this relation which I claim is an order. So we'll say that a, b is less than q, r if one of two things happens. If either a is less than q or a equals q, and b is less than r-- so dictionary ordering or alphabetical ordering of Q. You can just check and see which is smaller first. If they're both equal, then you check the next letter, and see which one's smaller there. So then this relation that I've defined here is in order on Q cross Q, making it an ordered set. So this is what an ordered set is. We'll get to an ordered field in just a second. But now, let me define what I mean by this least upper bound property. And this is really what sets R apart from Q. So we'll see in a minute that both R and Q are ordered fields, so that's not what separates R from Q, the rational numbers. But what does separate R from Q is the second property, the least upper bound property. So if I removed that property, and just said there exists a unique ordered field containing Q, that would just be Q. We don't need to add anything to it. But it's the second property, the least upper bound property, that really separates R from Q. So to define this, I need to define what a least upper bound is. And this is all in the setting of an ordered set. Let S be an ordered set, and let E be a subset of S. So I'm going to make a series of definitions here. First, if there exists an element of S-- so not necessarily the set I'm looking at, the subset I'm looking at-- such that for all x in E, x is less than or equal to b. So here, I have this order less than. Less than or equal to means just what it means in English, either x is less than b or x is equal to b, and the same thing with bigger than or equal to, and so on. But just keep in mind that this ordered set is a general ordered set. You could think of it as the dictionary ordering on Q cross Q. So if there exists a b such that for all x in E, x is less than or equal to b, then we say that E is bounded above. And this element of b is an upper bound for E. So if I can find some element of my set bigger than everything in this set E, I say E's bounded above, and will be an upper bound. I also have lower bounds. If there exists b in S such that for all x in E, b is smaller than or equal to everything in E, so b is less than or equal to x, then we say E is bounded below, and b is a lower bound for E. So b sits below everything in E. Now, we call an element in S the least upper bound for E if it satisfies two conditions. One is, if there's a least upper bound, there should at least be an upper bound. b is an upper bound for E, and it should be, in some sense, the least of all upper bounds. So if I take any other upper bound, b sub 0 should sit below that one. So an element is the least upper bound for a set E if it's an upper bound and it's the least among all other upper bounds. It sits below every other upper bound. And in this case, we also say b0 is the supremum of E. And we write b0 equals sup E. Now, this was having to deal with upper bounds. We can also deal with lower bounds or we also have a definition corresponding to lower bounds for what would be the greatest lower bound. So we call an element of S the greatest lower bound for E if two conditions hold, which is kind of similar to the conditions we had for at least upper bound. 
Except now for lower bounds, b0 is a lower bound for E, and it is the greatest of all lower bounds. If b is any lower bound for E, then b is less than or equal to b0. And so there's some Latin name attached to least upper bound, so there's a Latin name attached to the greatest lower bound, which we call the infimum. We also call b0 the infimum of E. And we write b0 equals inf E. So we have this mildly interesting and complex definition, which means we should look at some examples to get a feel for it. So let's look at some simple examples. So let's take our big ordered set to be Z. And let's take our set E to be minus 1, 0, and 2. So what about this guy? What is the supremum? What is the infimum? So really, if I'm going to prove or if I'm going to make a statement something is equal to the supremum or something is equal to the infimum, I should actually give a proof of that, meaning if I say something is an infimum or something is a supremum, then I need to prove that it satisfies these two conditions and these two conditions. But I'm just going over examples, so I will not give a full proof of that. This is just to get some intuition going. So now, first off, what would be an upper bound for E? Well, 3, 4, 5-- 2 is an upper bound because 2 is bigger than or equal to everything in the set. So 2, 3, 4, 5-- these are all upper bounds for this set E. But what is the supremum, meaning the least upper bound? That would be 2. If I take anything less than 2 that's not an upper bound because there's something in the set bigger than that. And if I take anything bigger than that, then this is going to be bigger than 2, but 2 is still an upper bound. So 2 is the least upper bound. Now what about lower bounds? A lower bound would be minus 1, because minus 1 is less than or equal to everything in the set. But then so would be minus 2, minus 3, and so on. But the greatest lower bound would be minus 1. Now let's do another example. Let's now look at S, the rational numbers, and let's take E to be a set of rational numbers such that 0 is bigger than or equal to-- Q is between 0 and 1 inclusive. So then what are some upper bounds? Everything in E is less than or equal to 1. So 1 is also a perfectly good upper bound, 3/2, 5/4, 6/5, 7/6. Anything bigger than or equal to 1 is also an upper bound for this set. And the least upper bound would be 1. So if I were to try to draw Q for this set-- let's not do that-- I usually draw this line is for the real number line, but let's imagine it's Q. So everything bigger than or equal to 1 sits above everything in E. So everything bigger than or equal to 1 is an upper bound, and the least upper bound is 1. What are some lower bounds? Everything in the set E is bigger than or equal to 0. So 0 is a lower bound. So is minus 1/2, minus 1/3, minus 1/4, minus 1/5, and so on. But anything less than or equal to 0 is a lower bound. Nothing bigger than 0 can be a lower bound because I can always find some-- so this is not a proof, but this is some explanation. Maybe I should have also said this for the upper bound statement, but anything bigger than 0 is not a lower bound. Because what if I take some number, call it r, bigger than 0, and less than 1, then r cannot be a lower bound because I can find something in E less than r, namely r over 2. So therefore-- and we're going to do some proofs where we actually have to get our hands dirty and prove something is an infimum or the supremum. So you'll see how that works, but for now, let's just go off of intuition. 
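To show how such a proof would go for the example just discussed, here is a sketch in symbols. This is an editor's aside, not the lecturer's board work, and the infimum half of the example is picked up in the next paragraph.

```latex
% Sketch: S = Q, E = {q in Q : 0 <= q <= 1}. Claim: sup E = 1.
\[
E=\{\,q\in\mathbb{Q}: 0\le q\le 1\,\}\subseteq\mathbb{Q}.
\]
\begin{itemize}
  \item $1$ is an upper bound: for every $q\in E$, $q\le 1$ by definition of $E$.
  \item No rational $b<1$ is an upper bound: if $b<0$, then $0\in E$ and $0>b$;
        if $0\le b<1$, then $\tfrac{b+1}{2}\in E$ (it is rational and lies between $0$ and $1$)
        and $\tfrac{b+1}{2}>b$.
\end{itemize}
\[
\therefore\ \sup E = 1 .
\]
```

The same pattern, with $r/2$ playing the role of $\tfrac{b+1}{2}$, is what rules out positive lower bounds, which is where the next paragraph picks up.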
The infimum of this set is 0. Now, both of these examples so far have this property that both the sup and the inf belong to the set E that I'm looking at. But this is not necessarily always the case. So I could change this slightly so our universe, the ordered set we're looking in, is still Q, and the subset E is now, let's say, q is bigger than 0 and less than 1. Then sup of E is still going to be 1, but this is not an element of this set E. It's of course an element of S, this universe we're in, but it's not an element of the set E. And likewise, the infimum is still 0, but it's not an element of E. So there are situations where you have a set, a subset of an ordered set, which has an infimum and supremum which do not exist in this smaller set you're looking at. Whether or not the supremum or infimum may exist in the universe that you're looking at in the bigger set is an entirely different issue. In fact, that's the next issue we're going to talk about. So let me just reiterate what we saw a minute ago. So we can be in some ordered set Z or Q, and the smaller set, which we're taking the inf or sup in, it could belong to the smaller set-- for example, in this case 1 and 0 were in the set E-- or not. There is this case where neither of them were in E. And of course, if I put less than or equal to here, then I would had the inf in E and the sup not in E, and then vice versa. So inf's and sup's of these sets don't necessarily need to belong to that set you're looking at. But they at least existed in the big set Q. Now, big, ordered sets-- so this ordered set that has this property that the inf and sup of bounded above and below sets exist in the bigger set always-- is what we call an ordered set with the least upper bound property. So the definition-- ordered set S has the least property if every subset of S which is non-empty and bounded above has a supremum in S. So a set has the least upper bound property if every non-empty bounded set has a supremum. And so we could come up with simple examples of sets that do have the least upper bound property. For example, let's take S to be-- I mean, this is kind of the simplest one-- S with a single element, let's say 0. Here there's no order to put on it. Every element in here is equal to itself. And therefore, every non-empty subset of S is just the whole set, and the supremum would then be that one element. So this is kind of the silliest one you could do. Let's say you could have two elements. And then if E is a subset of S, E is one of four guys. If E is a non-empty subset of S here, the order is 1, 0 is less than 1. If E is equal to 0, then sup equals 0. If E is equal to 1, sup E equals 1. And if E is equal to 0, 1, this implies sup E equals 1. So this is in S, this is in S, this is in S. So every subset of S which is non-empty has a supremum in S. So these are not the most interesting examples of sets with this least upper bound property. So for example, maybe a more interesting one would be if I take S to be, let's say, with the dots here, minus 3, minus 2, minus 1. So let me just write it this way-- minus 1, minus 2, minus 3, minus 4, and so on, with the usual ordering coming from the integers. So minus 2 is less than minus 1, minus 3 is less than minus 2, and so on. I claim this that does have the least upper bound property. Why? Because if E is a subset of S, E non-empty. In fact, both of these-- this was always bounded above because there's only two elements. And you can choose the biggest one, 1. 
Every subset of S is also bounded above by minus 1. So every subset of S is automatically bounded above. So I just need to check that every non-empty set has a supremum in S. If E is a subset of S and is non-empty, then let me look at the set minus E. This is a label. This is not meant to mean anything by itself. It is a label for the set of elements minus x, for x in E. So I take all my elements in E, which is a subset of the negative integers. I take their minuses. This is now a subset of the natural numbers. And by the well-ordering property of the natural numbers, there exists an element m in minus E such that m sits below everything in minus E, meaning m is less than or equal to minus x for all x in E. And I'll let you think about this just for a minute. But so m is in minus E, therefore minus m is in E. And m less than or equal to minus x for all x in E implies that x is less than or equal to minus m for all x in capital E. And therefore, I found an element actually in the set capital E that's bigger than or equal to everything in the set. You can convince yourself or write down a formal proof, if you like, that this implies that minus m is therefore the supremum. And so using this trickery of going from one set that's bounded above to a different set, a subset of the natural numbers, you can also show, for example, Z has the least upper bound property, which I'm going to now shorten because those are a lot of things to write. LUBP, least upper bound property-- the integers also have this property. But the integers are not all that interesting, again, because I cannot divide by an integer and stay in the set. For what we want to do, we want to be able to add, multiply, subtract, and divide. And we can't do that in the integers. We can do that in the rational numbers. However, Q does not have this property. And we're going to prove this. So this is what we're going to prove in a couple of theorems here, but let me just put this out in front. Q does not have the least upper bound property. And where does this come from? This comes from the simple fact which if you believe your-- I don't want to call it Greek mythology, but maybe it's Greek mythology strictly speaking-- that some young guy discovered that the square root of 2 is not a rational number and then he got thrown off a cliff. That's who we have to thank for showing us that Q is not perfect, that it does have this algebraic property which I just referenced-- the fact that you can add, subtract, multiply, and divide and stay within the set. But it does not have square roots of prime numbers in it-- in particular, not the square root of 2. And this then manifests itself in this least upper bound property by giving an example of a set which is bounded above which does not have a supremum in the set Q. So basically, if E is the set of rational numbers-- I have not stated a theorem yet, but I'm just telling you what we're about to do-- if E is the subset of rational numbers q where q is positive and q squared is less than 2, then sup E does not exist in the rational numbers. So therefore, we would have found a set which is bounded above which does not have a supremum in the rational numbers. And therefore, the rational numbers do not have the least upper bound property. So this is what we're going to do. And I'll prove-- state a couple of theorems that spell all this out. So first, I'm going to prove a theorem about what the supremum of such a set would have to satisfy.
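For reference, here is the setup just described written out in symbols. This is an editor's reconstruction for readability; the lecture states it in words.

```latex
% The set used to show Q lacks the least upper bound property.
\[
E=\{\,q\in\mathbb{Q}: q>0 \text{ and } q^{2}<2\,\}\subseteq\mathbb{Q}.
\]
% E is non-empty (1 is in E) and bounded above, e.g. by 2, since q >= 2 would force q^2 >= 4 > 2.
% The claim to be proved: any x in Q that equals sup E would have to satisfy
\[
x\ge 1 \quad\text{and}\quad x^{2}=2,
\]
% and no rational number squares to 2, so sup E cannot exist inside Q.
```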
So if x is in Q-- this is a statement of a theorem-- if x is in Q, and if x equals the supremum of the set, then x is bigger than or equal to 1, and x squared equals 2. So we're just saying-- don't try to take this as necessarily contradicting what I wrote there because that was not a statement of a theorem yet. I'm stating this theorem that if I have such a supremum in Q of the set, then it would have to square and give me 2. I'm not saying such an element exists. I'm just saying if there is any element of Q that is the supremum of this set, then it has to be bigger than or equal to 1, and it squares 2. So let's give a proof of this. And because I don't want to keep writing the set again, so I'm going to use the notation that I used in the comments. Let E be the set that I'm interested in. So this should be-- and suppose we have an element of Q which is the supremum of E. So first off, what would be one element of E? What's one rational number whose square is less than 2? 1. So since 1 is in E, square is less than 2, and x is the supremum of E, meaning it's an upper bound for E, it must be bigger than or equal to everything an E, in particular for 1. That implies x is bigger than or equal to 1. So that's the first part of the theorem I want to prove. So now we're going to prove two inequalities to show that x squared is actually equal to 2. We're going to prove-- so this is a common trick in analysis, that if I have two things that I would like to show equal to each other, sometimes a way to show that is by showing one is less than or equal to that. And this is less than or equal to that. So one side is less than or equal to the other side and vice versa, which immediately implies they must equal each other. So that's what we're going to do. And we're going to now prove that x squared is bigger than or equal to 2. We'll then prove that x squared is less than or equal to 2, and therefore, x squared equals 2. So to do this, we'll do this by contradiction. So let's assume-- everything else we've done so far is still true, but now we want to prove this statement. So we're going to assume that this statement is false. I assume that x squared is less than 2. So we'll define a certain rational number, h, which is a smaller of two rational numbers. It's going to be the smaller of 1/2 and 2 minus x squared over 2 over 2x plus 1. And let me just reiterate this is less than 1 because it's the smaller of 1/2 and this number. Now, when you write a proof, as you'll see, it's going to be magic that somehow this h does something magical. That's not exactly how you come up with proofs. How it comes up is you take an inequality that you want to mess with, you fiddle around with it, and you see that if h is given by something, then it breaks the inequality or it satisfies the inequality, which whichever one you're trying to do. So since x squared is less than 2, h is positive. It's the minimum of a 1/2 and this number. So if it's a 1/2, it's positive. If it's this one, then it's still positive. And what I'm going to prove now is that x plus h is in E. Its square is less than 2. This will give us our contradiction, because x is supposed to be an upper bound for E, and therefore, x is supposed to be bigger than or equal to x plus h. But this is x plus a positive number. x plus a positive number is bigger than x. So how do we do this? We have to compute the square of this and show it's less than 2. And that's where our choice of h comes from. 
We compute that x plus h squared-- this is x squared plus 2xh plus h squared-- now this is less than x squared plus 2xh plus h. Why? Because h times h is less than h since h is less than 1. So this is why we chose h to be less than 1, so that I can get rid of this square, and somehow just have a single h floating around, which I can then use to show x plus h is in E. So when I write the string of inequalities, this thing is supposed to be equal to the next thing. This is not saying x plus h squared is equal to what I'm about to write now. So this is equal to x squared plus 2x plus 1 times h. Now, h is the minimum of these two numbers. So h is going to be smaller than or equal to this thing. And so this is 2 minus x squared over 2. Write it this way, so I had what I had before. And then times 2 minus x squared over 2, 2x plus 1. So now some magic is happening. This cancels with that. And I have 2 minus x squared over 2, which is less than 2 minus x squared, so less than x squared plus 2 minus x squared, because I took a 1/2 of it, equals 2. So summarizing, we started off with x plus h squared, and we showed it was less than 2. And therefore, x plus h is in E. But if I have an element of E-- let's see where are we on space. But I have an element of E which is bigger than x. So that implies that x is not equal to the supremum of the set. Remember by definition, something is the supremum if it's an upper bound for the set. Something's not an upper bound if you can find something in the set bigger than b0. And this is also a good exercise to do when you come across a new definition, is to try and negate it to understand it a little bit better. So let me write here next to b is an upper bound for E-- let me write what this means, actually, here. Let me write this here. b0 is not-- so what's the negation of being an upper bound-- so not an upper bound for E if there exists an x in E such that x is bigger than b0. So we have found an element-- so going back to our proof, we have found an element of our set E which is bigger than x, which means that x is not the supremum of E, which is a contradiction to our assumption. These are assumptions for the theorem. So we're always assuming this. So this is a contradiction. Thus our assumption that x squared is bigger than or equal to 2-- I mean less than 2, which is different from our assumptions of the theorem, is false. So now, we want to prove that x squared equals 2. So we now show x squared equals 2. Since x squared-- so this is not exactly the proof. Just I'm going to rewrite it a little differently so it's maybe clear that instead of showing also x squared is less than or equal to 2, which we know the less than cannot happen, let's just show that x squared cannot be bigger than 2. Since x squared is bigger than or equal to 2, this means either x squared equals 2, which we want to show, or x squared is bigger than 2. So let's rule out this case. We now show that the case x squared bigger than 2 cannot hold. So suppose otherwise. We're going to do this-- this is going to be a proof by contradiction, as well. I guess when I say x squared bigger than 2 cannot hold, I'm also saying x squared must be less than or equal to 2. But anyway, so let's show this cannot hold. So we're going to do this by contradiction, as well. So we're trying to show this cannot hold. So let's assume that it does hold. So that's the negation of what we want to show, which is this statement, that x squared greater than 2 does not hold. So assume x squared is bigger than 2. 
So what we're going to do is we're going to find an upper bound for the set E which is strictly smaller than x. And we have to do that next time because I think I'm about to run out of time. So I hate to stop the proof here, but we'll finish this in the next lecture.
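Since the proof pauses here, it may help to record the step completed above in symbols. This is an editor's reconstruction of the estimate on (x + h) squared, using the h chosen in the lecture; the remaining case, x squared bigger than 2, is handled in the next lecture.

```latex
% Assume x = sup E with x >= 1 and, for contradiction, x^2 < 2. Set
\[
h=\min\!\left\{\tfrac12,\ \frac{2-x^{2}}{2(2x+1)}\right\}>0,
\qquad 0<h\le\tfrac12<1 .
\]
\begin{align*}
(x+h)^{2} &= x^{2}+2xh+h^{2} \\
          &< x^{2}+2xh+h       && \text{since } 0<h<1 \text{ gives } h^{2}<h \\
          &= x^{2}+(2x+1)h \\
          &\le x^{2}+\tfrac{1}{2}\,(2-x^{2}) && \text{since } h\le\tfrac{2-x^{2}}{2(2x+1)} \\
          &< x^{2}+(2-x^{2}) = 2 .
\end{align*}
% So x + h is in E while x + h > x, contradicting that x is an upper bound for E.
```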
MIT_18100A_Real_Analysis_Fall_2020
Lecture_10_The_Completeness_of_the_Real_Numbers_and_Basic_Properties_of_Infinite_Series.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: OK, so let's continue our study of sequences of real numbers. So we've seen special types of sequences, monotone sequences, before. And then in the previous lecture, we looked at sequences obtained from sequences, namely the sequences that give you the lim sup and lim inf. And we showed that these are actually limits of subsequences. Now I'm going to define what looks like a new class of sequences. But we'll see it's actually not. These are called Cauchy sequences. So "coe-shee"-- not "couch-ee," not "cawt-shee"-- Cauchy, so a French guy. So it's pronounced Cauchy and probably not even pronounced like that. It's probably got a different pronunciation by people who actually speak French. So what is the definition of a Cauchy sequence? A Cauchy sequence, intuitively, it's a sequence so that if you go far enough out in the sequence, any two entries in that sequence are close together. So convergent sequences had the property that if you go far enough out, the entries in the sequence are getting close to a real number. Cauchy sequence is that any two entries are close to each other. So a Cauchy sequence, so we say a sequence is Cauchy if we're all epsilon positive, there exists an M, natural number, such that if n is bigger than or equal to M and k is bigger than or equal to M, then xn minus xk is less than epsilon. So maybe not write if. I mean, it's the same statement, but since it looks-- so it'll look a little more like previous statements when we put a "for all" there. So you have a definition here. It's the definition of a new thing. You should now try to look at an example and then possibly negate the definition to see if you really understand it. So an example of a Cauchy sequence is x of n equals 1 over n, our favorite sequence. So let's prove this. So all we have is the definition. So we have to verify that x of n equals 1 over n verifies the definition of being Cauchy. So just like when we try to prove something is convergent, which is a "for all" epsilon statement, the first thing you have to do is let epsilon be positive. And then I have to choose M and show that that capital M produces this statement here. So choose M, a natural number, so that 1 over M is less than epsilon over 2. So I could phrase that as capital M being bigger than 2 over epsilon, but I'm going to phrase it this way. Now we have to show that it works-- namely, if I take n bigger than or equal to capital M and k bigger than or equal to a capital M, then this difference is less than epsilon. Then if n is bigger than or equal to M, k is bigger than or equal to M, and I look at 1 over n minus 1 over k, this is less than or equal to, by the triangle inequality, the absolute value of each of these added together, which is just 1 over n plus 1 over k. And since these are both bigger than or equal to M, each 1 is less than or equal to 1 over M 1 over M, so I get 2 over M, which, by our choice of M, is less than epsilon. So x of n equals 1 over n is an example of a Cauchy sequence. So let's negate the definition, and then we'll look at an example of a sequence which is not Cauchy. And as you'll probably guess, if this is our favorite sequence, which converges, our favorite sequence which doesn't converge will be an example of a sequence which is not Cauchy. And this should shouldn't come as a surprise. Because, again, a sequence which is Cauchy, if you go far enough out, any two entries are close to each other. 
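As a quick numerical sanity check of the definition just verified, the following short Python snippet picks M the same way the proof does and spot-checks that pairs of entries beyond M stay within epsilon. This is an editor's illustration, not part of the lecture; the particular epsilon and the range of pairs checked are arbitrary choices.

```python
# Illustration: x_n = 1/n satisfies the Cauchy condition.
# Following the proof above, for a given eps choose M with 1/M < eps/2.
import itertools
import math

eps = 1e-3
M = math.ceil(2 / eps) + 1          # guarantees 1/M < eps/2

def x(n):
    return 1.0 / n

# Spot-check many pairs (n, k) with n, k >= M.
worst = max(abs(x(n) - x(k))
            for n, k in itertools.product(range(M, M + 200), repeat=2))
print(worst < eps)   # expected True: |1/n - 1/k| <= 1/n + 1/k <= 2/M < eps
```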
But if we look at, for example, the sequence minus 1 to the n, which is just minus 1 plus 1 minus 1, any two entries will differ by-- or you can always choose two entries-- that differ by 2 in distance. So let's negate this definition to get what it means for something not to be Cauchy. So we'll not write all that out. So x of n is not Cauchy if-- so every time we see a "for all," it becomes "there exists." If there exists a bad epsilon 0 positive such that for all M, a natural number, you can find two entries further out than M that are greater than epsilon 0 distance to each other. So there exists n bigger than or equal to M and k bigger than or equal to M such that x of n minus x of k is bigger than or equal to this bad epsilon. OK Again, the definition of Cauchy means that, as long as I go far enough out in the sequence, this distance is supposed to be less than epsilon. So for all epsilon positive, there exists a capital M so that I have this picture. If I choose x sub k plus 1, then it should also be within distance epsilon to x sub n or x sub k. So they're getting closer and closer together. The negation means that they're not getting closer and closer together to each other. So there exists some small distance so that you can always go as far out as you want and find two entries that are greater than epsilon 0 distance to each other. So what's an example of that? Like I said, the sequence minus 1 to the n is not Cauchy. That just doesn't look right. There we go. Now it is. So this is not Cauchy. So that means there should exist some bad epsilon 0. So I can go as far out as I want and find two entries in the sequence differing from each other by epsilon 0 in distance. So basically, I can always find two entries in the sequence which differ from each other by 2. So that'll be my bad epsilon 0. So if you like, here's a proof. Choose epsilon 0 equals 2. Let M be a natural number. So now we have to find an element of entries in the sequence further out than M whose distance to each other is bigger than or equal to 2. We can just take M plus 1 and capital M. Choose n equals M and k equals M plus 1. So these are both bigger than or equal to M. Then minus 1 to the n minus 1 to the k, this is equal to 1 minus minus 1 after I factor out a minus 1 to the capital M, which equals 2. So minus 1 to the n is not Cauchy. So I, at the beginning, said that this will look like a definition of a new type of sequence, but it's not, really. So as it turns out, the elements of the sequence are getting closer and closer together as you go far enough out. So They're all kind of clustering near each other, which kind of makes you think they're all clustering near something in the real number line. And therefore, maybe, the sequence is convergent. Now, this is true-- and we'll prove this-- that a sequence is Cauchy if and only if it's convergent. Now, this is true only for the real numbers. And I'll say a little bit about this in a minute-- or not only true for the real numbers, but it's not true for the rational numbers. And I'll get to that in just a second. So what we're going to prove is that a sequence is Cauchy if and only if it is convergent. So the first thing I want to show is that Cauchy sequences are bounded. So the proof of this statement is essentially the same as the proof that convergent sequences are bounded. So let me draw a picture that goes along with this proof. So as long as I go far enough out, there exists an M so that for all n bigger than or equal to capital M, I can say this. 
Let's look at this entry x of M. Then for all n bigger than or equal to capital M, all of the other entries have to be within a certain distance to x sub capital M, based on the definition of Cauchy. So let's say I make that distance 1. And let's say 0's over here, just for this picture. So then for all n bigger than or equal to capital M, x sub n lies in this interval here. And therefore, we'll get that x of n is bounded. So the way this picture looks, the interval runs from x sub capital M minus 1 to x sub capital M plus 1. Now, that handles all n bigger than or equal to capital M. So we just need to deal with the first capital M minus 1 other guys. So maybe x sub capital M minus 1 is over here. x sub 1 is over there. x sub 2 is over here. So then our bound will just be this one, which handles all of the n bigger than or equal to M, plus the absolute values of these guys that we missed. So since xn is Cauchy, there exists an M, a natural number, such that for all n bigger than or equal to M, and k bigger than or equal to M, x sub n minus x sub k is less than 1 in distance. So this is certainly true for k equals capital M. So this implies for all n bigger than or equal to M, x sub n minus x sub capital M is less than 1. So now, if I use the triangle inequality, I can show that the previous implies that for all n bigger than or equal to M, if I look at the absolute value of x sub n, this is equal to the absolute value of x sub n minus x sub capital M plus x sub capital M. And this is less than or equal to the absolute value of this guy plus the absolute value of this guy. And the first of these is bounded by 1. So in summary, I've shown that for all n bigger than or equal to this fixed integer, capital M, x sub n is less than or equal to x sub capital M in absolute value plus 1. So that's for all little n bigger than or equal to capital M. So now I just need to pick a big enough number that bounds the first capital M entries that are not covered by this inequality. Capital M is fixed. So let B be the absolute value of x sub 1 plus the absolute value of x sub 2, and so on up to the absolute value of x sub capital M minus 1, plus this fixed number, the absolute value of x sub capital M plus 1. Then for all n bigger than or equal to capital M, I have, by this inequality up here-- this is a sum of non-negative numbers, so this number B is certainly bigger than or equal to just this last part. And if I have n bigger than or equal to 1 and less than M, then x of n, the absolute value of this guy, is going to be one of these that appears here, which is certainly less than or equal to B once I add on this fixed number and the others. So now I've found a B which is non-negative which bounds all the absolute values. And therefore, this proves that the sequence is bounded. So we've shown that a Cauchy sequence is bounded. And so what I'm now going to show is the following. So again, all of the entries are getting close to each other. They're kind of clustering near each other. So it kind of feels like they want to converge. And this next theorem says that, well, if you've identified a limit along a subsequence, then, in fact, the entire sequence converges. So of course, this is not true for an arbitrary sequence. If a subsequence converges-- or I should say, for an arbitrary sequence, it's not true that a subsequence converging implies the full sequence converging. We have minus 1 to the n for which a subsequence converges, but the whole sequence does not converge. But if we make the additional hypothesis that the sequence is Cauchy, then the sequence converges if and only if that subsequence converges.
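Before that theorem is stated, here is the bound from the boundedness argument above, written out in one place. This is an editor's summary in the same notation, not the lecturer's board work.

```latex
% Cauchy => bounded: with M chosen so that |x_n - x_k| < 1 for all n, k >= M,
\[
|x_n|\le |x_n-x_M|+|x_M| < 1+|x_M| \quad\text{for all } n\ge M,
\]
% so the single number
\[
B=|x_1|+|x_2|+\cdots+|x_{M-1}|+\bigl(|x_M|+1\bigr)
\]
% bounds |x_n| for every n: each of the first M-1 terms appears as a summand of B,
% and every later term is at most |x_M| + 1, which is also at most B.
```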
So the statement of the theorem is the following. If x sub n is Cauchy and there exists a subsequence which is converging to some number-- call it x-- then the whole sequence converges to x. So what I was saying right before I stated this theorem is that if I hide this part and just say, there exists a subsequence which is converging to x, this does not imply that the full sequence converges to x. Because we had this example of minus 1 to the n. But if I also assume the sequence is Cauchy, then it does follow that Cauchy plus subsequence converging implies the full sequence converges. So I want to show that xn converges to x. So we want to show-- and we're going to do this just by using the definition, by verifying this through the definition, not using the squeeze theorem or anything like that. So let epsilon be positive. Since xn is Cauchy, there exists M0, a natural number, such that for all n bigger than or equal to M0 and k bigger than or equal to M0, x sub n minus x sub k is less than epsilon over 2. Why this epsilon over 2? Or why should you not be surprised? Well, we have two assumptions here. So like we did when we did convergence of products of sequences and so on, which had two assumptions, namely two sequences converged to something, typically, that means we'll have two integers coming. We'll choose a bigger integer and then some inequalities to get an epsilon. So that's a little bit of a rambling answer to why we get an epsilon over 2 here, or why we put one there. Since the subsequence-- so this subsequence converges to x-- there exists another integer, M sub 1, such that if k is bigger than or equal to M sub 1, then x sub n sub k minus x is less than epsilon over 2. So maybe I should have used a different letter here. Let's use a little m. Because I don't want you to think these have to be the same k. So now we'll choose an integer bigger than both M0 and M1 and show that it works. Choose M to be M0 plus M1. Now we need to show this works. And if n is bigger than or equal to M, so let me, actually, make a first observation before I go to the n bigger than or equal to capital M. Then, since n sub k is bigger than or equal to k for all k, a natural number-- just because the n sub k's are an increasing sequence of integers, which starts at least at 1-- and since n sub k is bigger than or equal to k for all k, this implies that the integer n sub capital M is bigger than or equal to M, which, remember, is M0 plus M1, which implies that n sub M is bigger than or equal to M0 and n sub M is bigger than or equal to M1. So I just wanted to make this preliminary observation. And now we'll go to showing that this capital M works. So now, if n is bigger than or equal to capital M, and I look at x sub n minus x, in absolute value, and add and subtract x sub n sub capital M and use the triangle inequality, then-- so since n is bigger than or equal to capital M, which is bigger than or equal to M0, that means n is bigger than or equal to M0. And then n sub M we just showed is bigger than or equal to M0. So by this inequality, I get that the first term is less than epsilon over 2. And now, M is certainly bigger than or equal to M sub 1. And therefore, I will get that this part is less than epsilon over 2 because of this inequality. So that choice of capital M works. And now, we'll prove the following, that a sequence is convergent if and only if it's Cauchy. So this is a two-way street. So we need to show the left implies the right and then the right implies the left.
So this direction is, in fact, easy. Based on what we've done-- I shouldn't say it's easy-- but what we've done so far, it quickly follows. So we're assuming x sub n is Cauchy. I'm trying to show it's convergent. So if x sub n is Cauchy, this implies that x sub n is bounded, the sequence is bounded, which implies by the Bolzano-Weierstrass theorem that x sub n has a convergent subsequence. And by the theorem we just proved, if a Cauchy sequence has a convergent subsequence, it must be convergent. Now, for the converse direction, that xn is convergent implies xn is Cauchy, well, so this should not come as a surprise. Let me draw a picture. Let's suppose x sub n is converging to x, and epsilon is positive. Then, since the xn's are converging to x, if I draw a little interval around x of total length epsilon-- so x minus epsilon over 2 and x plus epsilon over 2-- then I will find, as long as so then there exists M so that, for all n bigger than or equal to capital M, all of the x sub n's lie in this interval. They all lie in this interval because they have to be within distance epsilon over 2 to x if the xn's are converging to x. And since they lie in this interval, the distance between any two of them can only be as big as the length of the interval, which is epsilon. So this is essentially the picture of why a convergence sequence has to be Cauchy. So now let's turn this picture into math. We have to verify xn is Cauchy through the definition. That's all we have. So let epsilon be positive. Since the xn's converge to x, there exists an integer M sub 0, a natural number, such that for all n bigger than or equal to M sub 0, x sub n minus x is less than epsilon over 2. And so we'll choose the M for our definition of Cauchy to be this M sub 0. And if n is bigger than or equal to M and k is bigger than or equal to M and I look at the absolute value of x sub n minus x sub k and add and subtract x and use the triangle inequality, this is less than or equal to x sub n minus x, an absolute value, plus x minus x sub k. Each of these is less than epsilon over 2 since n is bigger than or equal to capital M, and k is bigger than or equal to capital M. So this is less than epsilon over 2 plus epsilon over 2 equals epsilon. And therefore, xn is Cauchy. Now I want to make a brief remark about the previous theorem. So remember how this whole story started off? There was something wrong with the rational numbers, namely, they didn't contain the square root of 2. So we couldn't solve the algebraic equation x squared minus 2 equals 0. But this also, then, turned into the rationals not being complete in the sense of order. Not every non-empty bounded set had a supremum. It didn't have the least upper bound property. But you can also interpret this lack of having square root of 2 as somehow saying that the rationals are incomplete in this sense. So hopefully, at the end of this class, we'll be able to get to metrics basis. But so what do I mean by that? Let's say I look at this statement now within the universe of rational numbers. So now, if this sequence is rational numbers-- meaning sequences are only sequences of rational numbers, limits are only elements of the rational numbers, epsilon is only a rational number, and so on-- then we still have many of the same theorems that we proved-- not all of them, and I'll indicate which ones don't hold. But if we only work in rationals, then we always do have a convergence implies Cauchy, meaning convergent sequences are Cauchy. 
But Cauchy sequences are not necessarily convergent. Again, what's the example here, or what's the intuitive example? Take x sub n so that x of n is in Q. And now viewed in the universe of real numbers, x sub n's converge to root 2. Then such a sequence would be a Cauchy sequence. We just proved that, basically. So such a sequence would be a Cauchy sequence of rational numbers. However, it would not converge in the set of rational numbers. It would converge to the square root of 2, which is not a rational number. So because the square root of 2 is not a rational number, this shows that the rational numbers do not have this completeness property that Cauchy sequences converge. So there's a whole, still, to this day, kind of industry of studying spaces for which Cauchy is equivalent to convergent. These are called complete metric spaces. And then if you add a little more structure, they're called Banach spaces and so on, which are very important, not just in math but also for formulating rigorously a lot of the underlying assumptions for mathematical physics. So if we're just looking inside the rationals, it does not follow that Cauchy sequences always converge. And now let's just stop for a minute and take stock of why this was true for the real numbers. What did we use going back? So if you really go back to the proof of the Bolzano-Weierstrass-- so that's what we used here to show that Cauchy sequences converge-- we use the fact that the lim sup and the lim inf always exists. And lim sup and lim inf, first off, they're defined to be sups and infs, which may not always exist as rational numbers, as we've already shown. So that's definitely a problem already there. But even more so, when we prove that every bounded monotone sequence converges, what we showed was that this limit is actually a sup of a certain set or an inf of a certain set, which, again, may or may not exist if we're just looking in the rational numbers. Because the rational numbers do not have the least upper bound property. So it really is the least upper bound property that gives us convergence equivalent to Cauchy for the real numbers. So for R, the least upper bound property is-- it has to be because that's the main thing that separates the two fields, but I'm just reiterating this here-- is the reason why convergent is equivalent to Cauchy. Now that I've proved that Cauchy is equivalent to convergence, maybe you'll ask, then why did we introduce it at all? If these two notions are the same, why even introduce them if they're just convergent sequences already? And the reason is because to show that a sequence converges, you have to somehow have your hands on a candidate for the limit. If you want to prove that xn converges, you have to somehow come up with an x that it converges to. And it's not always clear how to find that x. But Cauchy, although it's equivalent to a convergent in the set of real numbers, doesn't require you to find a candidate for convergence. All it requires you to do is show that, as long as you go far enough out, any two entries in the sequence are close together without requiring you to come up with a limit. See, computing limits is quite difficult. We're about to do series. And there's maybe, I don't know, five series people can compute explicitly. But you do know that there's a ton of other series that are actually convergent, even though you don't know what the limit as. And why do you know that? 
This is exactly because and exactly why people thought of Cauchy sequences to begin with in much of analysis. So again, just to summarize this, convergent sequences are nice. But in practice, it's difficult to get your hands on what could be a limit of a sequence, especially if that sequence is pretty complicated. So if you're trying to show a certain sequence converges, it suffices, by what we've done here, to show that it's Cauchy. And that's a little bit easier to do because that just requires you to work with the original sequence. You don't have to come up with a limit. You can just take your sequence and start playing directly with the entries rather than try to come up with a limit explicitly. So that's what we're going to move on to is series now, which, as I said a minute ago, is original reason why people started developing the foundations of analysis, what we're talking about right now, to begin with. Because they were just kind of doing very formal things that ended up not making sense, like they were adding infinitely many positive numbers and coming up with a negative number. Well, that can't be right. So all of this was created, discovered-- however you want to phrase it-- to put on rigorous foundations this next topic, which is series. And you dealt with series in calculus, so you know what a series is. Maybe you don't remember all the proofs of the properties of series. But suffice it to say, series is a pretty good motivation since it's one of the most useful things that comes out of math. Series expansions are how you solve ODEs, PDEs, Taylor expansions. All these things are, in some sense, a form of series. So being able to justify them as being real things is a necessity. So the definition of a series, for now, really is just this symbol I'm about to write down. So given a sequence x sub n, the symbol-- or maybe I'll just sometimes write just the sum x sub n-- is what's called the series associated to the sequence x sub n. So right now, that's just a symbol. We're going to interpret this as a real number in the following situation. We say that the series converges, if the following sequence given by s sub m equals-- so s sub m, this is the element of the sequence. And what is it? It is the actual sum. So this is not a formal thing. This is just a finite sum from n equals 1 to m. So this is-- so this sequence now, m equals 1 to infinity. And these guys we call partial sums converges. So right now, if we just have a sequence x sub n, the series associated to that sequence is just a symbol. We say that this series converges if this sequence of partial sums converge and if s is this limit. And we write s is equal to the series and treat the series now as a number. So in general, if I have a sequence, I just have this formal symbol, which I'm writing down, which I call a series associated to it. In the case that the sequence of partial sums converges, then I actually identify this series with a real number and treat it as a real number. And so the way I've written this, the series is starting at 1. But it doesn't necessarily have to. So just by shifting the index-- so let me just say here that we don't necessarily have to start a series at n equals 1. So this could be sum from n equals 0 to infinity, in which case, we have a sequence starting now at x sub 0. Or this could be starting at 2, in which place the sequence of x sub n starts at n equals 2. 
And then the sequence of partial sums would start at not m equals 1 but m equals 0 or m equals 2, depending on where the series starts. So some examples. The series sum from n equals 1 to infinity of 1 over n plus 1 times n, this is a convergent series. So why is this? So let's look at the proof. So we look at the m-th partial sum-- this is the sum from n equals 1 to m of 1 over n plus 1 times n. And this is equal to-- now, if I write 1 over n plus 1 times n as 1 over n minus 1 over n plus 1, this is now the sum of the 1 over n terms minus the sum of the 1 over n plus 1 terms-- these are finite sums, so I can always split them up. And so now, this is equal to 1 plus 1/2 plus up to 1 over m, minus the quantity 1/2 plus 1/3 plus up to 1 over m plus 1. And you see all of these cancel. And all that's left is 1 minus 1 over m plus 1. So the m-th partial sum is equal to 1 minus 1 over m plus 1. And therefore, the limit of the sequence of partial sums is the limit as m goes to infinity of this, which is just 1. And therefore, this series converges. Now, our favorite sequence, which does not converge, will give us a series which does not converge. So let's look at sum from n equals 1 to infinity of minus 1 to the n. This does not converge. So what's the proof? The m-th partial sum, this is equal to minus 1 plus 1 plus minus 1 up until I get minus 1 to the m. And therefore, this is always equal to one of two things. If m is odd, then I have an odd number of these guys. And therefore, the minuses and pluses cancel, just leaving a minus 1 in the end-- this last one, the odd one. So it's minus 1 if m is odd and 0 if m is even. If I add up an even number of these terms, then all of the minus 1's and 1's cancel out, so I just get 0. And therefore, this sequence, which is just minus 1 for m odd, 0 for m even, does not converge. And therefore, the series does not converge. So when I write this, this is just a symbol. This is just chalk on a chalkboard. It doesn't mean anything. So let's go to another series, which does converge. And this is kind of the one to which we compare all other series, essentially, as you'll see. You have all these series tests that you remember, hopefully, from calculus that tell you when a series converges. But maybe, if you remember the proof or don't, how you do that is you compare it to one series, which you do know how to sum. So it was just by pure luck we were able to compute the sum, or compute the explicit sum of the series, for this guy. Another one which we can do that for is geometric series. So the theorem is if I have a real number r with absolute value less than 1, then the series starting now at 0 of r to the n converges. And I can actually compute the sum of this series. And this is 1 over 1 minus r. So what's the proof? Let's look at the partial sums. And we can actually compute these, as well, just as we were able to do for the first example. We compute that the sum from n equals 0 to m of r to the n-- now, you can prove this by induction. I cannot exactly remember if I did this-- I believe I did-- in the second lecture on induction, first or second lecture on induction. But if I add up some number raised to the n-th power from 0 to m, this is equal to 1 minus r to the m plus 1, all over 1 minus r. And I guess two lectures ago, we proved that if the absolute value-- so let me state this now.
Two lectures ago, we proved that if the absolute value of r is less than 1, then the limit as, let's make it m, goes to infinity of r to the m equals 0, which implies that-- so this was the m-th partial sum-- which implies that the limit as m goes to infinity of s sub m equals the limit as m goes to infinity of this thing, which is, if you like-- so that plus 1 there just multiplies r to the m by r. And using the algebraic facts we proved about limits, this is 1 minus r times 0, over 1 minus r, which equals 1 over 1 minus r. So now you ask about the other cases. What about r with absolute value bigger than or equal to 1? Well, when r equals minus 1, then we get the second example we looked at. If we get r equals 1, then that's just summing up 1, and I'll leave it to you to check that that does not converge, that the sequence of partial sums, if I just sum up 1 from n equals 0 to m, is equal to m plus 1, which does not converge. And let me make just a kind of silly comment. And maybe I didn't explicitly make this comment about sequences. Maybe I forgot to do that, as well. So for a sequence, you could start your sequence not necessarily at the first entry. Maybe you look at a, if you like, new sequence where-- well, so we know from sequences that subsequences of convergent sequences converge. So if I, instead of looking at the whole sequence, start at, let's say, x100 and then go x101, x102, and that's the sequence I look at, well, that's a subsequence of the original one, and if the original converges, that subsequence converges. So all that is to say-- and for this simple way of obtaining this new sequence-- all of that is to say that to understand if a sequence converges, I don't have to consider what happens for the first finitely many terms in the sequence, meaning a sequence x sub n converges if and only if the sequence starting now at, say, n equals 100-- and 101, 102, 103 and so on-- converges. And the same is true for series, that a series converges if and only if a series converges, now, starting at a different point along the sequence. So this is the following theorem. Let xn be a sequence, and let capital M be a natural number. Then the sum from n equals 1 to infinity of x sub n converges if and only if the series, now starting at capital M, converges, meaning when I have to decide whether a series converges or not, it doesn't matter what's going on for the first finitely many terms. What matters is what's going on as I keep adding terms from further and further out of this sequence. And what's the proof? The proof is just expressing the partial sums for this guy in terms of the partial sums for this guy. So the partial sums satisfy, for all m, the sum from n equals 1 to m of x sub n equals the sum from n equals capital M to m of x sub n plus the sum from n equals 1 to capital M minus 1 of x sub n. So this last piece is now just a fixed number. So this is a sequence of partial sums corresponding to this series. This is a sequence of partial sums corresponding to this series. And this is just a fixed number. Therefore, if this converges, then this side converges to this plus this-- so maybe I'm going a little quick. But so if this converges, then this converges to this minus this fixed number. And if this converges, then this sequence of partial sums converges to this limit plus this fixed number. And that's all I'm going to write. Now, coming back to the usefulness of Cauchy sequences, this is kind of where they really become useful is in the study of series. Because again, it's difficult to sum a series directly.
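For the two series that could be summed exactly above, a quick numerical check confirms the closed forms. This is an editor's illustration, not part of the lecture; the cutoffs and the values of r are arbitrary choices.

```python
# Partial sums of the two convergent series computed above.

def telescoping_partial_sum(m):
    # sum_{n=1}^{m} 1/(n(n+1)); derived above to equal 1 - 1/(m+1)
    return sum(1.0 / (n * (n + 1)) for n in range(1, m + 1))

def geometric_partial_sum(r, m):
    # sum_{n=0}^{m} r^n; derived above to equal (1 - r**(m+1)) / (1 - r)
    return sum(r ** n for n in range(0, m + 1))

print(telescoping_partial_sum(10_000))     # close to 1
print(geometric_partial_sum(0.5, 60))      # close to 1 / (1 - 0.5) = 2
print(geometric_partial_sum(-0.9, 500))    # close to 1 / (1 - (-0.9)) ~ 0.5263
```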
So when I keep saying the words "sum a series," I'm talking about finding the limit of partial sums. But because we have this equivalence between Cauchy sequences and convergent sequences, to decide if a series is convergent or not, we can just decide if, in some sense, it's Cauchy or not. So let me make this definition. We say that a series x sub n is Cauchy if the sequence of partial sums-- again, I'm just going to put an m up top because this may start at 0 or n equals 1 or something-- so the sequence of partial sums is Cauchy. And so let me just restate what we proved for sequences in terms of series. So we proved that every Cauchy sequence is convergent. So a series is Cauchy means that the sequence of partial sums is Cauchy. But we've proven that Cauchy sequences are convergent. So if this is Cauchy, then it's convergent and vice versa. So based on what we've proven already for sequences, it follows that-- and this just follows immediately from what we've proven already, so I'm not even going to write a proof-- a series is Cauchy if and only if the series is convergent-- again, because both are defined in terms of the sequence of partial sums associated to the series. And we've already proven the equivalence between Cauchy and convergence for sequences. Now, let me write what it means to be Cauchy in a slightly different way. And it's the following. So before, we had that a sequence is Cauchy, intuitively, if the elements of the sequence are getting close to each other. Now, for a series to be Cauchy, the intuitive way to think about it is that the tail of the sum is getting small, is getting arbitrarily small, the tail of the sum being what you get if you add up finitely many terms far enough out. So a series is Cauchy if and only if for all epsilon positive there exists an M, a natural number, such that for all integers l bigger than m, which is bigger than or equal to capital M, the sum from n equals m plus 1 to l of x sub n-- so this is a sum involving terms that are pretty far out there, at least all indexed by something bigger than or equal to capital M-- is less than epsilon in absolute value. So a series is Cauchy if you're adding up smaller and smaller pieces, not individual pieces but actually adding those up. So I'll leave it to you to do-- they're both pretty easy-- but this direction, I'll leave it to you as an exercise. And it'll follow immediately from what I'm going to write for, essentially, this direction. So let's suppose-- and let's make things concrete, starting somewhere-- let's suppose this sum is Cauchy. And we want to prove now that it has this property. So let epsilon be positive. We now want to produce some natural number capital M so that this holds. So since the sequence of partial sums s sub m is Cauchy-- that's what it means for the series to be Cauchy-- there exists a natural number, M sub 0, such that for all m bigger than or equal to M0 and l bigger than or equal to M0, s sub m minus s sub l is less than epsilon in absolute value. So we're actually going to take capital M to be this M0. Choose M to be M0. Then, if l is bigger than m, which is bigger than or equal to M, which is equal to M0, and I look at this sum from n equals m plus 1 to l of x sub n, I can write-- so this is in absolute value. So in fact, let me remove the absolute value so that this becomes pretty clear. I can write this sum as the l-th partial sum minus the m-th partial sum. Because the l-th partial sum sums from n equals 1 up to l. The m-th partial sum sums from n equals 1 up to m.
So the sum containing only the terms between m plus 1 and l is the difference of these guys. So now I'll put the absolute values back on. And this thing, because l and m are bigger than or equal to capital M, which is equal to M0, and because I have this inequality, this is less than epsilon. And the converse direction, again, it follows immediately, essentially, from this equality here. So to check whether a series converges, I don't have to somehow come up with a limit for this series, a sum for this series. I can just prove that the tail can be made arbitrarily small, as long as I go far enough out in the series. And from this, we get a pretty simple elementary property, so a theorem. If a series converges, then this implies that the limit as n goes to infinity of x sub n, the sequence you used to obtain the series, equals 0. So this should fall in line with a series being convergent if and only if it satisfies this property here, which is somehow saying the tail of the sum is getting smaller and smaller, which means you can't be adding up big things as you go on out in the series. So proof. So we'll show this by a simple epsilon M definition. So suppose xn converges. Then the series xn is Cauchy. And now I'm going to verify this using the epsilon M definition. Let epsilon be positive. Since xn is Cauchy, this implies that there exists a natural number M0 such that for all l bigger than m, which is bigger than or equal to M0, I have that condition there, the sum from n equals m plus 1 to l of x sub n is less than epsilon in absolute value. Choose M to be M0 plus 1. So why M0 plus 1 but not exactly M0? Just because, basically, what I'm going to do is I'm going to take l to be equal to m plus 1. And the index gets shifted by 1. Then if m is bigger than or equal to M, I get that the absolute value of x sub m-- maybe instead of saying limit as n goes to infinity, I'll write limit as m goes to infinity, it's just a change in the dummy variable-- this is equal to the sum from n equals m to m x sub n. And so I've shifted this. So now little m is bigger than or equal to capital M0 plus 1. So you could write this as, if you like, m minus 1 plus 1. So little m minus 1 is bigger than or equal to M0. So by this inequality, this is less than epsilon. So we see that if a series converges the terms, the individual terms, x sub n must converge to 0. So there's another reason why this series minus 1 to the n does not converge. Because those terms do not converge to 0. And this also tells us that for this geometric series, when r is greater than or equal to 1 in absolute value, the series does not converge. And the proof is-- and we proved this, in fact, I think, a few lectures ago, as well, that if the absolute value of r is bigger than 1, then the limit as n goes to infinity of r to the n, this limit does not exist. We showed it's, in fact, unbounded. r to the n is an unbounded sequence. So it does not converge. So I'm using that theorem over there in a little bit roundabout way. Let me restate this theorem over here. This theorem says if the series converges, then this limit equals 0. Now, this statement is logically equivalent to its contrapositive-- that's the word I was looking for-- namely that the negation of this implies the negation of this. So a logically equivalent way to rewrite that statement over there is that if this limit does not equal 0-- so if it doesn't exist at all, that's also fine-- this implies the series does not converge.
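For easy reference, here are the two facts just established, in symbols. This is an editor's restatement in the lecture's notation, not additional material.

```latex
% A series sum x_n is Cauchy iff its tails are small:
\[
\forall \varepsilon>0\ \exists M\in\mathbb{N}\ \forall\, l>m\ge M:\quad
\Bigl|\sum_{n=m+1}^{l} x_n\Bigr| = |s_l - s_m| < \varepsilon .
\]
% Taking l = m+1 gives the term test proved above:
\[
\sum_{n} x_n \text{ converges}\ \Longrightarrow\ \lim_{m\to\infty} x_m = 0,
\]
% equivalently, by the contrapositive (used for (-1)^n and for |r| >= 1):
% if x_m does not tend to 0, the series does not converge.
```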
So this restatement of the theorem over there is really what I used here. All right. And I think I'll stop there. Next time, we'll see that this theorem here is a one-way street. And I think you covered this example in, probably, calculus, namely that one-way street in the sense that if x then converges, then this limit is 0. But the converse does not hold. Namely, it is not true that if this limit is 0, then this converges. And we'll see the famed harmonic series next time.
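As a numerical preview of the one-way street just mentioned, the snippet below shows terms going to 0 while the partial sums keep creeping upward. This is an editor's illustration; the harmonic series itself is only taken up in the next lecture, and the cutoffs shown are arbitrary.

```python
# Terms going to 0 does not force a series to converge.
# Editor's illustration of the "one-way street" above, using the sum of 1/n.

def partial_sum(m):
    return sum(1.0 / n for n in range(1, m + 1))

for m in (10, 100, 1_000, 10_000, 100_000):
    print(m, 1.0 / m, partial_sum(m))
# The terms 1/m shrink to 0, but the partial sums keep growing
# (roughly like log m), so they are not approaching any limit.
```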
MIT_18100A_Real_Analysis_Fall_2020
Lecture_1_Sets_Set_Operations_and_Mathematical_Induction.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: OK. So I have to admit this is extremely awkward, lecturing to an empty room. So I have to imagine there's somebody on the other end actually listening to me at some point. Perhaps this is what YouTube stars have to go through at some point in their career. So what is the purpose of this course? So this is for 18 100A, Real Analysis. So the purpose of this course is twofold. Really, I think the first primary purpose of this course is to gain experience with proofs. So that means being able to read a proof, being able to write a proof. And the second statement, or the second purpose, which is supposed to be a way to obtain the first purpose, is to prove statements about real numbers, functions, and limits. OK. So the second part, this is the analysis part, OK? So for the first few lectures, we're going to do what maybe to some will be kind of review. And for most of you, a lot of this material in the first few lectures will be review. But it's a nice way to ease into the material. And things will most definitely pick up after a few lectures. So the first set of objects we're going to define and try to prove some statements about are sets. So definition-- and because I use a lot of shorthand, I will mostly write Dfn from now on instead of the entire word, definition. So a set is a collection of objects called elements or members, OK? Now, this course is supposed to be probably the first really rigorous course in math that many of you will deal with. So essentially, everything that we talk about will be rigorously and unambiguously defined. But we do have to start somewhere. And so maybe you think this word, "collection," is a little ambiguous. And perhaps you should. But to actually build up set theory from the ground up would take us quite beyond the scope of this class and too far afield of the things that we want to do, or at least that what I want to do. OK, so a set is just a collection of objects called elements, or members. There is the simplest set to define: the empty set is the set with no elements. And we denote it by this symbol here-- a circle with a dash through it. OK, so with new math typically comes new notation, new symbols that you use. So let me introduce a few shorthand notations we'll use throughout the course. I mean, quite honestly, this is a little bit of the fun of doing higher math. You get all these funny symbols. And a very accomplished mathematician at the University of Chicago, one time said, you're really only interested in the math where that has the symbols you like to write over and over again. So some notation-- a and this symbol which this symbol, which looks like a e-S means a is an element of S. A with a dash through this little e means everything here but. So a is not an element of S. This upside down A means for all. It's shorthand for all. Backwards E means there exists. And a couple of more-- if you see an arrow like this, this means implies. So I've written down one thing. This implies the next statement. I'll put an arrow between them. And an arrow going both ways means if and only if, meaning if I have a statement P if and only if Q, that means statement P implies Q and statement Q implies P, all right? And if you need a quick refresher on basic logic, you can find that in the appendix of the textbook. OK, so that's the basic definition of set, empty set. 
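The logical shorthand above has a direct counterpart in code. The following tiny Python snippet is an editor's aside, not from the lecture; it simply mirrors the membership notation with Python's built-in sets.

```python
# Membership notation mirrored in Python: "a in S" plays the role of the element symbol,
# and "a not in S" plays the role of the crossed-out element symbol.
S = {1, 2, 3}
empty = set()          # the empty set; note {} would be an empty dict, not a set

print(2 in S)          # True:  2 is an element of S
print(5 not in S)      # True:  5 is not an element of S
print(len(empty) == 0) # True:  the empty set has no elements
```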
So set A-- another definition to set A is subset of B, which we write A, this little symbol that looks like a C, B If every element in A is an element in B. Little a's in capital A means little a is in capital B. So two sets are equal-- we write A equals B-- if A is a subset of B and B is a subset of A. And A is a proper subset of B if A is a subset of B and A does not equal B. And we typically write that by A and with a dash going through a line underneath the C to signify that it's not equal. So think of it as not, so less than or equal to, but not equal to is one way to think about it. OK, so let me say something since I'm now 1, 2, 3 definitions in. So definitions are a fact of life when it comes to math. In the beginning of any subject, there's going to be a lot of definitions because we have to have objects we want to talk about. And we have to have these unambiguously defined objects. So it may seem like there's going to be a lot of definitions now, but this will let up. And we will start proving some theorems, which are facts, about these objects. These are the things that we're really after. We're not really after just making up definitions. Definitions are meant to be a rigorous way of defining an object we're interested in studying. We're interested in proving theorems, facts about them. So again, a lot of this is just probably review. When we describe sets, we will use these braces and maybe list the elements in here. Or we will describe it as x in some set A, satisfying some property P of x. Or we won't write this x and A part. We'll just write all objects x satisfying x, as being an element of whatever universe we're in, that satisfy property P of x, OK? So again, you should read this as all satisfying property P of x. So the basic examples-- and this is-- you should expect this after seeing any non-trivial definition. If you were here, I would ask you to call me out, so I'll have to police myself. But after every semi-interesting definition, you should see examples, OK? This is how you actually learn about these things, or at least digest what these things are. So we have the natural numbers, which everyone is familiar with since they started to count-- 1, 2, 3, 4, and so on. We have the integers, which is 0, 1, minus 1, 2, minus 2. So all the natural numbers, along with their additive inverses, along with the 0 element, an additive identity, we have the rational numbers. So this is written as m over n such that m and n are integers and n does not equal to 0. And we have R, the real numbers, which I, as of right now, cannot actually write down what they are in terms of set-building notation. In fact, this will be our first goal of the course, is to give a proper description or definition of what R actually is. But you can think of this as you did in calculus, as Q along with-- so rationals and irrationals, like pi and 2 and these things. So this is fine to think about for now. So of course, I didn't have to use these. Maybe I'm interested in odd numbers. That's a set of numbers of the form 2m minus 1, where m is a natural number. So this is just 1, 3, 5, and so on, OK? And so note that we have the inclusions. Natural numbers are contained in the integers, which are contained in the rational numbers, which are contained in the real numbers, OK? And if you look at the history of why these things were thought up in the first place, I mean, they were thought up to solve polynomial equations that you couldn't solve in the number system before. 
Integers were created because I could not solve the equation x plus 1 equals 0 in the natural numbers. Rationals were thought of because I could not solve the equation 2x plus 1 equals 0 in the integers. And the real numbers were thought of because I cannot solve the equation x squared minus 2 equals 0 in the rational numbers. Now, I can't solve the equation x squared plus 1 equals 0 in the real numbers, which led to the creation of complex numbers. But we will not deal with complex numbers in this class. Although hopefully, if you keep studying analysis, you go on to complex analysis, which is really a beautiful subject of study to this day. So as I said-- let me write this here-- our first goal, real goal of the class-- and this is something to keep in mind. We're not going to do it right now. Our first real goal is to describe what R is, OK? I mean, if we're going to be proving statements about the real numbers, functions of real numbers, and limits, whatever-- those limits that you learned in calculus-- then we have to be able to really describe what we're starting with, the real numbers. OK, so let's get back to sets, to our review of sets. So there were some examples. We have a few more definitions. The union of two sets, A and B, is the set which we write-- so this is how we denote it, A U B. This is the set of all elements x such that x is in A or x is in B. The intersection of A and B-- so this was defining the union. This was defining the intersection-- is the set A cap B. And this is the set of all x's so that x is in A and x is in B. So the union is take all the things from A, take all the things from B, and put them together in one big basket. The intersection is just take the things that A and B have in common. The set difference of A with respect to B is the set A backslash B. This is the set of all elements x in A such that x is not in B. The complement of A is the set A complement-- so this is how I'm denoting the set. The next part is how I'm defining the set. This is the set of all elements in our universe that are not in A. And when I say universe, I don't mean this universe necessarily. I mean, if we're looking at subsets of R, the complement is generally with respect to R. Or if all of our sets are subsets of Q, then our universe would be Q, the rationals. And we're taking the complement in there. Two sets are disjoint if their intersection is empty, OK? So it took me quite a long time to figure out this complement has an E in the middle as opposed to an I, as in the compliment you would give a friend. I had to do a lot of spell-checking in my thesis when my advisor pointed that out. So this is just something to keep in mind. This complement has an E in the middle of it. OK, so let me just draw a quick picture. So this blob over here is A. This is a set B. This is a set C. In fact, let's make this a little more-- OK, let's keep C there. Then what I have here, that's A intersect B. This bit over here, with the lines going this way but not including this, this is A take away B, A backslash B. And OK, so that was not meant to be along the same direction as this one, so let's go vertical. And everything with a vertical line is A union B, OK? So A backslash has the lines going this direction. A intersect has the lines going this direction. A union B has the lines going vertical. And C is way over here not touching any of A and B. So A and C are disjoint and B and C are disjoint. OK, they have nothing in common. OK, so this was a lot of definitions.
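(Editor's aside, not part of the lecture: if it helps to experiment with these definitions, Python's built-in set type implements the same operations directly. The small universe U below is an arbitrary choice, used only so there is something to take complements relative to.)

```python
# Illustrative sketch only; the specific sets are arbitrary choices.
U = set(range(1, 11))      # a small "universe" to take complements in
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
C = {9, 10}

print(A | B)               # union A U B            -> {1, 2, 3, 4, 5, 6}
print(A & B)               # intersection A cap B   -> {3, 4}
print(A - B)               # set difference A \ B   -> {1, 2}
print(U - A)               # complement of A relative to the universe U
print(A.isdisjoint(C))     # True: A and C have empty intersection
print(A <= U, A < U)       # subset, proper subset  -> True True
```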
We have not proven a single statement yet, so it's about time we do. This is probably one of the most basic theorems one can prove at the start of a Real Analysis class or any class about proofs. This is analogous to when you write your first Hello World program in a programming class. So let me state the theorem, which is DeMorgan's Laws. And the statement is the following. So if A, B, C are sets, then I have several things I can say. The union of B and C, taking their complement, this is the intersection of the complements. So the complement of the union is the intersection of the compliments. If I take their intersection and take the complement, this is the union of the complements. So the complement of the intersection is the union of the complements. Now, these are complements, meaning I am, in some sense, taking a set difference with respect to the entire universe. But I can make these things relative to some set A. So A take away B union C, this is the same as A take away B intersect A take away C. Really, again, you should think of this as a special case of one. Or at least if you were to write the proof-- I'm not going to because it's all going to be contained in the first two-- then you would see it's really just a proof of this guy. A take away B intersects C equals A take away B union A take away C, OK? So again, for a quick refresher about logic I would look at the appendix of the textbook. In general, so let me make a few remarks before we move on to the proof about typically how this is going to look. So this is some remarks. Typically, a theorem is a statement of the type P implies Q. Let me write this out in English. If some statement P holds, then Q-- for us, it's if I have any three sets, then I have these equalities between these operations of sets. So the general structure you'll see of the class is I have objects which I define unambiguously. I want to prove theorems now, meaning true statements about these objects. And the real meat is the proof part. So what is in this mysterious guy, the proof? It's quite simple. You start with-- you assume P, meaning what you were given, the hypotheses, the hypothesis, P, and-- I'm going to put dots here-- through logic and most definitely, most of the time some calculations, you arrive at Q is true. And most proofs are ended with this little box here, OK? So most proofs have this structure. I take my hypotheses. And these hypotheses mean something in terms of the definitions I have given. And now, I need to use these unambiguous definitions, along with logic and maybe some calculations, to conclude that statement Q is true. That is the essence of a proof. That is all there is to it. Now, that doesn't mean it's a simple thing to learn how to do. That's the point of this course. But distilled down, that's what a proof is, OK? And Q-- so I said P usually means something in terms of the definitions we have. But also, Q will usually mean something in the definitions that we've given. And so our job is to verify Q. So let's go with proving this theorem. And in fact, I'm only going to prove property 1. Property 2, 3, and 4 I'll likely put on the homework. So let B and C be sets. So I mean, this is the only hypothesis I get. I'm trying to prove that B union C complement equals the intersection of the complements. So what does that mean? So we want to prove. So this is-- it's quite helpful, especially when you're first starting to do proofs, to write down what you're actually trying to prove. 
So even though I have this statement here, it's an equality between two sets. Equality between two sets means something specifically, right? We have that in our definition-- where is it-- over there that two sets are equal if one is a subset of the other and vice versa. So that's what we have to prove. We have to prove that the left side, B union C complement, is a subset of B complement intersect C complement and vice versa. So we want to prove that B union C complement is a subset of B complement intersect C complement, and that B complement intersect C complement is a subset of B union C complement, OK? So that's what the equality means. That's what we have to prove. We have to prove those two statements now, OK? And that's as distilled down as far as we can go. So let's prove this. Now, we'll prove this using, again, logic and what these things actually are. So let's prove this first statement here. So I have to show that every element in this set is an element of this set. So I'll even write this down as WTS. That means Want To Show. This is the first thing we'll show. As we go on, I'm not going to write as much as I'm doing right now. But this is the first theorem and proof you're seeing, so I should write down quite a bit. So the first thing we want to show is we have this inclusion, OK? That means every element here is an element here. So let x be in B union C complement. And now, we're going to trace what this means. And we'll eventually arrive at x being in this. So then x is not in B union C. That's just the definition of the complement. Now, x is not in B union C means x is not in B and x is not in C, because the union is-- something's in the union if it's in B or C. So something's not in the union if it's not in B and not in C. Now, this implies, simply again by the definition of what it means to be in the complement, x is in B complement and x is in C complement. But this, again, is simply the definition of x being in B complement intersect C complement, OK? So you see, we started off with an element in this guy and we showed that it's also an element of the right-hand side. So thus, B union C complement is contained in B complement intersect C complement. Now, we want to do this other inclusion here. Now, this is one of those rare situations where you get to essentially reverse the entire argument and get what you want. But let's just go through it in a linear fashion. Let's take something from here and show it's in here. So let x be in the intersection of the complements. Then that means x is in B complement and x is in C complement. That means x is not in B-- so that's this statement. That's the definition of being in the complement-- and x is not in C. That's, again, the definition of being in the complement. Now, just like we used here in this step, this is equivalent to-- so really, I should-- in this statement, I should have written this statement is equivalent to this statement, but we'll remove that. So x is not in B and x is not in C. This means x is not in their union, which implies that x is in the complement of the union, OK? So thus, we've proven that B complement intersect C complement is a subset of B union C complement. And since we've shown both sets are a subset of each other, that means, by the definition of two sets being equal, they are equal. Again, this box means really nothing. It just means that's the end of the proof. All right, let's move over here. This is terrible. And not everybody uses that little box to finish a proof. Some people don't put anything.
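(Editor's aside, not from the lecture: a finite spot check of all four identities in De Morgan's laws. Checking random examples proves nothing, of course, but it is a reasonable sanity check to run alongside the proof. The universe size and number of trials are arbitrary choices.)

```python
import random

U = set(range(20))                 # a finite universe, arbitrary choice
comp = lambda X: U - X             # complement relative to U

random.seed(0)
for _ in range(1000):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    C = {x for x in U if random.random() < 0.5}
    assert comp(B | C) == comp(B) & comp(C)    # (1) complement of the union
    assert comp(B & C) == comp(B) | comp(C)    # (2) complement of the intersection
    assert A - (B | C) == (A - B) & (A - C)    # (3) relative version of (1)
    assert A - (B & C) == (A - B) | (A - C)    # (4) relative version of (2)
print("all four identities held on every random trial")
```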
When I was in graduate school, I was a TA for this guy named Paul Sally who was a fantastic teacher and really loved math, who would end-- So amazing story about this guy is, when I was his TA, he was in his 70s, I think. But he had also had diabetes. So he had lost both of his legs beneath his knees. He was also legally blind. And he had a patch over one eye. So he himself often referred to himself as the a pirate mathematician. But he would end his proofs with-- at least in his textbook-- he didn't ask me to do this on the board, thankfully. He would end his proofs with a picture of himself with this cob pipe that he had, very much in the pirate fashion. Anyways, OK, moving on from things that end proofs, let's go on to a next subject, induction. So induction is a way to prove theorems about natural numbers, OK? The theorem itself is more of a tool rather than an interesting fact on its own, OK? So let me state the theorem. And then we'll go over a couple of examples on how to use induction. So let me recall from-- I think I just erased it. N is the natural numbers. And it has an ordering, meaning-- so we'll precisely define what ordering means. But just in your head, this means the usual thing-- 1 is less than 2 is less than 3 is less than 4. So a property of the natural numbers, which will take as an axiom, is the well-ordering property. So an axiom is not something you prove. You assume this about the objects that you've defined or are studying up to this point. And so the statement is if I take a subset of natural numbers, which is non-empty, then S has a least element or smallest element. Now, what does this mean? Let me write this last statement out. i.e. There exists an x in S-- "st" I will often write, meaning such that or so that-- such that x is less than or equal to y for all y in S, OK? So every non-empty subset of the natural numbers has a smallest element, OK? We're going to take that as an axiom, as just a property of the natural numbers, which we'll assume. Now, using this axiom, we're going to prove-- it's not really often you hear it called as a principle of mathematical induction, but this will state it as a theorem instead of a principle, whatever a principle is supposed to be. So induction, so this is due to Pascal. Or at least in its first rigorous formulation is let Pn be a statement depending on natural number n. OK, so maybe we have some equality between two quantities that involves a natural number n, OK? That could be our statement P of n. Now, we're going to assume-- so what are our hypotheses about this statement? What's our if? Assume that this statement satisfies two properties. This first property is usually referred to as a base case. That is that P of 1 is true. And the second property is called the inductive step. So this statement satisfies the following property that if you assume P of m is true, then you can prove that P of m plus 1 is true. So I have a statement which satisfies both of these properties, OK? In particular, since I'm assuming P of 1 is true, by the second property, P of 2 is true. And then again by the second property, P of 3 is true. And then P of 4, and then P of 5. And so if you followed that last line of reasoning, this means you should be able to guess what the conclusion of this theorem is. Then P n is true for all natural numbers, OK? All right, so we're going to use the well-ordering property of the natural numbers to prove this theorem about the induction. OK, so we have our assumptions. 
I'm not going to-- although, I said over there let B, C be sets, I'm not going to rewrite the assumptions that we have about our statement P. We're just going to start trying to prove P of n is true for all n. So let me write our conclusion slightly differently. Let S be the set of all natural numbers such that P of n is not true. So what I want to show is that P of n is true for all n. So that's equivalent to saying we want to show that S is empty, OK? The set of natural numbers where P of n is not true, this is empty. This is equivalent to saying P of n is true for all n. And the way we're going to do this is another staple of mathematical proofs is trying to prove this by contradiction, OK? So what does that mean? Let me make a few comments about what that means, proof by a contradiction. OK, so in a proof by contradiction-- so this is-- what I'm about to write down is not part of the proof. This is commentary not to be included in the proof. What does it mean to say we're going to prove S is equal to the empty set by contradiction? We're going to assume that the statement we want to prove is false. Or not false, but we want to assume that the negation of the statement we want to prove is true and then arrive at a false statement, OK? So we want to assume-- this is what we're going to do. We're going to assume the negation of the statement we want to prove-- namely, S is non-empty, OK? And from this, we want to derive a false statement, OK? And so if we are to do-- if we were able to do that, then-- let me just say, again, you can check in the appendix or you can just believe me that the rules of logic then say that our initial assumption, that S was not empty, is false to begin with, OK? So rules of logic, meaning I cannot start from a true assumption and derive, in a logically consistent way, a false statement, OK? That is, if we believe that the rules of logic we're using are consistent, which that's a little bit hairy to talk about. But for our purposes of our class, you can believe me that the rules of logic we use-- or at least accept that the rules of logic we're going to use are consistent and sound. OK, so back to the proof at hand. We have this set of natural numbers where the statement is not true. We want to show it is empty. We're going to do it by contradiction, meaning we're going to assume the negation of the statement we want to prove-- namely, S is non-empty. And we're going to derive a false statement from that assumption, OK? And by the rules of logic-- that means that our initial assumption-- that S is non-empty-- is, in fact, false, OK? All right, so towards a contradiction, suppose that S is non-empty, OK? Now, we're going to use the well-ordering property of the natural numbers. By the well-ordering property of the natural numbers, S has a least element, x, OK? Now, what do we know about x? So first off, x cannot be 1, OK? S is a set where this property does not hold. x cannot be 1 because-- let me again rewrite this fact that S has the least element. Let me just reiterate that S has a least element in the set, OK? Now, x cannot be 1 because we're assuming the base case, meaning P of 1 is true. So since P of 1 is true, that means 1 is not an S, which means x is not 1. In particular, x must be bigger than 1. So x is some magical natural number out there bigger than 1 that's the least element of this set S. OK, since x is the least element of S-- so let me draw. On the number line, we have 1, 2, 3, 4. 
Out there is some magic point x, which is the least element of S. And the rest of the subset S lies to the right of this number x, right? Because it's the least element of S. And therefore, x minus 1 cannot be in S. So since x is the least element of S and x minus 1 is less than x, this means that x minus 1 is not in S. Otherwise, it would be a smaller element than x in S. So thus, what does it mean to not be in S? It means that P of x minus 1 is true. By the definition of S, this means P of x minus 1 is true, OK? But by the second property we're assuming about our statement P, this means that the next guy in line, x minus 1 plus 1, is true, which is just x, which means that x is not in S, OK? So from the assumption-- so let me just recap. From the assumption that S is non-empty, we've derived two facts. 1, x has the least element in S. And that element is also not in S. So written out, we've concluded there exists a natural number which is both in S and not in S. And this is a false statement. You cannot have an object and member that's both in the set and not in the set, OK? And at the end of contradiction arguments, I'll usually put two arrows hitting each other. So that's a contradiction. Therefore, our initial assumption that S is non-empty has to be false. And therefore, S is the empty set, OK? So I encourage you to go through that proof a little slowly because maybe you got turned around by taking the complements or the general scheme of how a proof by contradiction works. But don't spend too much time on it because, as I've said, this theorem itself and its proof are not the thing we're really interested in. Or at least, it's not the most interesting. It's more of a tool that we'll use to prove more interesting statements. OK, so how do we actually use this theorem, induction, to prove other statements? So I guess I should include this here. This falls under the umbrella of logic, meaning we're going to approve previous-- we're going to use previous statements we've proven to prove new statements. But anyway, so how do we use induction in practice? So if we want to prove some statement-- for all n, Pn is true-- in the print then, this theorem about induction-- this theorem of induction-- tells us we just have to do two things, OK? We have to prove the base case. And this is usually easy. You just stick the number 1 into the statement that you want to prove. And that's the end of the story. And the second step is usually-- or the second thing we have to prove is the more involved part, which is we have to prove that the statement that if PM is true, then P of m plus 1 is true, OK? If we want to do a proof by induction, there's two smaller proofs we have to do. First, we have to prove P of 1 is true. And then we have to prove this statement. If P of m, then P of m plus 1. So again, this is usually referred to as the base case. This is the inductive step. So let's try and actually do this. All right, so-- so another question I get at the beginning of a course, especially about proofs-- because there's a lot of uncertainty about what you can assume is true, what can you use, what can you not use, right now, at this point, you can use whatever you know about any of the algebraic properties, you know about the real numbers, the rational numbers-- and by algebraic properties, I mean if a plus b equals c, then a plus b times d equals c times d-- and what you know about inequalities. So we're going to go much more in depth into ordering, which is what inequality is a part of. 
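(Editor's aside, not from the lecture: the induction proof above turns on the set S of counterexamples having a least element. The sketch below mimics that idea computationally: it searches, up to some bound, for the smallest n where a property fails. The bound and the two sample properties are my own choices, not from the lecture.)

```python
def least_counterexample(P, N):
    """Return the smallest n in 1..N with P(n) false, or None if none is found.

    This mirrors the set S = {n : P(n) is not true} in the proof: if S is
    non-empty (at least within the bound N), it has a least element."""
    for n in range(1, N + 1):
        if not P(n):
            return n
    return None

# P(n): 1 + 2 + ... + n == n(n+1)/2 -- true for every n, so nothing is found
print(least_counterexample(lambda n: sum(range(1, n + 1)) == n * (n + 1) // 2, 500))

# Q(n): n^2 + n + 41 is prime -- famously first fails at n = 40
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
print(least_counterexample(lambda n: is_prime(n * n + n + 41), 500))
```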
But you can use all the properties you know about solving inequalities or manipulating inequalities, meaning if I have one number is less than or equal to another number, then when I multiply both sides by a positive number, that doesn't change the inequality. So you can use all of these algebraic properties of rationals and real numbers from here on out. I mean, so we're going to be proving things about calculus. So you certainly cannot use anything about continuity, differentiation, or anything like that. But for now, you can use all the algebraic properties you know. So the first statement we're going to try and prove using induction is the statement that for all c not equal to 1, for all n, a natural number, 1 plus c plus c squared plus all the way up to c to the n equals 1 minus c to the n plus 1 over 1 minus c, OK? So this here is our statement P of n. It depends on the natural number n, OK? So we're going to do this by induction, which means we're going to do those two things. We're going to prove the base case, P of 1, which I said is easy. And then we're going to prove the second case, the second property, the inductive step, which is a little more involved, but not so much involved, at least in the beginning. So let me call this equality star. We're going to prove star by induction. So first, we will do the base case. And like I said, the base case is usually you just plug in n equals 1 and verify that P of n is true. And that's what we do. 1 plus c to the 1, which is the left-hand side, does, in fact, equal 1 minus c to the 1 plus 1 over 1 minus c, because this right-hand side-- 1 minus c squared is 1 minus c times 1 plus c-- the 1 minus c's cancel. And so the base case is proven, all right? Now, we do the inductive step, OK? So we're going to assume that star holds for n equals m. So we're going to assume P of m. So assume that 1 plus c plus all the way up to c to the m equals 1 minus c to the m plus 1 over 1 minus c, OK? Now, we want to show-- again, let's write out what we want to do, what's our plan. We want to prove that this equality, that this line star holds for n equals m plus 1, OK? So again, what I wrote here, this is basically star for n equals m, OK? And let me call this second equality 2 star. So this is my assumption for n equals m. OK, so let's take the left side for n equals m plus 1 and see if we cannot massage it to get the right-hand side for-- I should say the right-hand side for n equals m plus 1. So here is the calculation part. So we have 1 plus c plus c to the m plus c to the m plus 1. This is the m plus 1 case of the left-hand side of star, which we want to show is equal to the n equals m plus 1 case of the right-hand side. Now, this is equal to-- now, we already know what this is equal to by our assumption. This is by the second star there, which is what we're assuming is true. This is equal to 1 minus c to the m plus 1 over 1 minus c, plus c to the m plus 1. And so now, we just do a little bit of algebra. This is equal to 1 minus c to the m plus 1 plus c to the m plus 1 minus c to the m plus 2, all over 1 minus c. Those c to the m plus 1 terms cancel, and I'm left with 1 minus c to the m plus 2 over 1 minus c. And I'll write it just so that you can see this is really the m plus 1 case, all right? So again, we arrived at this first step by our assumption, the second starred equation, OK? So thus, star holds for n equals m plus 1. So by induction-- or really, I should say the theorem of induction that we proved-- our equality between those two objects, or two expressions, is valid for all n, OK? OK.
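(Editor's aside, not from the lecture: a quick numerical spot check of the starred equality we just proved, for a few sample values of c not equal to 1 and a few n.)

```python
def lhs(c, n):
    return sum(c ** k for k in range(n + 1))       # 1 + c + c^2 + ... + c^n

def rhs(c, n):
    return (1 - c ** (n + 1)) / (1 - c)

for c in [2.0, 0.5, -3.0, 1.0001]:
    for n in [1, 2, 5, 10]:
        assert abs(lhs(c, n) - rhs(c, n)) <= 1e-9 * max(1.0, abs(rhs(c, n)))
print("geometric sum identity checked on the sample values")
```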
OK, so let's do one more example of using induction. So let's prove if c is a real number bigger than or equal to minus 1, then for all n, a natural number, 1 plus c to the n is bigger than or equal to 1 plus n times c, OK? All right, so we're going to do this by induction again. That means we need to prove the base case and we need to do the inductive step. So base case, as always, will-- so this is just right here. We're going to do this by induction. So as you can see, the base case is, again, n equals 1 is clear just by looking at it. 1 plus c to the 1, in fact, equals 1 plus 1 times c. So it's certainly bigger than or equal to 1 plus 1 times c. So I think that's the last stars I'll use for this lecture. So our statement, our inequality star star star, holds for n equals 1. All right, so that's our base case. Now, we're going to assume that this inequality holds for n equals m and try to prove that it holds for n equals m plus 1. So we're assuming this when n equals m. So 1 plus c to the m is bigger than or equal to 1 plus m times c. And we want to prove this inequality with n equal to m plus 1. And we're just assuming this guy, OK? So I want to get the statement for n equals m plus 1. One way to do that is this left side. I want to get-- let's look at the n equals m plus 1 side and see what we can do with it. So again, this is a calculation part and logic. So we have 1 plus c to the m plus 1. So that's the n equals m plus 1 side of this. This is equal to 1 plus c times 1 plus c to the m. Now, we're assuming, again, this inequality. This is the n equals m case. So we can use it. So we're assuming it. We use it. And since c is bigger than or equal to minus 1, 1 plus c is non-negative. So this thing is bigger than or equal to this thing. So if I multiply both sides by 1 plus c, I preserve the inequality. So this is bigger than or equal to 1 plus c times 1 plus mc, OK? Again, this just follows from essentially the assumption multiplied through by 1 plus c, OK? So now, I'm going to finagle this. So let me just-- I'm not doing anything different here. I'm just going to rewrite this over here so that I can have a chain of inequality. So I have 1 plus c to the m plus 1 is greater than or equal to 1 plus c times 1 plus mc. All right, so now, this is bigger than or equal to this. And this here-- so when I write equal, I do not mean that this left side is now equal to what I'm about to write here. That means the previous thing on here is equal to what I'm about to write here, OK? This is a typical fashion in writing down inequalities-- or I guess, practice. So this is equal to 1 plus-- so just doing the algebra-- m plus 1 times c plus m times c squared, OK? Now, this part is exactly the n equals m plus 1 side of this. And I have a little room to give because now this is plus something that's non-negative. So let me just rewrite this again. This means that 1 plus c to the m plus 1 is greater than or equal to 1 plus m plus 1 times c plus m times c squared-- so again, I'm kind of writing a lot here. I will stop writing as much as the course goes on. But I encourage you, especially in the beginning, to write all the steps and logic, OK? So again, I'm not rewriting anything. I'm just summarizing what I've done here. Now, this right-hand side-- so I have this as bigger than or equal to this. And this right-hand side, since I have a number plus something non-negative, m times c squared-- m's a natural number-- this is bigger than or equal to 1 plus m plus 1 times c. Thus, 1 plus c to the m plus 1 is greater than or equal to 1 plus m plus 1 times c, which is the n equals m plus 1 case.
So by induction, this inequality triple star holds for all n. All right.
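(Editor's aside, not from the lecture: the same kind of spot check for the inequality just proved, Bernoulli's inequality, on a handful of values of c greater than or equal to minus 1.)

```python
for c in [-1.0, -0.5, 0.0, 0.1, 2.0, 10.0]:        # all satisfy c >= -1
    for n in range(1, 50):
        assert (1 + c) ** n >= 1 + n * c - 1e-12   # tiny slack for float rounding
print("Bernoulli's inequality held on every sampled (c, n)")
```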
MIT_18100A_Real_Analysis_Fall_2020
Lecture_14_Limits_of_Functions_in_Terms_of_Sequences_and_Continuity.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: Last lecture, we introduced the notion of the limit of a function as x goes to c, which we write limit x arrow c, f of x equals L. What does this mean? This means for all epsilon positive, there exists a delta positive such that for all x in S satisfying 0 is less than x minus c is less than delta, we have that f of x minus L is less than epsilon. And we proved the following theorem last time, that if we have a set, a cluster point of S-- so this is where we look at limits and f as a function from S to R-- then the limit as x goes to c of f of x equals L if and only if for every sequence x sub n converging to c, we have f of x n, the new sequence, converges to L. So this theorem here connects limits of functions to what we did previously, limits of sequences. Now, using this theorem, we'll get analogs of theorems we proved for sequences, but now for limits. First, let me show a few simple applications of this theorem. So for example, we could prove the following-- that for all c in R, the limit as x goes to c of x squared equals c squared. So we could have done this using just the definition, but now with this theorem, we can prove it a little bit quicker and easier, because we essentially did all the hard work when we proved that the product of two convergent sequences is convergent, which is what we'll use here. So we're going to prove this theorem using the previous theorem. So let x n-- so for this theorem, since it's not stated, you should take the function to be f of x equals x squared. And then the set that is defined on S is equal to R So let x n be a sequence such that x n converges to c as n goes to infinity. We now want to show that f of x n converges to f of c, where f of x equals x squared. But this follows from what we proved for limits. So we proved the product of two convergent sequences is convergent. By earlier theorem about the product of convergent sequences, we get that x n squared converges to c squared, which implies-- so we've now verified for every sequence converging to c, f of x n, meaning x n squared, converges to c squared. And therefore by the previous theorem, we've proven the claim. So now let's use this term to study a couple of more limits. And we use this theorem, in fact, to show that a certain limit does not exist. So the limit as x goes to 0 of sine 1 over x-- this limit does not exist. So remember for a limit, I look at all those points x that are close to c, but not equal to c. So this function, which I'm taking the limit as x goes to 0 of, doesn't need to be defined at x equals 0 in order to consider the limit. And if you like, the function sine of 1 over x is defined on S equals R take away 0. So this limit does not exist, but the limit as x goes to 0 of x times sine of 1 over x does exist and equals 0. Now I want you to note something-- you can't just stick in x equals 0 here to evaluate the limit. Because then you'll be taking sine of 1 over 0 times 0. You can't divide by 0, so can't just say this limit equals 0 because I stick in x equals 0. In fact, we saw in the previous lecture that the limit need not equal the function evaluated at that point. So just to recall the example from last time, when I had f of x equals 1 if x equals 0 and 2 if x does not equal to 0, then we showed that limit as x goes to 0 of f of x equals 2, which note, does not equal f of 0. When we discuss continuity, that's what connects the limit to the function evaluated at that point. 
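(Editor's aside, not from the lecture: a small numerical illustration of the sequence characterization used in that proof. Take sequences x_n converging to c and watch f(x_n) = x_n squared approach c squared; the particular sequences are my own choices.)

```python
c = 3.0
f = lambda x: x * x

sequences = {
    "x_n = c + 1/n":          lambda n: c + 1.0 / n,
    "x_n = c - (-1)^n / n^2": lambda n: c - (-1.0) ** n / n ** 2,
}
for name, x in sequences.items():
    errors = [abs(f(x(n)) - c * c) for n in (10, 100, 1000, 10000)]
    print(name, ["%.2e" % e for e in errors])    # the errors shrink toward 0
```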
I just want to make that little comment. To prove this theorem, we're going to use, again, the previous theorem. Maybe I should-- no, I'm not going to label it. You'll know when I point up to it that I'm referring to that theorem that I stated up there. So in fact, let's prove the second limit exists and equals 0 first. So suppose x n is a sequence converging to 0. We now want to show that x sub n times sine of 1 over x sub n converges to 0. And then the previous theorem up there will imply that the limit as x goes to 0 of x times sine of 1 over x equals 0. Now, if I look at the absolute value of x sub n times sine of 1 over x sub n, this is equal to the absolute value of x sub n times the absolute value of sine of 1 over x sub n. And no matter what you stick into sine, sine is always bounded between 1 and minus 1. So the absolute value is always bounded by 1. So just to summarize, we've shown that-- and now we apply the squeeze theorem. So that goes to 0. It's just the constant sequence equal to 0. Since x sub n is converging to 0-- remember we're assuming that-- this converges to 0. And therefore, what gets trapped in between goes to 0. So by squeeze theorem, x sub n times sine of 1 over x sub n converges to 0. And that proves the second claim So now we'll prove 1. And in the previous lecture, I did negate this definition. So let's actually negate this theorem, if you like, or use the negation of each side of this if and only if to state an equivalent theorem. So two statements are equivalent, which is in that theorem, if and only if their negations are also equivalent. So by the theorem, we also have the following fact-- limit x goes to c of f of x is not equal L if and only if there exist-- so negating the right-hand side of that if and only if-- there exists a sequence x n converging to c-- so this sequence will-- consisting of elements of S take away c, such that x n converges to c, but we don't have f of x n converging to L. And when I write this, you should read this as either this limit exists and does not equal L or this limit does not exist. So I'll even write that out-- and either limit does not exist or does not equal L. So again, that's an equivalent way of stating the theorem up there is in terms of the negation. Two statements are equivalent, which is right there, if and only if their negations are equivalent. So now, we're going to prove that the limit as x goes to 0 of sine of 1 over x does not exist by showing that there exists a sequence converging to 0 such that when I plug that in to sine, the limit of that sequence does not exist. So to show there exists a sequence converging to 0 such that limit as n goes to infinity of sine of 1 over x sub n does not exist. Now, sine oscillates between 1 and minus 1 depending on a certain multiple of pi over 2. So here's the intuition behind it, behind the sequence I'm about to give you-- note that sine of x equals 1 if x equals pi over 2, 5 pi over 2, 9 pi over 2, and so on, minus 1 if x equals 3 pi over 2, 7 pi over 2, 11 pi over 2, and so on. So I can stick in things that are getting bigger to sine and get 1 or minus 1. And in fact, let me change this to y, since we're using x to be essentially 1 over y. So if I stick in pi over 2, 5 pi over 2, 9 pi over 2, and so on, I get sine equals 1. If it's 3 pi over 2, 7 pi over 2, 11 pi over 2, and I stick that into sine, I get minus 1. But that means if I stick 1 over these numbers into sine of 1 over x, I get 1 or minus 1. 
And so then that sequence that I get will be 1 or minus 1 alternating. And we know that sequence does not converge. So that's the idea. So let me write that down. Let x sub n be 1 over these numbers, essentially, so 2n minus 1, pi over 2, minus 1, because we're going to stick this into sine of 1 over x, which is 2 over 2n minus 1 over pi. Now, note for all n, x sub n is less than or equal to-- I can write this as 2 over n plus n minus 1 pi. And since n is bigger than or equal to 1, this is always bigger than or equal to 0. So this is less than or equal to 2 over n pi. And this goes to 0. So that shows by the squeeze theorem that this sequence I've defined here converges to 0. But what happens when I plug this sequence into sine of 1 over x? This is now equal to sine of 2n minus 1 pi over 2. And this is therefore equal to 1 minus 1, 1, minus 1, 1, minus 1. And this sequence, which is just equal to minus 1 to the n plus 1-- I don't know why I capitalized that-- does not converge. So we found a sequence converging to 0 such that when I stick it into the function, that new sequence does not converge. So we've proven that this limit does not exist. So I alluded to this fact that this theorem will give us theorems that are similar to what we proved for sequences, except now for limits of functions. So let me just state the simplest theorem you can get. So let S be a subset of R, c a cluster point of S, and suppose I have two functions f going from S to R and g going from S to R. If these two limits exist, and one function is smaller than the other, then-- so we had an analogous statement for sequences, which was if I have two sequences converging and one is less than or equal to the other, then one limit is less than or equal to the other. And it's an analogous conclusion for limits of functions. So again, the analogous statement for sequences was we have two sequences, one less than or equal to the other, which then the limit of the smaller sequence is less than or equal to the limit of the bigger sequence. So let's give the proof. And we'll use this theorem connecting limits of functions to limits of sequences. And then we'll use the corresponding statement which we do have for sequences, which I just stated a couple of times. So let L1 be the limit as x goes to c of f of x, and L2 the limit as x goes to c of g of x. And what we want to show is that L1 is less than or equal to L2. Let x n be the sequence, and S takeaway c such that x n converges to c. Such a sequence exists because c is a cluster point of s. And you proved in the assignment that if I have a cluster point of S, then there exists a sequence in S takeaway c that converges to c. Now, since-- say it this way-- by the previous theorem, this limit equals L1, this limit equals L2 if and only if for every sequence converging to c, f of x n converges to L1, g of x n converges to L2. By the previous theorem, we then conclude that L1 is equal to the limit as n goes to infinity-- actually getting ahead of myself a little bit. So let's pause right there and reset. So now we have the sequence converging to c. Then by the assumption here for all n, f of x n is less than or equal to g of x n. And since f of x n converges to L1, and g of x n converges to L2, we get by, again, this theorem about sequences, which says if I have two sequences, f of x sub n-- this is one sequence is less than or equal to another sequence, g of x n-- and they both converge, then the limits satisfy the same inequality, which is what we wanted to prove. 
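(Editor's aside, not from the lecture: returning for a moment to the sine of 1 over x example above, here is a numerical look at both claims. Along x_n = 2 over (2n minus 1) pi, the values sine of 1 over x_n just alternate between 1 and minus 1, while x times sine of 1 over x is trapped by the absolute value of x and squeezed to 0.)

```python
import math

def x_n(n):
    return 2.0 / ((2 * n - 1) * math.pi)    # the sequence used in the proof

print([round(math.sin(1 / x_n(n)), 6) for n in range(1, 7)])
# -> [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]: sin(1/x_n) does not converge

print([abs(x * math.sin(1 / x)) for x in (0.1, 0.01, 0.001, 0.0001)])
# -> each value is at most |x|, so the products go to 0, as the squeeze theorem says
```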
So like I said, this theorem here follows from the analogous-- in fact, I should have written this out, or in fact, I will write it out now. So the analogous statement for sequences was if for all n, a n is less than or equal to b n, then the limits as n goes to infinity, assuming both limits exist, satisfy the same inequality. So this is an analogous theorem to this theorem, which we had for sequences. And we used this theorem from sequences to prove it. Now, following that same philosophy, you can prove analogous statements for functions, limits of functions, as you did for sequences. You get these for free. And instead of stating all of the theorems we did for sequences except now for limits of functions, I'm just going to quickly say you get the same thing. So by using the previous theorem, which connects convergence of functions to convergence of sequences, we have analogous theorems for-- and let me state it this way-- for limits of functions now. And for example, you have the squeeze theorem, namely if I have-- so just talking this out-- if I have three functions, say f is less than or equal to g is less than or equal to h, and f of x converges to L and h of x converges to L as x goes to c, then g of x converges to L. That's what I mean by analogous statement. You also have theorems about algebraic operations and limits, meaning that if I have two functions that have limits as x goes to c, then f plus g will have a limit as x goes to c. And the limit of the sum is the sum of the limits. Same thing with the product, and same thing with the quotient, assuming the limit on the bottom is nonzero. And then similarly, we also have a theorem about the absolute value and limits, namely if f of x converges to L as x goes to c, then the absolute value of f of x converges to the absolute value of L as x goes to c. So you have all of these analogous statements, or theorems that are analogous to the statements from what we did for sequences, but now for limits of functions as x goes to c. And I'm not going to state them all. You can see this in the textbook. Maybe the proof of some of them I'll give as exercises. Now let's separate the notion of a limit of a function from that of a sequence just a little bit. So unlike when we talk about limits of sequences, here we're letting a point x converge or get close to a point c, but there's two ways it can get close to c on the real number line. It can converge to c from the left or it can converge to c from the right. And this leads to the notion of left and right limits of a function. So we'll start the definition here and we'll go to the next board-- so let S be a subset of R, and suppose c is a cluster point of the interval minus infinity to c, intersected with S. So what I'm defining now is the notion of a function converging to something as x goes to c from the left. That's why I'm looking at only S intersect minus infinity to c. We say f of x converges to L as x converges to c from the left-- which we indicate by putting a minus sign up here-- if a condition similar to the definition of the limit holds, but now we only look at x getting close to c from the left: if for all epsilon positive there exists a delta positive such that for all x in S satisfying c minus delta is less than x is less than c-- so it's close to c but to the left of c-- we have that f of x minus L is less than epsilon. And in this case, we write the limit as x goes to c with the minus sign up top, f of x equals L.
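(Editor's aside, not from the lecture: one crude way to get a feel for a one-sided limit is to sample the function at points approaching c strictly from one side. The helper and the sign-style example below are hypothetical, my own choices; the right-sided limit, defined next in the lecture, is sampled the same way from the other side.)

```python
def sample_one_sided(f, c, side, steps=6):
    """Evaluate f at points approaching c strictly from the given side."""
    hs = [10.0 ** (-k) for k in range(1, steps + 1)]
    xs = [c - h for h in hs] if side == "left" else [c + h for h in hs]
    return [f(x) for x in xs]

g = lambda x: -1.0 if x < 0 else 1.0        # a function that jumps at 0
print(sample_one_sided(g, 0.0, "left"))     # all -1.0: suggests the left limit is -1
print(sample_one_sided(g, 0.0, "right"))    # all  1.0: suggests the right limit is 1
```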
And then we have an analogous definition for converging to or taking a limit of a function as x goes to c from the right. If c is a cluster point of c comma infinity intersect S-- so now we're just taking the part of S that's to the right of c-- then we say that f of x converges to L as x converges to c plus, meaning as x converges to c from the right, if for all epsilon positive there exists a delta positive such that for all x in S satisfying c is less than x is less than c plus delta-- so now it's close to c but to the right of c-- we have that f of x minus L is less than epsilon. And similarly to the notation up there, we write limit as x goes to c plus of f of x equals L. Now, just like we proved this theorem for limits, you can state and prove an analogous statement for one-sided limits. So such a statement would be, for example, limit as x goes to c minus of f of x equals L if and only if for every sequence x sub n satisfying x sub n is less than c converging to c, we have f of x n converges to L. So these two just describe how f behaves near a point c if we're just looking to the left of c or to the right of c, but not at c. So let's, for example, look at-- I think this is usually referred to as the Heaviside function. So f of x equals 0 for x less than or equal to 0, and 1 if x is bigger than 0. So the graph is like that. And why do people care about this function? Well, in a certain sense, if you take the derivative of this function, you get what's called the Dirac delta function, although that's not a function. That's a distribution. But that's why this function has a name attached to it, because if you take its derivative, you get something somewhat special. Anyways, we're not even at derivatives. We're not even going to talk about distributions in this class. So let's get back to one-sided limits. So if I look at this function for x close to 0 from the left, f is just 0. So in fact, since I'm only looking at x to the left of 0 in this limit, this is just plugging in x less than 0, so I get 0. And that's just 0. And if I look at this function from the right, to the right of 0, then f of x is just 1. It's just 1 identically. And so I get-- and it's, again, although I haven't shown that one-sided limits of constants equal the constant, I think that should be something you can easily believe or write out yourself. So for this function, we see that it does have both one-sided limits at 0, except those limits don't equal each other. And they certainly don't both equal-- so this one does equal f of 0, but I could have made f of 0 to be a 1/2, and then this still would have been 0, and not equal to the function evaluated at the point. So again, I'm making this point that for limits, just limits, it does not matter what the function is doing at the point. A limit only cares about how a function behaves near a point. One-sided limits augment that by saying we're only going to care about the function near the point and to the left for the left limit, and to the right for the right limit. So I didn't say that, but this we call the left limit, this we call the right limit, simply because we're getting close to c from the right and from the left.
That way I can talk about the left and right limits of f at c. Then-- so first off, if c is a cluster point of any one of these sets, it's going to be a cluster point of the set S. So we can actually look at the limit. So then the limit as x goes to c of f of x exists and equals L if and only if the limit as x goes to c from the left of f of x equals the limit as x goes to c from the right of f of x equals L. So this kind of looks like the theorem we proved about lim sup and lim inf, but they don't have any connection. If you want to make some connection between the way this theorem looks and the statement of the theorem for lim inf and lim sup, this is kind of saying that the limit of a function equals L if, as we approach from the left or the right, the function f approaches L. And for the lim sup lim inf guy that we did for sequences, you could take that as saying that the limit of a sequence equals L if and only if following the sequence from below, that approaches L, and following the sequence from above, that also approaches L. So there's kind of two directions there, just as there's two directions here, but not really. So let's give a quick proof of this theorem. It's not difficult. It follows almost immediately from the definitions. In fact, I'm going to do only one direction. So this direction should-- assuming this, and proving this should be pretty clear. If I have this, this means that if I want to be close to L, I just need to be close to c. And therefore, it doesn't matter if I'm close to c from the left or right. I'll be close to L. And going this direction, this says I just need to be close to c from the left, sufficiently close, and I need to be close to c from the right, sufficiently close, to be close to L. So really, there's not a lot of trickiness in the proof. It's just writing these things out. And so I'm just going to write out one direction and leave the other direction to you. So let's assume that the left limit equals the right limit equals L. And now we want to show that the limit as x goes to c of f of x equals L. So let's go back to the definition. Let epsilon be positive. We want to be able to find delta so that f of x is within epsilon of L if x is within delta of c. And what's the point? Here's c and here's L. So this is the picture that goes along with this. Assuming these two limits equal L, I know that there exists a delta 1 so that if I'm in this interval, then-- here's L plus epsilon, L minus epsilon-- then if I'm within delta 1 of c and to the left, then f will be close to L in that interval. And then since the limit as x goes to c from the right equals L, there exists some delta 2 so that if I'm in this interval then I'll be, again, close to L. But this means that if I choose the smaller of these two, and I look at the whole interval, then f will be close to L on the whole interval. And that's it. So let epsilon be positive. Then since the limit as x goes to c minus of f of x equals L, there exists delta 1 positive such that if c minus delta 1 is less than x is less than c, then I get that f of x minus L is less than epsilon. And similarly for the right limit, since the limit as x goes to c plus of f of x equals L, this implies, by the definition, there exists a delta 2 positive such that if c is less than x is less than c plus delta 2, then I get f of x minus L is less than epsilon.
Now choose delta to be the minimum of delta 1 and delta 2. And then we'll now show that this delta works. So now we're going to show this delta works. So if less than delta, then this implies that either x is in c minus delta-- so if we take something close within delta to c, and delta is the minimum of these two distances, then-- for the sake of this picture, let's say delta 1 equals delta 2, so now I'm looking at this interval-- then two cases. Either x is in c minus delta c, which is a subset of c minus delta 1 c since delta is the minimum of those two deltas, which implies by the first inequality here for delta 1 or x is in c, c plus delta, which is a subset of c, c plus delta 2, which by our choice of delta 2 gives us that. Thus we've shown that if x minus c is less than delta, then f of x minus L is less than epsilon. That's the end of the proof. So I've said this over and over again-- limits of functions don't care about what the function is doing at the point. It cares about what the function is doing near the point. Now we're going to discuss the notion of continuity, which connects the limit of a function at a point to the function. So it connects how a function behaves near a point to the function evaluated at the point. And so you can even write this down. How a function behaves near a point compared to-- so near, at. And you'll see when I write down the definition, basically, what's staring you in the face is that the definition of continuity is that the limit as x goes to c of f of x equals f of c. So we had these examples where the limit-- I think already erased it, but where the limit exists but does not equal f of c. And here for continuity, the notion of continuity is that the limit as x goes to c of f of x actually equals the function evaluated at that point. So we have the following definition. Let S be a subset of R and c an element of S. We say f is continuous at c if for all epsilon positive, there exists a delta positive such that for all x and S satisfying x minus c is less than delta-- so in particular, I can now, for example, x equals c will satisfy this inequality. I don't have the 0 is less than that. So for all x and S which are close to c within delta, I have that f of x will be-- with an epsilon of f of c. So in this case, just a little f is continuous at every point on its domain that we're considering. We just say f is continuous. So for a function to be continuous at a point nearby x, x being near c, should mean that f of x should be near f of c. And let's go through some examples. Remember, whenever you get a definition, you should look for examples and then potentially negate it. We'll negate this definition in just a second to show that a function I wrote down a minute ago is not continuous. So the affine function f of x equals a times x plus b-- so S is R, so x is a real number-- is a continuous function, meaning it's continuous at every real number c. So let's prove this. Let c be an element of R. We want to show f is continuous at c. So we have to go through the definition. Let epsilon be positive. Choose delta to be epsilon over 1 plus the absolute value of a. And last time, in the previous lecture, I gave the intuition on why you would choose this delta based on the function and epsilon. I did a computation here. I'm just going to choose delta this way, and you'll see that it works. So now we have to show this delta works. If x minus c is less than delta, we should be able to now show that f of x minus f of c is less than epsilon. 
This is equal to a x plus b minus a c plus b. So this is equal to a times x minus c, which equals absolute value of a times absolute value of x minus c. This is less than delta. Absolute value of x minus c is less than delta times a, which equals the absolute value of a over 1 plus the absolute value of a times epsilon, which is less than epsilon. Because a number over 1 plus that number is always less than 1. Maybe you were wondering, why didn't I just choose delta to be epsilon over the absolute value of a? This is just a smidgen of sophistication, that what happens if the absolute value of a is equal to 0? Then we would have divided by 0. So adding a 1 there takes care of that. So this guy is continuous at every c. So this function is continuous. How about a function that's not continuous at a point? Make sure this is the next topic. Here is a non-example. The function f of x, which equals 1 if x equals 0, 2 if x is not equal to 0. That's the function. f is not continuous at 0. So to prove this, let's negate the definition of continuity. So the negation of continuity is-- so f is not continuous at c if-- so the "for alls" become "there exists," and "there exists" become "for all." So if there exists some bad epsilon so that for all delta positive, there exist an x such that x minus c is less than delta, and we do not have the second inequality, f of x minus f of c is bigger than or equal to this bad epsilon. Now for this guy, it's pretty clear which x to choose. So let's think this out for a minute. There should be some bad epsilon 0 so that if I take any small interval around 0, I can find a point in this interval so that f of x minus f of c is going to be bigger than or equal to epsilon 0. Now here, f of c is f of 0, which is 1. Now what would be the bad epsilon so that f of 1 is greater than distance 1 or greater than distance epsilon 0 to f of x for some x in this interval? Well, if I take any x in this interval other than 0, and stick it into f, I'm going to get 2. And that's greater than or equal to distance 1 to f of 0. So epsilon 0 I will choose to be 1. So now we want to prove that f, this function here, is not continuous at 0. So I'll tell you what the bad epsilon is. Choose epsilon 0 equals 1. So now we have to show this bad epsilon 0 is indeed bad. Let delta be positive. We have to now find a number in this interval so that f of x minus f of 0 is bigger than or equal to 1. And like I said a minute ago, if you take any x in this interval other than 0 and stick it into f, I get 2, which is distance 1 to f of 0. Let x to be delta over 2, say. Then x minus 0 is less than delta. It's actually equal to delta over 2. And f of x minus f of 0-- this is equal to 2 minus 1, which is bigger than or equal to epsilon 0. So this function is not continuous. And so next time-- and I'll just leave this question here, which we'll address in the next lecture. But it's a kind of simple question. So first off, if you look at this function, it shouldn't be too hard to convince yourself, so you're also told when you were a child that a function is continuous If you can draw the graph and not lift up the pencil, which I better not see on the exam if I ask you about continuity. But anyway, for the sake of this conversation, let's take that as the intuition. So you can convince yourself that this function is continuous over here, though. If I'm getting close to-- let's say this is minus 1, then the function is getting close to 2. And the value of the function at minus 1 is 2. 
So the function is getting close to the value of the function at minus 1. And the same for if I'm looking at c equals 1. So you should be able to convince yourself that this function is continuous at every point other than at the origin 0. So a natural question to ask is, let f be a function, let's say, defined on the whole real number line. Does there exist a point in R such that f is continuous at this point c? For this example over here, we were able to-- any point other than 0, the function's continuous there. So natural question is, let's say I take an arbitrary function. Does it have to have a point where it's continuous? And next time, we'll see that answer is no. We'll give an example that's I think due to Dirichlet because it's named after him, but naming doesn't necessarily mean anything in math. Green's theorem is named after Green but he didn't prove it, so maybe it was due to somebody else. And we'll use a similar characterization of continuity that's kind of analogous to this first statement we had for limits. And we'll do that next time.
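Both examples from this stretch of the lecture are explicit enough to sanity-check numerically. The sketch below is only an illustration, not part of the lecture: the function name and the particular values of a, b, c, and epsilon are arbitrary choices, and sampling finitely many points supports rather than replaces the epsilon-delta argument. It uses the same delta = epsilon / (1 + |a|) chosen above for f(x) = ax + b.

```python
# Numerical sanity check of the delta chosen for the affine function f(x) = a*x + b.
# Illustrative values only; the epsilon-delta proof works for every a, b, c, epsilon.

def check_affine_delta(a, b, c, epsilon, samples=10_000):
    f = lambda t: a * t + b
    delta = epsilon / (1 + abs(a))          # the choice made in the lecture
    worst = 0.0
    for i in range(1, samples + 1):
        # sample points on both sides of c with |x - c| < delta
        x = c + delta * (2 * i / (samples + 1) - 1)
        worst = max(worst, abs(f(x) - f(c)))
    return delta, worst

for a in (0.0, 3.0, -7.5):
    delta, worst = check_affine_delta(a, b=2.0, c=1.0, epsilon=0.1)
    print(f"a = {a}: delta = {delta:.4f}, max |f(x) - f(c)| observed = {worst:.4f} < 0.1")
```

The a = 0 case is included on purpose: it is the degenerate case that motivates the 1 in the denominator, since delta = epsilon / |a| would divide by zero there.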
MIT 18.100A Real Analysis, Fall 2020
Lecture 15: The Continuity of Sine and Cosine and the Many Discontinuities of Dirichlet's Function
CASEY RODRIGUEZ: OK. So let's continue our discussion of continuity, which we began last time, which is-- which I defined last time, and I wrote again here, which intuitively says that if you want to be-- that if x is sufficiently close to number c, then f of x will be very close to f of c. So it connects how the function behaves near a point to how a function behaves at the point. And last time, we gave an example of a function which is continuous everywhere, namely the function f of x equals ax plus b, and one that was not continuous at a point c. And we ended with-- let me just recall the question I asked last time. If f is a function-- let's say that its whole domain is r. Does there exist a point where it's continuous? Now, I could answer this question now just using the definition, but I'll answer it in a minute after we prove the following theorem, which is an analog of this theorem that we proved here for limits. We showed that if you have a subset s of r, a cluster point of s-- so this is where you take limits, and then the limit as x goes to c of f of x equals l if and only if, for every sequence converging to c, we have f of xn, which is a new sequence, converges to l. So this connects limits of functions to limits of sequences-- limits along sequences, if you like. We're going to prove an analog of this theorem now for the notion of continuity. But there's kind of-- and these two definitions certainly look-- if you look at the definition of limit and the definition of continuity, this one certainly looks like the definition of limit where now the limit has to be f of c, the function evaluated at that point. And it essentially is, but we do have a degenerate case where c here is not required to be a cluster point of s. It's just any point of s. And we'll have two cases, c is a cluster point of s and c is not a cluster point of s. And you'll see that when c is not a cluster point of s, we'll have a silly situation for talking about continuity. So this is the following A theorem, which has three parts. So suppose s is a subset of r. c is an element of s. f is a function from s to r. So the first part is if c is not a cluster point of s, then f is continuous at c. So if we're looking at continuity at a point, the only interesting points to look at are cluster points of the set s. Otherwise, no matter what function it is, it will be continuous at such a point. The second is that-- now let's suppose we're in the more interesting case that c is a cluster point of s. Suppose c is a cluster point of s, which is essentially the only thing that doesn't-- that's missing from this definition and the definition for limit. Then f is continuous at c. And this is borne out in this theorem if and only if the limit as x goes to c of f of x equals f of c. And the third part of this theorem, which is now the analog of this theorem we proved here for limits, is the following-- f is continuous at c if and only if for every sequence xn of elements of s such that x of n converges to c, we have f of x sub n converges to f of c. So again, if c is not a cluster point of the set, then every function is going to be continuous at that point. So this is kind of a-- so if c is not a cluster point of s, this is a silly case to look at. All right. So let's prove the first theorem-- I mean, the first part of this theorem. So what's the intuition? Remember, continuity is a connection between f near a point and the function at the point. 
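Before the proof, here is the full statement written compactly; this is only a restatement in symbols of the three parts just described, with S a subset of R, c in S, and f from S to R.

```latex
% Restatement of the theorem: S \subseteq \mathbb{R}, c \in S, f : S \to \mathbb{R}.
\begin{enumerate}
  \item If $c$ is \emph{not} a cluster point of $S$, then $f$ is automatically continuous at $c$.
  \item If $c$ \emph{is} a cluster point of $S$, then
        \[ f \text{ is continuous at } c \iff \lim_{x \to c} f(x) = f(c). \]
  \item In either case,
        \[ f \text{ is continuous at } c \iff
           \bigl( \text{for every sequence } \{x_n\} \subseteq S \text{ with } x_n \to c,\;
                  f(x_n) \to f(c) \bigr). \]
\end{enumerate}
```

With the statement in hand, the intuition for part 1 is the following.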
Now, if the only points near c is c, then f of x equals f of c for x near c because x can only be c. And therefore, that will be less than epsilon. So when we're at a point that's not a cluster point, it kind of removes the near part. And we're just looking at f at c and comparing f to itself at c. So let's suppose f is not a cluster point. So we want to prove continuity, so that's a for all epsilon delta argument. So let epsilon be positive. Since c is not a cluster point of s, what does this mean? So remember, to be a cluster point of a set s means for all delta positive, there exists-- so let me recall what it means to be a cluster point over here on the side. So c is a cluster point of s. This means for all delta positive, the set x minus delta x plus delta intersect c, intersect s takeaway c is not empty. So since c is not a cluster point, that means there exists some delta 0 so that this interval is disjoint from s takeaway c such that c minus delta 0, c plus delta 0 intersect s takeaway c equals the empty set. Now, if I include c in this intersection-- so another way of stating this is that the only thing-- since c is an element of s, the only thing that's in this intersection is c. So this here is the rigorous way of saying there's nothing in s near c except for c itself. And therefore, there's nothing to compare to f of c. So we'll choose delta to be this one because then there's nothing-- there's no x in s in this interval other than c. And then f of c minus f of c is 0, so choose delta to be delta 0. And if x minus-- so now we want to say that this delta works. So if x is in s and x minus c is less than delta, that implies x equals c. Because the only thing in this interval that's coming from s is c. And therefore, f of x minus f of c-- this is equal to f of c minus f of c equals 0, which is less than epsilon. So this is a really degenerate case of when you're trying to see if a function is continuous at a certain point. All right. So let's now move to the more interesting part that sees a cluster point. So suppose c is a cluster point of s. And we want to show f is continuous at c if and only if the limit of f as x goes to c of f of x equals f of c. Going this direction is really quite easy, so we're going to go this direction. And you should be able to prove this direction just from what I write down here. So suppose the limit as x goes to c of f of x equals f of c. I mean, this whole argument is kind of silly because the definitions are just so close except for now c is a cluster point, so that's not missing from either definition. But the only difference is this guy, you only look at x near c but not at c. So suppose the limit as x goes to c of f of x equals f of c. Now we want to show that f of x is-- that f is continuous at c, so let epsilon be positive. Since we have this, there exists a delta positive such that if x is in s and the absolute value of x minus c is bigger than 0 is less than delta 0, then f of x minus f of c is less than epsilon. And we'll just choose delta to be this delta 0. If x minus c is less than delta 0, then there are two cases. Either x is equal to c or x is not equal to z. See? So let's write it this way. If x equals c, then clearly, f of x minus f of c equals f of c minus f of c equals 0 is less than epsilon. 
And if x is not equal to c, then that's certainly bigger than 0, but we're still assuming it's less than delta, which remember, is this delta 0 coming from the fact that the limit as x goes to c of f of x-- I missed that there-- tells me that this is less than epsilon. So it just follows immediately from the definition. There's not much to it. OK. So now let's prove the third part of this theorem, which is an analog of that theorem we have over there for limits. So let's prove this direction. Let's suppose f is continuous at c, and now we want to prove this statement about sequences. So let me also-- when I prove the opposite direction, not have to label it. So let me put a star. This star is going to be the right-hand side of this statement. So let x sub n be a sequence in s of elements of s such that x converges to c. Now, I drew this picture for limits, and it's the same picture now for this guy. And we're wanting to show now that the limit as n goes to infinity of f of x sub n equals f of c. So here's the epsilon m argument, so let epsilon be positive. So now I'll go to the picture. Here's f of c. Here's f of c plus epsilon, f of c minus epsilon. So what do we know? We know the function is continuous at c. So that means that if I look at c, then there exists a delta so that if I'm in this interval here, everything in this interval gets mapped to inside of this interval. Now, what else do I know or what else am I assuming? x sub n is a sequence converging to s. So for large n, for n big enough for some capital M0, x sub n is going to lie inside this interval. If you like, in the definition of convergence of a sequence, epsilon is equal to delta in that definition. But don't mind that. So for all n big enough, x sub n lies in this interval. And since this interval gets mapped inside this interval by f, this f of x sub n will end up in this interval within epsilon to f of c as long as n is big enough. And so this picture I just went through, this is the proof. And now we just need to write it out. Since f is continuous at c, there exists a delta positive such that if x minus c is less than delta, this tells me f of x minus f of c is less than epsilon. Now, since the sequence x sub n converges to c, there exists some integer capital M0 such that for all n bigger than or equal to 0, x sub n minus c is less than delta. And therefore, for all n bigger than or equal to m sub 0, f of x in will be within epsilon of f of c. So this will be the m we choose. Then if n is bigger than or equal to M, we get that x sub n minus x minus c is less than delta, which implies by how delta's defined-- if x minus c is within distance delta, f of x minus f of c is within distance epsilon. We get f of x sub n minus f of c is less than epsilon. So that gives us one direction. All right. So now we'll prove the opposite direction. So we're assuming that this statement star holds, namely for all sequence x sub n's converging to c, we have f of x sub n converging to f of c. And we want to prove that the limit or that f is continuous at c. And we're going to do this by contradiction, which is the same way we proved the opposite direction for the theorem about limits. So the proof is by contradiction, namely suppose the conclusion we want does not hold. So suppose f is not continuous at c. So let me recall we negated that definition in the last lecture, but we'll negate it again. 
For this, this means that there exists a bad epsilon so that for all delta positive, there exists an x and s satisfying x minus c is less than delta. And f of x minus f of c is bigger than or equal to epsilon 0. All right. So since we're assuming f is not continuous or exists at epsilon 0, so we have all this, which holds for every delta. And now we'll choose delta to be 1, 1/2, 1/3, and so on. Then there exists x sub 1 in s such that x sub 1 minus c is less than 1 and f of x sub 1 minus f of c is bigger than or equal to epsilon 0. That's just delta. If you like delta, equals 1. And now we continue. There exists x sub 2 in this such that x sub 2 minus c is less than 1/2. And so here, this is delta equals 1/2. So this is delta equals 1/2, and so on. So then we conclude-- so if you like, omit this from the proof. And this is really what's behind the next statement I'm about to make. Then for all natural numbers n, there exists an x sub n in s such that x sub n minus c is less than 1 over n, and f of x sub n minus f of c is bigger than or equal to epsilon 0. Now let's take a look at this sequence. We're trying to break something. And we'll end up breaking the star assumption. So now we have this sequence, x sub n of s, which we're getting closer and closer to c. So we have 0 is bigger than or equal to x sub n minus c is bigger-- is less than 1 over n. So this converges to 0. This converges to 0. So by the squeeze theorem, the absolute value of x sub n minus c converges to 0. And therefore, x sub n converges to c. By squeeze theorem, we get that xn converges to c. Now, since we're assuming star holds-- that's the right side of 3, of the if and only if-- it must be the case that f of xn converges to f of c. That's what star tells me-- that if I take a sequence converging to c, f of x converges to f of c. But each one of these, remember, is bigger than or equal to epsilon 0, which is a contradiction. Epsilon 0 is a positive number. And that concludes the proof of this theorem. That gives us an equivalent way of stating continuity in terms of sequences. And just like for limits, this will allow us to use what we know about sequences to conclude analogous facts for continuous functions. So let's look at a non-trivial example of a continuous function and also put this theorem to use, this previous theorem. The functions f of x equals sine x and g of x equals cosine x, these are continuous functions. So meaning their domain is the set of real numbers, so they're continuous at every real number. They're continuous at c for every c, a real number. So we're going to use this-- well, we're not going to use this theorem just yet on this part. We're actually going to prove sine is continuous directly from the epsilon delta definition, which is always good. So first, we claim sine x. So before I say what we're going to do, let me just give you a quick refresher on what you can prove just from the definition of sine and cosine. Remember, I'm not-- I would usually put this to the class and see who remembers and who doesn't. So this is also supposed to be a unit circle, although it doesn't quite look like that. Remember that sine and cosine are defined as you travel along signed distance x along the circle, and you arrive at a point on the unit circle, which you-- so it's an ordered pair. The first element you call cosine x. The second element you call sine x. So that's how sine of x is defined. So now simply-- I'm not going to do this because this is trig, not analysis. 
But from the definition of sine and cosine from the unit circle, we have that, of course, all x and r sine x plus cosine squared x is equal to 1. And therefore, each of these individual things has to be less than or equal to 1, which upon taking square roots means that-- so you have these. Now, you can also make a better estimate for sine x when x is close to 0, which is only useful for x close to 0, but which is the following-- that for all x, sine x is less than or equal to the absolute value of x, just obtained by comparing the length of one side of a triangle from this picture with the length of the arc. So you have this, and then you also have the angle sum formula, which says that sine of a plus b equals sine a cosine b plus cosine a sine b. And you also have the-- I can't remember the name of the exact formula. I think it's difference to product or something of that nature, which says sine of a minus sine a b, you can write-- so this is using the previous formula-- twice sine of a minus b over 2 cosine of a plus b over 2. So using these elementary properties of sine and cosine, we'll now show that sine of x is continuous. So first thing we want to show is that sine x is a continuous function. So let's see. In R, we're going to show sine is continuous at c. Let epsilon be positive. And now we have to say how to choose delta depending on this epsilon. Choose delta to be epsilon. Now we'll use these elementary properties to show that this delta works. Then if x minus c is less than delta, we now want to show that f of x minus f of c, sine x minus sine c, is less than epsilon. So sine x minus sine c, this is equal to-- by this last formula we have on the board over here, this is equal to twice sine x minus c over 2 cosine x plus c over 2. And now this is the product of 2 times the absolute value of sine times absolute value of cosine. By the second property over here, cosine is always bounded by 1, so this is less than or equal to 2 times the absolute value of sine x minus c over 2. And now we use this third property here that sine x is less than or equal to the absolute value of that thing you're sticking in. So this is less than or equal to 2 times x minus c over 2, which equals x minus c. So now we're in business. We've connected f of x to x minus c. So this is less than or equal to-- so this is less than delta, which equals epsilon by our choice of delta. And therefore, sine x minus sine c is less than epsilon. Sine x minus sine c is less than epsilon. So thus, function sine x is continuous at c. Now we'll use this and the theorem we showed a minute ago. Have I erased it? Yes, I have now officially erased it. Anyways, we used the previous theorem that we proved for the equivalence of continuity and convergence of sequences to show that cosine is continuous. So let c be an element of R. Let xn be a sequence converging to c. And we now want to show that cosine of x sub n converges to cosine of c. Once we've done that, then by the theorem we proved previously, we can conclude that cosine is continuous at c. Now, here's the thing. For all x and r, we have cosine of x is equal to sine of x plus pi over 2. We can deduce that simply from this angle sum formula. Take a equals c, b equals pi over 2. I'm using that cosine of pi over 2 is 0. And this is what we'll use, and the fact that we know sine is continuous. Since the sequence xn converges to c, this implies the sequence x sub n plus pi over 2 converges to c plus pi over 2. 
Now, this sequence x sub n plus pi over 2 converges to this number. And since sine is continuous, we have by the previous theorem that sine of this converges to this. And therefore, cosine of-- cosine of x sub n, which is equal to sine of x sub n plus pi over 2, converges to sine of c plus pi over 2, which equals cosine of c, i.e. cosine of x sub n converges to cosine of c. And therefore, we've now shown that cosine is also continuous. So let's answer this question real quick. So the answer is no. You can find a function which is discontinuous at every point, every real number. So maybe this is an example rather than a theorem, but I'll state it as a theorem. Let f of x be the function which takes the value 1 if x is irrational, 0 if x is not irrational. Then f is not continuous at every c and r. So first off, don't try to plot this using Matlab or Mathematica or anything because first off, computers can't deal with irrational numbers. So you will just get 1 no matter what you stick into f. OK. So you have this function, which is 1 if x is a rational 0. If x is not a rational number, if it's irrational, let's show it's discontinuous at every c. A different way, if you like, to state the theorem which I've already erased-- but let me just state it slightly differently. f is not continuous at c. So remember, the two statements are equivalent if and only if their negations are equivalent. So f is not continuous at c if and only if there exists a sequence x sub n such that xn converges to c and f of xn-- the sequence fn of xn does not converge to f of c. So maybe it just doesn't converge at all, or maybe it converges to something other than f of c. So this is just a restatement of the theorem we've already proven but now using negations of the statements that appeared on the if and only if. So there's two cases to consider for this. So now we're going-- so this is a statement of the theorem that we're-- a restatement of the theorem we're using. And now we're going to prove the theorem that I've stated here. So let c be an element of R. We're going to show that this function is discontinuous at c. There are two cases to consider, c is a rational number or c is an irrational number. So case 1-- c is a rational number. Let's show f is not continuous at c. Now, we know that for every natural number n, there exists an element x sub n and the complement of q, so an irrational number, such that c is less than x sub n is less than c plus 1 over n. So we proved this in an assignment. Between any two real numbers, I can find a rational number, an irrational number in between them. So that's what this statement says, is for each n, I can find an irrational number between the number c and the number c plus 1 over n. Now, by the squeeze theorem, this is just a constant sequence, if you like, converging to c. This convergence to c. And as n goes to infinity, this convergence to c. So I get that the sequence x sub n converges to c. This will be my bad sequence to satisfy this conclusion because-- so xn converges to c, but if I look at f of xn, if I take the limit as n goes to infinity of f of xn, since all the xn's are not rational-- they're irrational numbers-- and I stick them into f, I get 0. And this does not equal 1, which equals f of c because we're in the case that c is a rational number. So here we use the density of the irrationals to show that this function is not continuous at a rational number, and we'll do the same thing for the case that c is an irrational number. 
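As noted above, this function cannot be explored with floating point, since every float is rational; but exact symbolic arithmetic can illustrate case 1. The sketch below uses SymPy's is_rational property; taking c = 0 as the rational point and x_n = sqrt(2)/(2n) as the irrational sequence (which does satisfy c < x_n < c + 1/n) are my illustrative choices, not the lecture's.

```python
# Case 1 of the argument, checked with exact (symbolic) arithmetic.
# f(x) = 1 if x is rational, 0 if x is irrational.
import sympy as sp

def f(x):
    """Dirichlet-type function: 1 on the rationals, 0 on the irrationals."""
    if x.is_rational is True:
        return 1
    if x.is_rational is False:
        return 0
    raise ValueError(f"SymPy cannot decide whether {x} is rational")

c = sp.Integer(0)                       # a rational point, so f(c) = 1
print("f(c) =", f(c))
for n in range(1, 6):
    x_n = sp.sqrt(2) / (2 * n)          # irrational, with 0 < x_n < 1/n
    print(f"n = {n}: x_n = {x_n}, f(x_n) = {f(x_n)}")
# x_n -> 0 = c, yet f(x_n) = 0 for every n while f(c) = 1,
# so f(x_n) does not converge to f(c): exactly case 1 above.
# Case 2 runs the same way with the roles of the rationals and irrationals swapped.
```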
It's the same proof, except now we essentially take compliments. We proved this theorem right after the Archimedean property of r that for any two real numbers, I can find a rational number in between them. So for every n, there exists a rational number such that c is less than x of n is less in c plus 1 over n. And again, by the squeeze theorem, it follows that xn converges to c. And if we look at the limit as n goes to infinity of f of x of n, this is equal to-- all of these values are 1, which does not equal 0, which equals f of c. And c is assumed to be an irrational number. So this function is not continuous at every real number. So just terminology here. If you hear me say this, when I say something is not continuous, I'll often say discontinuous. OK. We're going to use this theorem-- I keep saying this although I already erased it, but the theorem that equates continuity at a point and the fact that every sequence converges-- every sequence converging to c implies f of xn converges to f of c. We're going to use that theorem to get corresponding theorems about continuity that look analogous to statements we made about sequences. And then we'll consider a case which is not covered, which has no analog for sequences. So let me state the following theorem. So suppose s is a subset of r, c is an element of s. And I have two functions from s to R. Then if f and g are continuous at c, then the conclusion is f plus g-- the function f plus g is continuous at c. f times g is continuous at c. And if g of x does not equal 0 for all x and s, so I can divide by it, then f over g is continuous at c. And so for example, I'll prove the first statement, the other two, you can prove for yourself. And again, this just follows from this characterization we have of a function being continuous at a point in terms of limits of sequences. And limits of sequences we know well. We've proven all these properties about them. If you like, that's where we did all the hard work. And now getting some payoff in that we get interesting statements without a whole lot of work. So we're assuming f and g are continuous at c, so let's prove f plus g is continuous at c via this sequential characterization. Suppose xn is a sequence converging to c. Then since f and g are continuous at c, this implies that the sequences f of x sub n converges to f of c, and g of x sub n converges to g of c. And therefore, by the theorem we proved several lectures ago that the sum of two convergent sequences converges to the sum of the limits, you get that f plus g of x sub n, which is f of x sub n plus g of x sub n, converges to f of c plus g of c or function f plus g evaluated at c. And that's it. We've now shown that every sequence converging to c, f plus g of x sub n converges to f plus g of c. And similarly for the others, although there's maybe a little bit of a small hiccup here in that you want to-- well, no. So this follows-- I was thinking of something else. Don't worry about that. But 2 and 3 also follow similarly using the sequential-- the analogous sequential theorems. However, we do have-- so these are three natural operations we can do with two functions. We can add them. We can multiply them. We can divide them, just like we had for sequences. There is one operation we can do with functions that you don't do with sequences or can't do with sequences, and that's compose them. So the natural question is, is the composition of two continuous functions continuous? And the answer to that is yes. We have to state this carefully. 
So suppose a and b are a subset of R, and c is an element of a. So let's write this a little bit differently. Let a and b be a subset of r. c is an element of a, and f will be a function from-- let me get this right. f will be a function from a to r, and g is a function from a set b to a. OK. So I'm getting this backwards. So when I compose these two, when I take f of g, f of g will be a function now from b to r. So if g is continuous at c and f is continuous at the point g of c, which is, remember, an element of a, then the composition f of g is continuous at c. So we'll use, again, this characterization in terms of sequences. We don't necessarily have to. We could have done it, strictly speaking, from the definition. But this is a nice short way of proving this statement. So let xn be a sequence in b, so of elements of b such that x sub n converges to c. And what we want to show is that f of g limit as n goes to infinity of f of g of x sub n equals f of g of c. Let me put this off to the side. OK. So since xn converges to c and g is continuous at c, this implies that the sequence g of xn converges to g of c. Now, g of x sub n-- this is a sequence now in a converging to an element of a. Yeah? So since this sequence g of x sub n converges to g of c and g, and now f, is continuous at g of c, this implies that f of g of x sub n converges to f of g of c, i.e., this is, by definition, the same as saying f of g of x sub n equals f of g of c. So we have an operation that we can't do with sequences, but this operation still preserves continuity. So taking sums of continuous functions maintains a continuous-- stays in the class of continuous functions. The product of two continuous functions is continuous. The quotient is continuous. And now also the composition of two continuous functions is continuous. So as a consequence of this, we can use this to prove some-- give more examples of functions which are continuous. So for example, for all n, a natural number, f of x equals x to the n is continuous as a function for x and r. So what's the proof? We can do this by induction. So for n equals 1, f of x equals x, we've already done. That was one of the first examples of continuous functions we did, was ax plus b. So a equals 1, b equals 0 gives me this case. So this is the base case. Let's now do the inductive step. Suppose that the function x to the m is continuous. And now we want to show x to the n plus 1 is continuous. Then x to the m plus 1, this is equal to x times x to the m. And since x to the m is continuous and the function f of x equals x is continuous, the product of two continuous functions is continuous. It's a product of two continuous functions, which by the theorem we proved implies that x to the n plus 1 is continuous. So I keep saying continuous, but what I'm saying is for all c and r, x to the m is continuous at c. So maybe I should have written that down, but I think that meaning should be clear enough. And by the same inductive reasoning, you could also prove that in a natural number, polynomials are continuous. f of x-- for all in a natural number, a0 and R, the function given by a polynomial a sub n x to the n plus a sub n minus 1, x to the n minus 1 plus a sub 0 is continuous. So rather than write down the actual proof by induction, let's talk our way through it. So again, for n equals 1, this is just going to be some number times x, which we've already dealt with in a previous example. So we know that's continuous. So that settles the base case. 
Now let's assume that we've proved this for n equals m, or let's assume this for n equals m and look at a function with n equals n plus 1. That's going to be a to the a sub m plus 1 times x to the n plus 1 plus a lower-order polynomial, which we already know is continuous. And a sub m plus 1 times x to the n plus 1, that's continuous because it's the product of a constant and a continuous function by the previous example. So that would be continuous plus the lower order polynomial, which is continuous by assumption. That will be continuous by the theorem that we proved. But you don't have to just look at polynomials. For example-- so for example, the function f of x equals 1 over 3 plus sine x to the 4th. This is also continuous-- is a continuous function. Why? Because for all x-- first off, 3 plus sine x to the 4, this is never 0. So this function makes perfectly good sense. So the bottom is always non-zero. Sine x is continuous, so sine x to the 4 is continuous. 3 is a continuous function. It's just a constant. So the bottom as a function on its own is continuous. And 1 over that function is continuous as long as the bottom is never 0, which it's never 0, so again, by this theorem here. So we used the composition one to say that sine x to the 4 is continuous. And we use that one, the theorem before that, to show that 1 over 3 plus sine x 4 is continuous. So let me write this out real quick. I will say by composition theorem, the function sine x of 4 is continuous, which implies by, I'll say, algebraic theorem, meaning that theorem that involves algebraic operations, 3 plus sine x to the 4 is continuous. And by the algebraic theorem again, since 3 plus sine x to the 4 is never 0 for all x and r, this is continuous. All right. So next time, we'll look at some properties of continuous functions, namely called the min and max theorems for continuous functions. So the great thing about continuous functions is if you look at them on closed and bounded intervals, they always attain a maximum and minimum at some point-- not just that the graph is bounded above or below, but there's actual point where f reaches that min and reaches that max. And then we'll also prove the intermediate value theorem, which says between any two-- if I have a continuous function on an interval a, b, and I look at f of a and f of b, and I take a number in between those two values, f of a and f of b, there exists a point in between them so that f attains that value. And this is extremely important. And we'll definitely see why later once we get to differentiability and continuity.
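The concrete examples at the end of this lecture can also be cross-checked with a computer algebra system. The following is a sketch, assuming SymPy's continuous_domain utility (in sympy.calculus.util) behaves as documented: it reports the subset of the given domain on which an expression is continuous. It is only a sanity check of the conclusions, not a substitute for the epsilon-delta and sequence arguments.

```python
import sympy as sp
from sympy.calculus.util import continuous_domain

x = sp.symbols('x', real=True)

examples = [
    3 * x**2 - 5 * x + 1,          # a polynomial, continuous by the induction above
    sp.sin(x),                     # continuous by the epsilon-delta argument above
    1 / (3 + sp.sin(x)**4),        # the quotient-and-composition example
    1 / x,                         # for contrast: the denominator vanishes at 0
]

for expr in examples:
    print(expr, "is continuous on", continuous_domain(expr, x, sp.S.Reals))
```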
MIT 18.100A Real Analysis, Fall 2020
Lecture 2: Cantor's Theory of Cardinality (Size)
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUEZ: So last time we spoke about-- we covered sets and induction. This time, I want to ask a question about sets, which turns out is actually quite a deep question. I mean, I didn't come up with it myself. This question is at least 150 years old probably. So the question that I want to ask is if A and B are sets-- or let's phrase it this way, at least a little more efficiently. When do two sets, A and B, have the same size? Now, this is especially interesting if these two sets are not finite, meaning there's not five of them, there's not eight of them, but there's infinitely many members, whatever that means. And I'll actually define what that means. For example, do the natural numbers and the integers have the same size? Do the natural numbers and the rational numbers have the same size? What about the rational numbers and the real numbers? Even though we haven't defined that yet, you can still keep in mind your notion of the real numbers from calculus. And so this question is-- why is it deep? Because it depends on this word here, "size," and what exactly that means. So I'm not-- so kind of an answer, this is due to George Cantor. He said that two sets-- so not an answer, but a way of defining size or a way of saying the two sets have the same size. He said two sets have the same size when the two sets-- the elements of the two sets can be paired off, meaning-- and I will make this much more precise in a minute or so. What does he mean? Or what do I mean? For example, the set a, b, and c and the set 1, 2, and 3. So this one, this set, consists of three letters-- a, b, and c. This set consists of three integers-- 1, 2, and 3. They have the same size because I can essentially pair them off. So a and 1 go together, b and 2 go together, and c and 3 go together, meaning each member of this set gets paired off with an element of this set. And every element of this set gets paired off with a unique element from this set. Now, to make-- and so this goes under the name of the theory of cardinality or cardinal numbers, if you like. But we won't go too deeply into it. So the way you make-- or the way one makes this pairing off business more precise is using the language of functions. And you've been dealing with functions since you've been doing calculus. So I'm just going to quickly reintroduce or review some terminology that goes along with functions that will be the precise meaning of this pairing off business that I'm writing here. But just with an eye towards the future, this is why I'm reviewing some of this terminology related to functions. So a brief review of some terminology for functions-- so let me just recall that if A and B are sets, then function f A B-- so a function's usually written as f colon A arrow to B, meaning it takes elements from A into elements from B. This is a mapping that-- or if you like, an assignment. It assigns to each x in A a unique element which we denote f of x in B. So one single input, x, gives me a single output. And the input does not give me three outputs. Now, this is what one would call a naive definition probably, not necessarily completely unambiguous. But for us, it suffices. In the textbook, you can look up a function is unambiguously defined or as a subset of the Cartesian product of A and B, which you would recognize as essentially describing the graph of the function. But then you never use that definition of again. 
And you essentially use this definition when you think about functions and when you prove properties of function. So this we will just take as our definition. And it will be unambiguous enough for us. So let f be a function from A to B. If C is a subset of A, we define a set-- so this is f of capital C. So C is not an element of A. It's a subset of A. And this is a subset of B. This is-- let me write it slightly differently here. This is a set of all y in B such that there exists an x in C such that y equals to f of x. And I could write this a little more efficiently as this is the set of all elements f of x as x ranges over the elements of this subset C. And if D is a subset of B, we define the set f inverse of D. This is the set of guys that get mapped into D. So I should say this is the inverse image of D, not the inverse of this function f. So the inverse, whatever that means for now, does not necessarily always exist. So this inverse image of the set D always exists. This is the set of all elements in A such that f of x is in D. So for example, let me-- so 1, 2, 3, 4 and a, b, c, d. And let's suppose 1 goes to a, 2 goes to a, 3 goes to c, 4 goes to d. Then f of the set 1, 2, this is where-- what is the set that gets-- so this is the subset of-- so this you should think of as B, a, b, and c, and D. And then 1, 2, and 3, this is A. So f of the set of 1, 2-- so 1 gets mapped to a. 2 gets mapped to a. So this is just the set 2. f of the set-- let's go with 1 and 3. 1 gets mapped to a. So this should be a. 1 gets mapped to a. And 3 gets mapped gets mapped to c. And so a couple of inverse images-- if I look at the inverse image of a, this is equal to the set of all guys that map to a. This is, well, 1 maps to a and 2 maps to a. So this is 1, 2. Now, if I look at the inverse image of a, c, d, what elements get mapped to a, c, and d? Well, 1 and 2 get mapped to a. 3 gets map to c. 4 gets mapped to d. So everything maps into a, C, and d. So this is just the original set or the set A-- 1, 2, 3, 4. All right, so there's more terminology. And this is what we will mean by when two sets can be paired off. Or this makes that more precise. Let f be a function from A to B. We say that f is injective, or I'll write one-to-one. It should be read one-to-one if f satisfies the following property. f of x1 equals f of x2 implies x1 equals x2. So injective, or one-to-one, means if I take two different inputs, I get two different outputs. That's essentially what this means. I mean, taking the equivalent-- so equivalently, from a logical standpoint, this statement implying this statement is equivalent to the negation of this statement implying the negation of this statement. So equivalently, this means x not equal to x2 implies f of x1 not equal to f of x2. So maybe this is clear. If I were to define f as injective if it satisfies this property, maybe that would have been clearer that f takes two different elements to two different elements. But this condition here is typically easier to verify, or at least simpler to state and verify. So f is surjective or onto if the image of A is B. So let me write that statement out a little bit more. Everything in the set B gets mapped to by something from A. So equivalently, this says that for all y in B there exists an x in A so that f of x equals y. f is bijective if f is one-to-one and onto. So f is bijective if it's both injective and surjective. 
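For finite sets like the ones being drawn here, all of these notions can be checked mechanically. The sketch below represents a function as a Python dict from domain elements to codomain elements; the helper names are arbitrary. The map tested is the one drawn earlier, 1 -> a, 2 -> a, 3 -> c, 4 -> d, with codomain {a, b, c, d}.

```python
# Finite functions as dicts: keys form the domain A, values lie in the codomain B.

def image(f, C):
    """f(C): the set of values f(x) for x in C."""
    return {f[x] for x in C}

def preimage(f, D):
    """f^{-1}(D): the inverse image, i.e. all x in the domain with f(x) in D."""
    return {x for x in f if f[x] in D}

def is_injective(f):
    """One-to-one: distinct inputs give distinct outputs."""
    return len(set(f.values())) == len(f)

def is_surjective(f, B):
    """Onto: every element of B is hit."""
    return set(f.values()) == set(B)

def is_bijective(f, B):
    return is_injective(f) and is_surjective(f, B)

f = {1: 'a', 2: 'a', 3: 'c', 4: 'd'}
B = {'a', 'b', 'c', 'd'}

print(image(f, {1, 2}))                 # {'a'}
print(preimage(f, {'a'}))               # {1, 2}
print(preimage(f, {'a', 'c', 'd'}))     # {1, 2, 3, 4}
print(is_injective(f), is_surjective(f, B), is_bijective(f, B))   # False False False
```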
So for example, this map that we just drew over here that sends 1 to a, 2 to b-- I mean 2 to a, 3 to c, 4, to d, this is neither injective nor surjective. It's not injective because it takes two different elements to the image in B, namely a. So it takes 1 and 2 to a. It's not surjective because nothing gets mapped to the element b here. So we've seen that's not surjective. Of course, this map, if I take-- again, imagine this is 1, 2, 3 a, b, c, d. This map here, this function here that takes 1 to a, 2 to b, 3, to c, this is actually injective but not surjective. And then of course, we could change this slightly. And 1 goes to a, 2 goes to b, and 3 goes to b. Then this is surjective but not injective. Now, the map that sends-- let's switch sides here-- a, b, and c, 1, 2, 3-- a to 1, b to 2, c to 3, this is a bijection, bijective. So if I say something's an injection or surjection or bijection, that just means it's a map that is injective, or, surjective or bijective respectively. And now, a definition-- so this is really-- there's not much to this definition, but just defining a couple of related functions that are related to a given function. If f goes from A to B, g goes from B to C, the composition g of f is the function which goes from A to C is defined by g of f of x equals g of f of x. And 2, if f is bijective-- so this is the composition of two functions. I didn't write the word composition, but g of f means the composition, or is referred to as a composition. If f is bijective, then we define the inverse function to f, B to A, by the following. If y is in B, then f inverse of y in A is the unique elements in A such that if I take this element, stick it into f, I get back y. So the inverse of a function only exists for bijective functions, or at least is only defined for bijective functions. Keep that in mind. Don't confuse that with the inverse image of sets. Although, those two notations look the same. You have an inverse f to the minus 1. f to the minus 1, if it's a function, that is the inverse function. However, if I'm taking f to the minus 1 of a set, that means the inverse image of that set as defined over there. So bijections, meaning bijective functions, will be what we mean when we say two sets can be paired off. It's what we mean when we-- or at least what Cantor's answer to that original question was, when did two sets have the same size. So this is the notion of cardinality that I alluded to. So we say two sets, A and B, have the same cardinality if there exists a bijection or a bijective function f from A to B. And so let me just make some notation here. so this is not really new objects I'm defining or operations. This is just some notation. So when two sets have the same cardinality, we write-- so this is just a shorthand way of writing that two functions have the same cardinality. You should not necessarily think this means taking the absolute value of a set. That doesn't mean anything, all right? This is just shorthand notation for saying two sets have the same cardinality. If A has the same cardinality as the set 1, 2, 3 up to n, we write A equals n. That's just shorthand for saying that a set has the same cardinality as the natural numbers up to n. So if there exists an injective function-- or I will often say either function or map. You should take those as synonymous. There exists an injective function f-- if there exists an injective function f from A to B, we write this thing. 
Again, do not read this as taking the absolute value of some set and that absolute value is less than or equal to the absolute value of the other set because that's meaningless. We haven't said what that means even. This is just shorthand notation. And if there exists an injection from A to B but they don't have the same cardinality, we write this. So even though this notation of these absolute value things being on the outside of the set makes you think absolute value or should be interpreted as absolute value, it's OK to think as a certain ordering being there, as A having smaller size than B. We write this because there being an injection from one set to the other means I can pair off elements of A with some elements of B. Maybe I don't get all of the elements of B. But I can pair off some of the elements of A with some of the elements of B. For example, that first map we wrote up there, that says that the set 1, 2, 3 in size is less than or equal to the size of a, b, c, d because we found an injection, an injective map from the first set to the second set. This third map would say that the size of the set a, b,c is equal to 3, written here, or shorthand written here. And in fact, that first map that we wrote up there, again, is-- so from first map, this says that the set of 1, 2, 3-- in our shorthand notation, absolute value is less than the size of a, b, c, d. So don't think of these as saying the absolute value. It's best to maybe think of this as saying the size of. All right, so if there exists an injective map or function from one set to another, then the size of A is less than or equal to the size of B. If the size of A is less than or equal to the size of B but the size of A and B are not the same, we write the size of A is less than the size of B. So best to think of those-- this absolute value looking thing as being shorthand for saying the words size of. Now, I'm not going to prove this. It goes a little bit beyond the scope of this class. But let me just say that this ordering, this inequality, these symbols that we're writing does bear some sort of semblance to the ordering of real numbers in that if I have two real numbers, one is less than or equal to the other and vice versa. So A is less than or equal to B and B is less than or equal to A, then A must equal B. That is, in fact, true also for this elementary notion of size of sets. And this is-- I mean, it shouldn't surprise you for finite sets necessarily. If I have a pairing of A, if A is no bigger than n and n is no bigger than the size of A, then a should have n elements. But it takes a little bit more to prove for sets which are not finite. So I forgot to write this down. We also say that if the size of A is equal to the size of this finite set 1 through n, we say A is finite. So this theorem that I'm stating here is the Cantor Schroeder Bernstein theorem, which states that if the size of A is less than or equal to the size of B and the size of B is less than or equal to the size of A, then the size of A equals the size of B. So again, if you're thinking of these in the context of real numbers, one being less than or equal to the other and vice versa, of course, that implies that those two real numbers are equal to each other. But we're not talking about real numbers. Again, this is just shorthand notation for saying there exists a bijective map from A to B. This means there exists a bijective map from A to the set. This means there exists an injective map from A into this, from a to this set. 
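Since the transcript refers to notation written on the board ("we write this thing"), here is that shorthand spelled out, together with the Cantor-Schröder-Bernstein theorem just stated; again, none of the vertical bars denote an absolute value of a number.

```latex
% Cardinality shorthand (|A| is notation, not a number being defined):
\begin{align*}
|A| = |B|   &\iff \text{there exists a bijection } f : A \to B,\\
|A| = n     &\iff |A| = |\{1, 2, \dots, n\}| \quad (\text{in which case $A$ is called finite}),\\
|A| \le |B| &\iff \text{there exists an injection } f : A \to B,\\
|A| <  |B|  &\iff |A| \le |B| \ \text{and} \ |A| \ne |B|.
\end{align*}
% Cantor--Schr\"oder--Bernstein:
\[ |A| \le |B| \ \text{and} \ |B| \le |A| \implies |A| = |B|. \]
```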
So this is not a statement about real numbers. This is a statement about cardinality, all right? OK. So finite sets are sets that you can count if you had n fingers. Now, we would like to be able to define what it means to be able to count a set. So what do I mean by count a set? Meaning, if I had infinite time, I could go through the set counting them-- 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and so on. I don't need to go anymore. But what does that process of counting mean? That means, for each element of the set that I'm trying to count, I can pair it off with a natural number-- 1, 2, 3, and so on. So this is how we define countable sets. So if A has the same size as the natural numbers, meaning there exists a bijection from A to the natural numbers, then we say A is countably infinite. If A is finite, or countably-- I also use a lot of shorthand, but it's not clever shorthand. So you should just be able to sound what-- if you just sound that out, you'll get what word I mean, countably, countably infinite. We say A is countable. So countably infinite means I need all the natural numbers to be able to count off the elements of the set A. Countable means maybe I stop after some point and I've counted all them. So A is finite or it's countably infinite. Otherwise, if a set is not countable, we say it's uncountable. OK, so let's take a look at a couple of countable sets that maybe don't come off as being countable. The set of-- actually, that's a homework problem. The set of even integers is countable. The set of odd-- so I said integers a minute ago. Maybe I should just say natural numbers. The set of odd natural numbers is countable. So in fact-- so this is just an aside. So these are two disjoint subsets that make up the natural numbers, even and odd natural numbers. And they both have the same size as the set that they make up. So it's almost like saying the cardinality of the natural numbers is twice the cardinality of the natural numbers, since you would think that the size of N should be the size of this set plus the size of this set, since they are disjoint and they make up the set. But that's not how cardinality works. You don't just add cardinalities to get the cardinalities. So this is a subtle, interesting thing about cardinalities So Richard Feynman, who won the Nobel Prize in the '60s for his work on QED, described this as saying there are twice as many numbers as numbers. OK, so let's prove this. So what does this mean? This means we have to be able to find a bijective function from this set to this set or from this set to this set, one or the other. And so in fact, maybe I should-- well, I'll say something about that in a minute. Well, let me pause. Let me pause this just for a minute. And let me make a few comments about cardinality, which maybe I should have. So this you can think of as a little theorem. If I have two sets, A has the same size as B, then B has the same size as A. So remember, I mean, what I just said in English is not exactly what those symbols mean. Remember, this means there exists a bijection from A to B This means there exists a bijection from B to A. So what is the proof of this statement? So let's start off with the hypothesis. Suppose then there exists a bijective map, bijective function f from A to B. Now, if I have a bijection from A to B, then what would be a bijection going from B to A? That would just be the inverse. So this is not the inverse image of sets, like we described, but the actual inverse, which we defined over there. Is a bijection. 
So B has the same size as A. One other statement-- so if A and B have the same size and B and C have the same size-- so again, A, B, and C are sets. I should have written that at the beginning. But from this context, you should understand that A, B, and C are sets-- then A and C have the same size. So let's do a proof of that. So let's start with the hypothesis, meaning A has the same size as B and B has the same size as C. So what did these two statements mean in terms of the definition? That means there exists a bijection from A to B and a bijection from B to C. And this statement-- so let me finish this. And then I'll say what I was going to say. Then there exists bijections f from A to B and g from B to C. So perhaps I should have said a few more words about why this is true. But I'm going to leave this as an exercise just to pause the lecture and do it yourself. And what I'll write shortly-- for this case, I'll actually prove that the thing I'm going to define is bijection. That should help you. So let me draw over here off to the side. We have, again, 1, 2, 3, a, b, c. So think of this as f. This is my set A. This is my set B. And then I have alpha, beta, gamma. And a gets mapped to alpha, b, gets mapped to beta, c gets mapped to gamma. So what would be the map going from A down to the set C? Well, perhaps the composition. 1 gets mapped to a, which gets mapped to alpha. So 1 gets mapped to alpha. 2 gets mapped to beta. 3 gets mapped to gamma. How do I build this function out of the things that I know, namely this f and g? Well, this function going from A to C is just the composition of these two functions. So this is off to the side. This is not a part of the proof. Let h go from A to C be the function g of f of x. So I claim that this function is a bijection. So we want to prove that it's one-to-one and onto, all right? So let's do one-to-one first. So we this is the part where I'm going to put this in parentheses, meaning what we're doing now. So we're going to prove that h is one-to-one. So that means we have to verify the definition that I've erased. So remember the definition. So let's write it all out. We first show h is one-to-one. And what this means, according to the definition that I'm now writing over, if h of x1 equals h of x2, then x1 equals x2. So this is what we want to prove. This is the definition of h being one-to-one. So let's start. If h of x1 equals h of x2, then, in terms of how we've defined h, which is g of f of x, so the composition, then this means g of f of x1 equals g of f of x2. That's just what h is. Now, this statement here-- now g we know is a bijection. We know g is one-to-one. And since g of something equals g of something else and g is one-to-one, this implies that f of x1 equals f of x2. This is since g is one-to-one. So starting with this and the definition of h being the composition, we conclude that f of x1 has to equal f of x2 because g is one-to-one. Now, since f is also one-to-one-- we are assuming that f and g are bijections. Since f is one-to-one, this implies x1 equals x2, since f is one-to-one. And this is what we wanted to prove. We wanted to start with assuming h of x1 equals h of x2 and prove x1 equals x2, which is what we've done. So thus, we've proven that h is one-to-one. Now, we have to show that it's surjective, that it's onto. h of A equals C. And again, I'll write out what this means. This means for all y in C-- all right, let me call it [? z-- ?] there exists an x in A such that h of x equals z. 
Now, we used the fact that f and g were injective to conclude that the composition is injective. So it stands to reason that we're going to use the fact that they're both surjective to prove that h is surjective. So we need to prove that for all z in C there exists an x in A as such that h of x equals z. So let z be in C. Now, we need to find some x in A that gets mapped to z. We'll use that g and f are both surjective. And what's the picture that goes along with this? Here's the sets C, B, and A. We have some element of z And we know that since g is surjective, there's some element in B that gets mapped to z by G. But now, since f going from A to B is surjective, there exists some x which maps to y. And then that's the whole argument. That's it. I drew this picture. But now, I just need to turn it into English using the properties and assumptions that I have. Since g is surjective, there exist a y in B such that g of y equals z. Since f is surjective, there exists an x in A so that f of x equals y. But then if I look at h of x, this is equal to g of f of x, which is equal to g of y, which is equal to z by how we've defined y. Remember, g of y equals z, how we found y and how we found x. Remember, f of x equals this y. And therefore, this map h is onto. And therefore, h is a bijection, proving this theorem. So that should help you. I'll give you many exercises to prove that the inverse of a bijection is a bijection. So back to what we were doing to begin with. We want to show the set of even natural numbers has the same size as the natural numbers, has the same cardinality, as the natural numbers and the same for the odd ones. So I'm just going to do the odd ones. Again, this will be a small exercise for you to do to prove number 2, the statement for the odd ones. So I'm going to do the even ones. And you can do the odd ones. You should, if you plan on studying more math, get used to the instructor, professor, research paper writer, textbook author giving out little exercises to make sure that you're following along and that you can do some minor task at some point during the discussion. So we're going to find-- so we want to show that the natural numbers has the same size as the even natural numbers, which is the same as this statement by that first theorem I wrote up there. Now, that means we need to find a bijection from the natural numbers to the set of even natural numbers. This should be not too bad. I mean, so again, this is off to the side. This is not part of the proof. What would be the map going from these guys to these guys? Well, I mean, there's several you could choose. But maybe the simplest is 1 gets mapped to 2, 2 gets mapped to 4, 3 gets mapped to 6, 4 gets mapped to 8, 5 to 10, 6 to 12, and so on, and so on. Now, what is that map? And now, I'll continue the proof. Let f be the function into the even natural numbers defined by f of n equals 2 times n. And so n is a natural number. So this is just formally writing the function that takes 1 to 2, 2 to 4, 3 to 6, 4, to 8, and so on, and so on. I claim f is a bijection. So I have to show that f is one-to-one and onto. So we'll first show f is one-to-one. So that means that I have to assume f of n1 equals f of n2 and conclude that n1 equals n2. So let me actually write out what this means again for you to say that f is one-to-one. Is one to one-- i.e. if f of n1 equals f of n2, then n1 equals n2. 
But this is easy to verify for this function that we've written down because if f of n1 equals f of n2-- so remember, this is what we want to show, all right? So to show it, I start with my assumption. And I need to conclude n1 equals n2. So suppose f of n1 equals f of n2, my assumption, my hypothesis. And I need to conclude n1 equals n2. Then this implies, by the definition of f, 2 times n2 equals 2 times n2, which, by algebra of just eliminating the 2's, n1 equals n2, which is the conclusion that I want. So I've proven the statement that if f of n1 equals f of n2, then n1 equals n2. Therefore, f is one-to-one. So thus, f is one-to-one. So now, we want to show f is onto, surjective. I'll write this out again. i.e. For all elements that n are an even number, there exists an n such that f of n equals m. Now, let m be an even integer. And so not to confuse us, let me write 2 times k. This is the same set. I'm just using a different dummy variable instead of n in my description of the even natural numbers. And let's do that here as well. Again, this is not changing anything. This is just changing the letter I'm using to describe the set, which is inconsequential. But I don't want you to get the false impression that somehow I'm not doing anything at all. OK, so suppose I have an even integer, then there exists-- simply by the definition of this set, there exists an n natural number so that m equals 2 times n. Then f of this natural number, which is 2 times n, equals m. And therefore, I get something that maps to m. Therefore, f is onto. Therefore, f is a bijection and the two sets have the same cardinality. All right, so let's-- where are we at? Where should I write? Let's write here. Maybe I'll leave that up because I don't want to-- now, using this-- and I'll probably put this in the homework-- I mean, one can also show-- I should say using this, but one can also show that the integers have the same size as the natural numbers, which, again, is a little bit surprising since the natural numbers are a strict subset of Z. So what's the proof? So I'm going to draw a picture, then I'm going to write down the function, and then I'm going to leave it as-- actually, I'm going to put it in the homework for you to verify that this function is one-to-one and onto. So let's say there's as many natural numbers as I want to write and as many integers as I want to write. OK, so what would be a way of mapping the integers in a one-to-one and onto fashion onto the natural numbers? Well, what we could do-- first off, let's send 0 to 1 just to get 0 out of the way. And from then on, now we just need to find a way to map the positive integers and the negative integers onto the natural numbers bigger than 2. And in some way, mentally, we should feel like we can do this because we kind of did it over there, but not explicitly. So how about we take 1 to 2, 2 to 4, 3 to 6? 4 would then go to 8. And we'll take 1 to 3, minus 2 to 5. I'm getting crossed. And then minus 3 would get sent to 7. So you see that the positive integers get mapped to the even natural numbers and the negative integers get mapped to the odd natural numbers bigger than 1. So I'm not even going to write the proof. This will be part of the homework. Now, it is a bit surprising that there are twice as many numbers as numbers, not too surprising since these subsets, if you picture them as I've been doing as subsets of the real line, you know they're kind of discrete. So you should be able to count them. 
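Both pairings described in this stretch of the lecture are concrete enough to write down and test. The sketch below is illustrative only (the function names are arbitrary, and a finite window of values has to stand in for genuine surjectivity): f(n) = 2n pairs the naturals with the even naturals, and the zig-zag map pairs the integers with the naturals exactly as in the picture, 0 -> 1, positive k -> 2k, negative k -> 2|k| + 1.

```python
def to_even(n):
    """Bijection from N = {1, 2, 3, ...} onto the even naturals: n -> 2n."""
    return 2 * n

def z_to_n(k):
    """The pairing sketched in the lecture: 0 -> 1, positives -> evens, negatives -> odds > 1."""
    if k == 0:
        return 1
    return 2 * k if k > 0 else 2 * (-k) + 1

print([to_even(n) for n in range(1, 8)])        # [2, 4, 6, 8, 10, 12, 14]
print({k: z_to_n(k) for k in range(-3, 5)})     # {-3: 7, -2: 5, -1: 3, 0: 1, 1: 2, 2: 4, 3: 6, 4: 8}

# A finite stand-in for "injective and onto": on the window -N..N the map repeats
# no value and hits every natural number up to N.
N = 1000
values = {z_to_n(k) for k in range(-N, N + 1)}
assert len(values) == 2 * N + 1
assert set(range(1, N + 1)) <= values
```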
What is not-- what is more surprising is whether or not one can count sets that, in some sense, are not discrete. For example, what about the rational numbers? So this is a theorem that-- and let me just look at the positive rational numbers. In fact, I could take all rational numbers. But for the statement of this theorem, if I look at those rational numbers which are positive, then this has the same size as the natural numbers. You can count the positive rational numbers, which is just a bit crazy to me because here, at least for, let's say, the integers, once I'm at an integer, I can move to the next biggest one and count that one in some way, right? And so that makes it believable that I can count the integers. The integers have the same size as the natural numbers, even though, at first glance, it looks like there's twice as many. But for rational numbers, between any two rational numbers, there's another rational number in between them. You just take the average of those two rational numbers. So now, this idea of being at a rational number and then moving to the next biggest one, you can't do that now. So now, it's a little bit up in the air at least whether or not one can count the rational numbers. And what I'm saying is that indeed you can. And this will be part of the homework. I will at least give you an idea of how the proof will go, what we will actually be able to write down, a map based on a simple fact. So let me not write down a proof, but let me write down a remark. So we'll actually be able to write down a map based on one simple fact, which is the fundamental theorem of arithmetic, which says that-- so this is just discussion now. More stuff will be written in the homework about this. So just try to follow along. So the fundamental theorem of arithmetic says that if you have a positive natural number, you can write it in a unique way as a product of prime numbers. Now, for rational numbers, using that, that means you can write every positive rational number uniquely as a product of prime numbers divided by another product of prime numbers, where the prime numbers up top and the prime numbers at the bottom have none in common, meaning you've simplified as much as possible. So instead of 15 over 3-- no, that's not good. Let's say 15 over 30. You have 1/2. So the map that I would take-- so what I'm saying here is that every positive rational number can be written as some product p1 to the r1 times ... times pN to the rN, divided by q1 to the s1 times ... times qM to the sM, where p1 up to pN are all primes, q1 up to qM are all primes, and r1 up to rN and s1 up to sM are all exponents, positive natural numbers. So the rj's and the sk's are natural numbers, p1 up to pN and q1 up to qM are primes, and for all j and k, qj does not equal pk. So there's no prime that appears both up here and down here. We've already simplified that away. So just so you don't think I'm fooling you, 9/2, which is a positive rational number, this is 3 squared over 2, yeah? I'm not going to do any more. That's it. So the map that we will take from this rational number to a natural number will be: this gets mapped to the natural number p1 to the 2 r1 times ... times pN to the 2 rN, times q1 to the 2 s1 minus 1 times ... times qM to the 2 sM minus 1. So basically, I map it to the natural number whose expansion in terms of prime numbers has even exponents coming from the exponents on top and odd exponents coming from the exponents on the bottom. So for example, 3 squared over 2, this would get mapped to 3 to the 4 times 2.
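Here is a short Python sketch of that encoding of the positive rationals, again only as an illustration of the idea rather than anything written in the lecture. The use of fractions.Fraction for reducing to lowest terms and of sympy.factorint for prime factorization are my own choices and are assumed to be available.

```python
# Sketch: encode a positive rational p/q (in lowest terms) as a natural number by
# doubling the exponents of primes in the numerator and using exponent 2s - 1 for
# each prime q^s in the denominator. Assumes sympy is installed, for factoring only.
from fractions import Fraction
from sympy import factorint

def encode(r) -> int:
    r = Fraction(r)                       # reduces to lowest terms automatically
    top, bottom = factorint(r.numerator), factorint(r.denominator)
    n = 1
    for p, exp in top.items():
        n *= p ** (2 * exp)               # even exponents encode the numerator
    for q, exp in bottom.items():
        n *= q ** (2 * exp - 1)           # odd exponents encode the denominator
    return n

print(encode(Fraction(9, 2)))             # 3^4 * 2^1 = 162, the lecture's example

# Spot-check injectivity on a grid of positive rationals: equal fractions must agree,
# distinct reduced fractions must get distinct codes.
seen = {}
for a in range(1, 30):
    for b in range(1, 30):
        r = Fraction(a, b)
        assert seen.setdefault(encode(r), r) == r
print("no collisions on the grid")
```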
So what we'll do in the homework is show that this map is, in fact, a bijection. So those two theorems you will prove in the homework. They won't be too bad. I will leave enough hints. So we've dealt with z. We've dealt with q, essentially. I mean, so let me actually write down-- this is really a corollary of these two theorems here, which I haven't proved, but you will prove in the homework. So this says that the rationals are countable, all the rationales, not just the positive ones. And what's the proof? So we know that-- so I'm going to give you a sketch. Should I sketch it or should I write it all out? I'm running a little bit out of time. So maybe I will just tell you why this is true. I mean, all the details are essentially here. I'm just not going to write it as carefully as I've been writing down the proofs before. So let me write this as the proof sketch. Of course, you can write these-- you can actually use the definitions and write this out. But this is the essential idea. So we have that the size of rational numbers-- so let me-- this has the same size as the rational numbers which are negative. And how do we establish this? Or instead of using the same letter, let's say r, since f of Q equals minus Q is a bijection from the first set to the second set. If I just take a rational number, positive 1, take its minus, then I get an element of the second set. And this is a bijection. OK, so thus, since this has the same size as the natural numbers, and by that theorem we proved over there, the size of this set is the same as the natural numbers-- so it is countable-- then there exists bijections f going from Q-- so the positive rational numbers-- to the natural numbers and g going from the negative ones to the natural numbers. So the picture here is-- well, let's not draw the picture yet. OK, so I have these two bijections from the positive rationales to the integers-- this one from the negative rationales to the natural numbers. How do I get a map that goes from all of q now to the natural numbers? Well, let's go in between and go to the integers. Then I define a function h, which goes now from all of Q to the integers by h of x equals 0 if x equals 0; equals f of x if x is positive; and negative g of x if x is negative. And h is a bijection. So everything up to this point has been completely fine with the exception of me verifying that this map is a bijection and this map is a bijection. So that's the only parts that I'm leaving out for you to verify. Then h is a bijection. So Q has the same cardinality as integers, which we've shown has the same cardinality as the natural numbers. And therefore, the rationals have the same cardinality as the natural numbers. So they are countably infinite. So a natural question is-- I mean, is there anything bigger than the natural numbers? Because everything I've written down, this has the size of the natural numbers. We haven't really defined the real numbers well enough yet to make any sort of claim like that about the real numbers. And now, is there just any set that is bigger in size than the natural numbers? The answer to that was unknown. And it's pretty strikingly yes, in fact. So let me phrase that question. Does there exist a set A such that A has strictly bigger cardinality than the natural numbers, so A is uncountable? So to answer this question astoundingly yes, let me first define for a general object. If A is a set, we define script p of A. This is the power set of A. This is a set that consists of all subsets of A. 
So this is the set of all sets B such that B is a subset of A. So for example, if A is the empty set, the power set-- so A is empty. What are the subsets of the empty set? Well, there's only one subset of the empty set, the empty set itself. And even though the empty set has no members, its power set has one member. If A is the set containing just 1, then the power set of A, the set of all subsets, consists of the empty set and the set itself, the set containing 1. And let's do one more. If A is the set containing 1 and 2, then the power set of A is the set which consists of the empty set, because this is a subset of this set; the set containing just 1, because this is a subset of this set; the set containing just 2; and the whole set containing 1 and 2. Now, notice something. This first set, the empty set, has cardinality 0-- strictly speaking, I should have defined that. Yet its power set has cardinality 1. The next A has size 1 because it's in one-to-one correspondence with just 1. And its power set has two elements. So it has cardinality 2. I could count them off-- 1, 2. The last A has size 2, has two elements. And the power set of A has size 4. It has four different elements-- the empty set, the set containing 1, the set containing 2, and the set containing 1 and 2. So what one can prove in general, and which will appear on the homework, is if A has size n-- so it's a finite set of size n, meaning it's in one-to-one correspondence with the numbers 1 up to n-- then its power set is also finite and has cardinality 2 to the n. And you can prove by induction, if you like, that 2 to the n is always bigger than n. And so a theorem I'll prove next time, which will finish our discussion of sets and cardinality, and then we'll move on to the real numbers, is this theorem due to Cantor, which says that not just for finite sets do we have the power set being, in some sense, strictly bigger than the original set, but for any set. So this theorem due to Cantor is: if A is a set, then the cardinality of A is strictly smaller than the cardinality of its power set. So this definitely answers our question. Does there exist a set with cardinality bigger than the natural numbers? So let me write this as a remark. In fact, maybe I'll state this as another theorem, which just follows immediately from Cantor's theorem getting used over and over again: the cardinality of the natural numbers is less than the cardinality of the power set of the natural numbers, which is less than the cardinality of the power set of the power set of the natural numbers, right? I can now take this as a set and take its power set. And I get a new set with bigger size, which is less than the power set of the power set of-- how many do we have now-- the power set of the natural numbers, and so on. And formally, this means there's an infinity of infinitudes. There's an infinite number of infinite sizes, one getting strictly-- in some sense, strictly bigger than the previous size. And maybe you're wondering-- there's one more question that's kind of sandwiched in between here, pun intended-- let's look at this first guy. Does there exist a set A such that it has size bigger than the natural numbers-- so it's uncountable-- but has size strictly smaller than the power set of the natural numbers? And this question is called the continuum hypothesis-- hypothesis because it is independent from one of the standard axiomatic treatments of set theory. But we will not touch this question. This is beyond the scope of this class, but it's an interesting question that's out there that people just don't know.
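As a small computational illustration of the power set and the count 2 to the n, here is a Python sketch (not from the lecture) that builds the power set of a few finite sets and checks the cardinality against the formula mentioned above.

```python
# Sketch: build the power set of a small finite set and verify |P(A)| = 2^|A|.
from itertools import chain, combinations

def power_set(A):
    """Return the set of all subsets of A, each subset as a frozenset."""
    elems = list(A)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))}

for A in [set(), {1}, {1, 2}, {1, 2, 3}, {"a", "b", "c", "d"}]:
    P = power_set(A)
    assert len(P) == 2 ** len(A)
    print(len(A), "->", len(P))
# Prints 0 -> 1, 1 -> 2, 2 -> 4, 3 -> 8, 4 -> 16, matching the examples in the lecture.
```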
MIT_18100A_Real_Analysis_Fall_2020
Lecture_8_The_Squeeze_Theorem_and_Operations_Involving_Convergent_Sequences.txt
[SQUEAKING] [RUSTLING] [CLICKING] CASEY RODRIGUES: So I'm going to prove a few theorems about limits, which will allow us to compute limits, or at least we can use to prove that other non-trivial limits exist using these theorems, rather than using the definition directly. So this first theorem is the easiest theorem in the world because it's simply just restating the definition of convergence of a sequence. So I'm going to state it as follows, so pretty short, that if I have a sequence x sub n, then it converges to x if and only if the sequence obtained by taking the absolute value of x of n minus x equals 0, or the limit of that sequence is 0. So what is the proof? It follows just immediately from the definition. So I'm not even going to write anything. I'll leave it to you. But the proof follows from the definition and the simple fact that x sub n minus x in absolute value is equal to the absolute value of the absolute value of x sub n minus x minus 0, OK. So the definition says for all epsilon positive, you have to find an M so that this is less than epsilon for all m bigger than or equal to capital M. But if you found such a capital M for this to be less than epsilon, then this will be less than epsilon, which is saying that this limit equals 0. And then going the other direction, it's the same thing. So this is just following directly from the definition and this simple fact. OK, that was a very silly fact about limits, but a very useful one in conjunction with the next theorem, which is not so trivial, which is the squeeze theorem. And it says the following. So let an, bn, and xn be sequences such that the following holds for all n natural numbers, a sub n is less than or equal x sub n is less than or equal to b sub n. And a sub n and b sub n are convergent sequences. And their limits equal each other. And they're given by some number, call it x. Then the conclusion is that the limit as n goes to infinity of x sub n equals x. So when I write something like this, you should also kind of-- there's half a sentence before that that goes with this saying, x of n is a conversion sequence. And its limit is equal to x. So it's two statements in one when I write that. So if you just draw a picture, the squeezed theorem shouldn't be too surprising. So this is a little discussion. So here's x, the common limit of a sub n and b sub n. And so we can imagine that we're trying to show the limit as n goes to infinity of x sub n equals x. So that means we have to find for every epsilon a capital M so that x sub n is between x plus epsilon and x minus epsilon. So if I go out a little bit, I would hope I can find a natural number, a capital M so that x sub n minus x-- or x sub n is in this little interval. Now, if I'm assuming that a sub n and x sub n, if a sub n and b sub n are squeezing x sub n, in other words x sub n is between the two, and b sub n is converging to x, then for n bigger than or equal to some integer M0, all of the b sub n's are in this interval. OK, maybe they're not there. Maybe they could be over here. But the way I drew it is just to the right of x. And, likewise, since a sub n is converging to x, there exists some other integer M sub 1 so that if I look at a sub n, it's also in this interval. Maybe it's over here. Maybe it's-- well, it can't be to the right of b sub n because that inequality up there strictly implies that a sub n is less than or equal to b sub n. But it's in this interval. 
So then what would that say if I look at n bigger than or equal to n plus M1, M0 plus M1, then n is bigger than M0, this guy. And n is bigger than both and M1, this guy. And therefore if I look at x sub n, it's going to be between these two. And in particular, it's going to be in this interval. So that's the proof in a picture. And now our goal is just to write it down. So that's the picture of the proof. But if you're actually trying to guess why this would be true? I mean, the b sub n's, you can imagine, are getting very close to x. The a sub n's are also getting very close to x. x sub n is in between the two, so it's getting squeezed to x, and thus the name. OK, so now we just need to turn this into written word. So we need to show-- we're going to show that x sub n converges to x. So we're-- all we have is an epsilon delta proof. Or we could use that theorem. But let's go with an epsilon delta proof. I mean, not epsilon delta, epsilon M. Epsilon delta proofs will come later. So let epsilon be positive. And since b sub n converges to x, there exist to M0 in natural numbers such that for all n bigger than or equal to n sub 0, b sub n minus x is less than epsilon in absolute value, which is the same as saying-- well, I mean it's not the same. But it implies that b sub n is less than x plus epsilon. Since a sub n's converge to x as well, there exists a M sub 1, natural number, such that for all n bigger than or equal M1, a sub n minus x in absolute value is less than epsilon, which implies a sub n is between x minus epsilon and x plus epsilon, but I'm only going to use one of those inequalities. Now I'm going to choose the capital M for my sequence x sub n, I'm trying to show convergence to x. So choose m to be m sub-zero plus M sub 1. I mean, you could have chosen it to be the maximum of the two. But this works just fine as well. Then if n is bigger than or equal to m, this implies n is bigger than or equal to M0 and M is bigger than or equal to M1, which implies that both of these inequalities here are valid for this n. So then x minus epsilon is less than a sub n, which by assumption, is less than or equal to x sub n, which is less than or equal to b sub n, which is less than x plus epsilon. Now, these string of inequalities, therefore, tell us that x minus epsilon is less than x sub n is less than x plus epsilon, which is equivalent to saying the absolute value of x, so then minus x is less than epsilon. And therefore x sub n converges to x. So these two facts together give us a very robust and short way to prove limits of sequences. So, for example, let me give you a simple one. Let's show the limit as n goes to infinity of n squared over n squared plus n plus 1 equals 1. So, I mean, this is a very simple limit to use these theorems on. But in practice, you don't always have just a simple expression like this. And we'll use these two theorems in conjunction to prove some other theorems here in a minute. But let's just see it in action once. Let me look at-- so this sequence converges to 1 if and only if the absolute value of the difference converges to 0. So let's look at the absolute value of the difference. This is equal to-- now just doing the algebra-- this is equal to-- and taking absolute values gives me n plus 1 over n squared plus n plus 1. And this is less than or equal to. This 1 is making things bigger, so I can drop it. And this is less than or equal to n plus 1 over n squared plus n. Now, n squared plus n I can factor into n times n plus 1. 
So then that cancels with this n plus 1 on top. And I just get 1 over n. So 0 is less than or equal to n squared over n squared plus n plus 1 minus 1, which is less than or equal to 1 over n. Now, 1 over n we've shown using epsilon delta proof-- I mean, epsilon M proof. We've shown 1 over n converges to 0. So since 0 converges to 0, the left side, and 1 over n converges to 0, the right side, that implies by the squeeze theorem, n squared over n squared plus n plus 1 minus 1 converges to 0 by the squeeze theorem. Which implies that n squared over n squared plus n plus 1 converges to 1 by that first theorem. OK now you can imagine that instead of having this at your disposal, I asked you to do an epsilon proof, an epsilon M proof of this statement, then you would have taken this and played with it just like we did here and gotten to 1 over n. And therefore if this is less than epsilon, that would imply this is less than epsilon. So you would choose capital M to be-- so that 1 over capital M is less than epsilon. But using these two theorems saves us a little work and a little time. OK, now, so at the end of the lecture last time, we discussed the notion of subsequences. And we showed that limiting-- limits and subsequences interact nicely, meaning if I have a convergent sequence, then every subsequence converges to the same thing. So now a natural question is, how do limits interact with the order of the real numbers? R has these two fundamental properties about it, that it's-- so first off, that it has the least upper bound property, and also that it's an ordered field. So first natural question is, how does this definition of limit convergence interact with the order? So the first theorem states, or this term that kind of answers this question is the following is that limits respect order, basically. So if xn, yn, are convergent sequences and for all n, xn is less than or equal to yn, then what should be the conclusion? The limit as n goes to infinity. So what I said a minute ago as limits respect inequality. Then I should be able to take the limits of both sides and still have this inequality. Then limit as n goes to infinity is less than or equal to limit as n goes to infinity of y sub n. And a simple corollary that follows from this is that if x sub n is a convergent sequence, and for all n a natural number, you have two numbers a and b, such that a sub n is less than or equal to-- or a is less than or equal to x sub n is less than or equal to b, then this implies that the limit of x sub n is also between a and b. Let me say something very brief about what this says and doesn't say. So what does this not say? You may lose a strict inequality, meaning what? It can be the case that x sub n is less than y sub n for all of n, but the limit of x sub n equals the limit of y sub n. So simply having less than x sub n less than y sub n does not imply the limit of x sub n is less than the limit of y sub n. All right, so what do I mean by this? This does not imply that the limit is less than y sub n. Now, at this point in class, I would ask somebody to give me a counter example. So at this point, I'm going to take a bite of my cookie and let you think about that. I didn't have to take a bite of my cookie. You could have just paused the video and thought about it. But then I wouldn't get a bite of my cookie. OK, so what's an example of two sequences that satisfy this, but don't satisfy this? If x sub n equals 0 for all n, and y sub n equals 1 over n, then the x sub n is less than y sub n for all n. 
And what's the limit x sub n? Well, that's just 0. And before the limit of y sub n, that's just 0. And these two things equal each other. This is not less than that, OK. So just want to make that small point. Could pop up as a small question on one of the midterms-- I should say the midterm-- and possibly the final. All right, so let's prove-- [COUGHS] let's prove 1. 2 follows immediately from one. For two, we simply take, for example, the upper inequality x sub n is less than or equal to b, we take y sub n to be the constant sequence b. And for the other one, we would take the bigger sequence to be x sub n, and the smaller sequence to be the constant sequence a. So two follows immediately from 1, so we're just going to do 1. And we haven't done a proof by contradiction in a while, so why not do it by contradiction? OK, so the proof-- let's label these sequences so I don't have to write limit as n goes to infinity, and limit of x sub n, and limit as n goes to infinity of y sub n so much. So let's call their limits x and y. And what do we want to show? x is less than or equal to y. That's our goal. And we're going to prove this by contradiction. That is, let's assume y is less than x, and arrive at a false statement, which contradicts our setting our assumptions that we have. Assume y is less than x. Now, let me draw a picture here to go along with what's going to happen. So if y is less than x, all of the y sub n's have to be near y if I go far enough out. And all the x sub n's have to be near x as long as I go far enough out. So that would contradict eventually the fact that the x sub ns are supposed to be less than or equal to the y sub n's, if y is less than x. And let's say I go out, let's say, half the distance between y and x. So this is like the midpoint. And all the y sub n's are here, and all the x sub n's are here, and the y sub n's are here, then I cannot have x sub n less than or equal to y sub n, which is my assumption, which is the assumption in my theorem. Now, maybe I should have written this here. Sometimes it's good to reiterate what your assumptions are. Suppose for all n, x sub n is less than or-- y sub n. And-- like that. And so the thing we're trying to show is this, OK. So we assume the negation of this and arrive at a contradiction to the things we're assuming, or just to general true facts. I had to add the word true on the facts, because they're, like I said, at some point there's alternative facts flying around out there. OK, so this picture, we're going to turn this into a proof. So since yn converges to y, there exists a natural number M sub 0 such that for all n bigger than or equal to M sub 0, y sub n minus y is less than x minus y over 2. Now, that's a positive number, because we're assuming x is bigger than y. And by the definition of limit, given any positive number, I can find an integer so that for all n bigger than or equal to that integer, this thing is less than that small number. And I'm just choosing that small number to be this very special small number, because that's going to help me arrive at a contradiction. Since xn converges to x, there exist M1 natural number such that for all n bigger than or equal to M1, x sub n minus x is less than the same thing. OK, so this is putting into a precise form what I was saying that all the x sub n's to be close to x eventually. And all the y sub n's to be close to y eventually. And how eventually? Well, eventually enough so that I'm in these two disjoint intervals, OK. Let n be M sub 0 plus M sub 1. 
Then n is bigger than or equal to M sub 0. And n is bigger than or equal to M sub 1. So both of these inequalities are valid for this n and what does this mean? Well, then that implies that y sub n-- so let me tack on one more inequality. Just by removing the absolute values and adding y, this tells me y sub n is less than x plus y over 2. And this tells me x sub n is less than-- or the other way, sorry. Well, we'll just write this down here. So Then y sub n is less than y plus x minus y over 2, which equals x plus y over 2, which equals x minus x minus y over 2. And this is less than x sub n by the second inequality. So this follows from the first inequality. This follows from the second inequality, which implies for this specific n, y sub n is less than x sub n. And this is a contradiction to our assumption that y sub n is bigger than or equal to x sub n for all n. So we had just found, based on this assumption here, that we arrive at a contradiction to our other assumptions. And therefore this must be false. So that's how limits interact with inequalities. So if I have two sequences, one bigger than the other, then the limits respect that inequality. That has to deal with the order part of R being an ordered field. So what about the field part of R being an ordered field? So how does limits interact with algebraic operations? All right, quite well, it turns out. So let's in theorem. So suppose I have two convergence sequences, limit as n goes to infinity of x sub n equals x, and limit as n goes to infinity of y sub n equals y, then several things hold. The first is that, again, you should kind of read this as two statements written in one, limit as n goes to infinity of x sub n plus y sub n. So this is a new sequence that I formed by just taking the term-by-term sum of these two things. This sequence, this new sequence is convergent. And the limit equals the sum of the limits, OK? The second is, for all c in R, the limit as n goes to infinity of the new sequence obtained by taking every entry of the sequence x sub n and multiplying it by this fixed number c, the limit of that product is the product of c and x. So limits respect what one would call scalar multiplication. But we could be more general than that. c you can think of as just one example of a convergent sequence, just a constant sequence. But, in general, we have that the product of two convergent sequences is convergent. And the limit of the product is the product of the limit. And, finally, if we have something for-- write it over here-- if we have something for a product, then perhaps we have something by quotient. And that's as long as we can divide things. So if for all n, y sub n does not equal 0, and the limit y does not equal 0, then limit of the quotient x sub n over y sub n is the quotient of the limits, OK. OK, so we're going to prove this first one using this scheme of using both this simple theorem about limits and the squeeze theorem. And it's quite simple. So we just use the triangle inequality. By the triangle inequality, 0 is less than or equal to x sub n plus y sub n minus x plus . And x sub n minus x, y sub n minus y. And then I use the triangle inequality. I get x sub n minus x plus y sub n-- oh-- you know what, scratch that. Because, in fact, I was trying to be too clever and would have ended up using the theorem to prove the theorem. So let's not do that. It's never good to use the theorem you're trying to prove to prove the theorem that you're trying to prove. 
So let's go back to basics and use the definition. Let epsilon be positive. So since x sub n converges to x, there exists a natural number M sub 0 such that for all n bigger than or equal to M sub 0, x sub n minus x-- it's supposed to be an n, but it looks like a k-- is less than epsilon over 2 in absolute value. Why the 2? Well, you'll see in a minute. And similarly for the sequence y, there exists M1, a natural number, such that for all n bigger than or equal to M1, y sub n minus y is less than epsilon over 2. So I have these two integers, which are given to me by the fact that x sub n converges to x, y sub n converges to y, and the definition of convergence. I can always find these two integers for any small tolerance. And I'm choosing the tolerance to be epsilon over 2 for some reason, which you'll see in a minute. And so what is the integer capital M that I choose for this epsilon for the sequence x sub n plus y sub n? I'm going to choose M to be M0 plus M1. And now I need to show that this choice of M works. And if n is bigger than or equal to M, this implies that n is bigger than or equal to M0. And n is bigger than or equal to M1. So both of these inequalities are valid for this n. And therefore I get x sub n plus y sub n minus the quantity x plus y. And now I do what I was going to do a minute ago when I was going to use the theorem to prove a theorem. x sub n minus x, y sub n minus y, I group those together. And then I use the triangle inequality. This is less than or equal to x sub n minus x plus y sub n minus y. And now this is less than epsilon over 2. This is less than epsilon over 2, so it equals epsilon. And now you can see why I chose the 2. Because I wanted to show this was less than epsilon. And I had control over these two things. And the sum of controls gives me epsilon. So I choose the control to be epsilon over 2. If I had three sequences, then you could probably guess what I would choose, which is epsilon over 3. Meaning if I had sequences x sub n, y sub n, and z sub n, and I looked at the sum x sub n plus y sub n plus z sub n, I could show that converges to the sum of the limits. And I would choose these integers M sub 0, M sub 1, M sub 2, so that I have epsilon over 3 here, so that they sum up to epsilon. So now we prove 2, that for this single scalar multiplication, if you like, where you just multiply each term by a single number, the limit respects that multiplication. Do an epsilon proof again. Since x sub n converges to x-- so now we're trying to show that second limit-- there exists M sub 0, a natural number, such that for all n bigger than or equal to M sub 0, x sub n minus x is less than epsilon over the absolute value of c plus 1, OK. And now you have to trust me, why that thing? Well, you'll see. It'll come out just like this did. So for the sequence c times x sub n, we'll choose M to be just this M sub 0. Then if n is bigger than or equal to M, which is equal to M sub 0, this implies this inequality holds. And therefore c times x sub n minus c times x-- this c pops out of the absolute value and becomes the absolute value of c times the absolute value of x sub n minus x. And which is less than-- so I'm writing less than here. But this thing is less than that. It's less than the absolute value of c times epsilon over the absolute value of c plus 1, which is the absolute value of c over the absolute value of c plus 1, times epsilon. Now, this quotient here, this number over this number plus 1, is always less than 1. So this positive number which is less than 1-- or non-negative number which is less than 1-- is going to be less than 1 times epsilon, which gives me epsilon.
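For a quick numerical sanity check of parts 1 and 2 (my own sketch, not something done in the lecture), one can take the sequences from the earlier example, x sub n = n squared over n squared plus n plus 1 and y sub n = 1 over n, and watch the sum and a scalar multiple approach the expected limits.

```python
# Sketch: numerically illustrate lim (x_n + y_n) = lim x_n + lim y_n
# and lim (c * x_n) = c * lim x_n, using sequences from earlier in the lecture.
def x(n):
    return n**2 / (n**2 + n + 1)   # converges to 1

def y(n):
    return 1 / n                   # converges to 0

c = -3.0
for n in [10, 100, 1_000, 10_000, 100_000]:
    print(n,
          abs((x(n) + y(n)) - (1 + 0)),   # should shrink toward 0
          abs(c * x(n) - c * 1))          # should shrink toward 0
# This is only evidence for finitely many n, not a proof -- the epsilon-M
# arguments above are what actually establish the limits.
```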
And maybe you're wondering why didn't I choose-- so this is just a little smidgen of sophistication, not much, just a little. Why didn't I choose this so that it's epsilon over the absolute value of c so that when I stick in my inequality for this guy I just get epsilon? Well, what if c equals 0? So then I'm telling you to choose capital M sub 0 so that the absolute value of x sub n minus x is less than epsilon over 0. Division by 0 is a no no. But if we fudge it a little by adding 1 we get something that still does the job. It still gives me some number which is non-negative and less than 1, which is enough, OK. So let's prove that the limit of the product is the product of the limits. Since the sequence y sub n converges to y, it's a convergent sequence, and therefore it's bounded. That means that there exists some non-negative real number b such that for all natural numbers n, y sub n is less than or equal to b in absolute value. Then I look at x sub n times y sub n minus x times y and I add and subtract x times y sub n. You can write this as plus. And now I use the triangle inequality that this is less than or equal to x sub n minus x times y sub n plus y sub n minus y times absolute value of x. And y sub n is bounded by b for all n. So this is less than or equal to-- plus y sub n minus y times the absolute value of x. Now, let me just state the obvious that we get 0 is less than or equal to x sub n times y sub n minus x times y. And this is less than or equal to, as we've shown here, plus x times y sub n minus y. Now, the right-hand side-- so the left side of this inequality converges to 0. The right side also converges to 0 because we've just shown by 1 and 2. By 2, this converges to 0. And by that theorem, since xn converges to x, this product here converges to 0. b is a fixed number. And the same thing for this, that converges to 0. And therefore by 2, this sum converges to 0. OK, so these two arrows are by 2. And this is by 1. So let me just summarize by 1 and 2 the right-hand side of this inequality, b times x sub n minus x plus an absolute value of x times y sub n minus y. This converges to 0. Which by the squeeze theorem implies that-- this is by squeeze theorem-- which implies that x sub n times y sub n converges to x times y by that first theorem. So I'm not going to keep referring to that first theorem. Because it's such a simple fact, I'm just going to keep using it without referencing it, namely that the sequence, a sequence converges to x if and only if the absolute value of this thing, this difference, converges to 0, OK. All right, so that proves the limit of the product is the product of the limits. Now, for the quotient we can use 3 once we've proven it for just 1 over y sub n. So what do I mean? So now we're assuming the y sub n is not equal to 0 for all n. And y does not equal 0. So if we prove this statement that the limit as n goes to infinity of 1 over y sub n equals 1 over y, then by 3 implies that the limit as n goes to infinity of x sub n over y sub n equals x over y. Because x sub n over y sub n is just a product of x sub n and 1 over y sub n. So we just need to prove this special case if you like. And we're to do it kind of the same way. Now because we're dividing by y sub n, so here we use that we had an upper bound for the product here. But when we take 1 over y sub n to get an upper bound of that, it means we need a lower bound on y sub n, on the absolute values of y sub n. 
And we get that from our assumption that the y sub n's are non-zero for all n and that the limit is non-zero. So the first thing we prove, or let me write this as a claim: there exists a positive number b, little b, such that for all natural numbers n, the absolute value of y sub n is bigger than or equal to b. And we know that a convergent sequence is bounded. So we know that there's always a capital B so that the absolute value of y sub n is less than or equal to B. But for sequences that are non-zero and converging to a non-zero limit, you can also bound them away from 0. And it's kind of the same proof that we gave for showing that a convergent sequence is bounded. So let me draw a quick picture. And let's assume that the limit is positive just for the sake of the picture. So this is a little discussion on why this is true. And this picture is going to look-- at least the explanation is going to be kind of similar to why a sequence is bounded. And this one is why it is bounded below. So let's assume that the limit y is positive. And let's say I go out within distance, let's say, y over 2 so that I'm still positive. So this is y minus the absolute value of y over 2. So in this picture, y is positive, so that distance is just y over 2. Then what can I say? That eventually all of the y sub n's are here in this interval away from 0. And, in fact, their absolute value is bounded below by y over 2. So let me just write it that way. So all the y sub n's, for n bigger than or equal to some M, all have to lie in this interval because they're converging to y and y is positive. And therefore in absolute value, they're all bounded above-- below, I mean-- by y over 2. They're all at least distance y over 2 from 0. OK, and then all that's left to handle are maybe the finitely many that are left, y sub 1, y sub 2, up to y sub M minus 1, that are scattered on the real line but are non-zero. So we're just going to end up taking the minimum of this number and the absolute values of those numbers. So since the y sub n's converge to y, and y does not equal 0, there exists an integer M such that for all n bigger than or equal to capital M-- so this picture was why this claim is true. It was not the proof of why this claim is true. What I'm writing now is the actual proof of why this claim is true-- such that for all n bigger than or equal to capital M, y sub n minus y in absolute value is less than the absolute value of y over 2. So for the picture I drew where y is positive, that would have just been y over 2. But you have to use an absolute value for the other case, that y is negative, because this has to be a positive number. Then for all n bigger than or equal to M, this inequality and the triangle inequality give me-- so if I look at the absolute value of y, this is equal to the absolute value of y minus y sub n plus y sub n. And now I use the triangle inequality, which is less than-- this is less than the absolute value of y over 2 plus the absolute value of y sub n. And I started off with the absolute value of y. So when I subtract that over, that tells me that the absolute value of y over 2 is less than the absolute value of y sub n for all n bigger than or equal to capital M. So then I let b be the minimum of several numbers. I know I'm writing min, but I should write inf. But so, if you like, let me write the inf of the absolute value of y1, up to the absolute value of y sub M minus 1, and the absolute value of y over 2. And by what you are doing on the assignment-- I think it was assignment 2-- this inf always exists for a finite set. This is a finite set of positive numbers. And therefore its infimum exists as one of these elements. One of these M numbers is the infimum.
And they're all positive, so this is a positive number. And then, simply by how this number is defined, it follows that for all n, the absolute value of y sub n is bigger than or equal to b. Because, again, if n is between-- little n is between 1 and capital M minus 1, then certainly the absolute value of y sub n is bigger than or equal to the smallest of all of these, which is bigger than or equal to b. And if n is bigger than or equal to capital M, then we proved over here that the absolute value of y sub n is bigger than the absolute value of y over 2, which is bigger than or equal to the minimum of those numbers and the absolute value of y over 2, which equals b. All right, so that proves the claim. That proves the claim. But we haven't proved what we wanted to yet, that the limit as n goes to infinity of 1 over y sub n equals 1 over y. But this follows almost immediately from what we've done so far. So now we're going to use the claim to prove it. So we compute that 0 is less than or equal to the absolute value of 1 over y sub n minus 1 over y. We're going to show that this goes to 0 using those two theorems. And so by algebra, and using the absolute value, this is the absolute value of y sub n minus y over the absolute value of y sub n times the absolute value of y. Now the absolute value of y sub n is bigger than or equal to b. So this is less than or equal to 1 over b times the absolute value of y, times the absolute value of y sub n minus y. So just to summarize, we've shown that 0 is less than or equal to the absolute value of 1 over y sub n minus 1 over y, which is less than or equal to 1 over b times the absolute value of y, times the absolute value of y sub n minus y. Now, the left side goes to 0 because it's just the constant sequence 0. The right side converges to 0 because y sub n minus y in absolute value goes to 0, and this is just a fixed number times that. And by what we've proven for 2, this product converges to 0. So by the squeeze theorem, we get that the absolute value of 1 over y sub n minus 1 over y converges to 0, which implies that 1 over y sub n converges to 1 over y. OK. So another big property about the real numbers that we proved after we stated the existence of the real numbers-- which, remember, is defined as the ordered field with the least upper bound property-- we proved that the square root of 2 exists as a real number. There was really nothing special about 2. In fact, you could prove that the square root of x exists as a real number for any x that's non-negative. So the square root of a non-negative number is well defined and always exists as a real number. So you can ask, how do limits interact with square roots? And they interact just as you think they should. If I have a sequence so that for all n, x sub n is bigger than or equal to 0, and it's a convergent sequence, converging to some number x, then the limit of the square roots of these guys equals the square root of the limit, OK. Now, I want you to just take a second here and understand that this is a meaningful statement. Because since the x sub n's are all non-negative, by a theorem that-- let's see, did I erase it already? The one that had to deal with limits and the order-- so since the x sub n's are all non-negative, that implies that x is non-negative, so that the square root is meaningful. OK, so the first check, whenever somebody says here's this theorem, or I think this theorem is true, is to check to make sure that the theorem is meaningful. So let's prove this. So there's two cases to consider, x is equal to 0 or x is non-zero. So let's do the first case. So the limit is 0.
So we'll do this proof using the definition of limits, meaning the epsilon M definition. So we want to show that the limit of the square root of x sub n equals 0. So let epsilon be positive. And since x sub n converges to 0, there exists a natural number M sub 0 such that if n is bigger than or equal to M sub 0, then x sub n minus the limit, which is just 0, in absolute value-- which is just the absolute value of x sub n, which is equal to x sub n because x sub n is non-negative-- is less than epsilon squared. Remember, I can always find, no matter what is underneath my hand, since x sub n converges to 0, I can find a natural number so that that thing is less than what's underneath my hand. And the thing that I'm going to have underneath my hand that's going to make things work out for the square root is epsilon squared, OK. Choose M to be M sub 0. So I'm going to show this M works for the sequence square root of x sub n. For n bigger than or equal to M, the square root of x sub n minus 0 in absolute value is just the square root of x sub n. Now, it's also-- so we didn't, strictly speaking, prove this-- but it's not too hard to show that square roots respect inequalities. So x sub n is less than epsilon squared. So the square root of x sub n is less than the square root of epsilon squared, which equals epsilon. So the second case is x not equal to 0. And to do this case, we'll use those two theorems again. And let's look at the square root of x sub n minus the square root of x. Now, if I multiply top and bottom by the square root of x sub n plus the square root of x, which is a positive number-- it's fine to divide by it, because x is non-zero. So here, not just non-zero, it's positive, because x has to be non-negative. Now this is the product of something minus something else with something plus something else. So that's going to be the difference of the squares. So that's equal to the absolute value of x sub n minus x over the square root of x sub n plus the square root of x-- and, again, these are non-negative numbers, so they come out of the absolute value. And the square root of x sub n is non-negative. So, in fact, it's only making things bigger on the bottom and therefore the whole thing smaller overall. So this is less than or equal to what I get if I just replace the bottom by the square root of x. So what did I prove? That the absolute value of the square root of x sub n minus the square root of x is less than or equal to 1 over the square root of x, times the absolute value of x sub n minus x. All right. And so by assumption, x sub n converges to x. So this goes to 0. This is a fixed number multiplied by this thing going to 0. So by number 2 and the theorem we proved before, this whole product converges to 0. And, of course, the left side, 0, converges to 0. So this thing in the middle must converge to 0 by the squeeze theorem. So that's the square root. And let me just remark that-- so this number 3 up here, that the limit of the product is the product of the limits, this implies that x sub n squared converges to x squared. So the square of x sub n converges to the square of the limit. And by induction, you can show that the cube, fourth power, fifth power of x sub n converges to the cube, fourth power, fifth power of x. And not only that, you can also prove-- so I'm not going to do this, and I'm not going to force you to either, but just know these as facts-- that I don't have to just take the square root, I could take the k-th root. And this statement is still true: if x sub n is bigger than or equal to 0 for all n, and I have a limit, then the k-th root of x sub n converges to the k-th root of x. So the final theorem we'll prove for today, which will conclude our facts about limits, is-- we have-- I mean, we've been using this all along, although I haven't drawn special attention to it.
The real numbers there, this, again, like I said, an ordered field with a least upper bound property. So we've seen how limits interact with this structure of the real numbers. But they also have a distance associated to them, the absolute value. The distance from two numbers a and b is the absolute value of a minus b. Or the distance from a number to 0 is just the absolute value of that number. So one could ask, how does the limit interact with this additional structure of the absolute value? And so just like everything's been good so far with limits, it's the, same with the absolute value namely that limits respect absolute value. So if x sub n is convergent sequence, the limit x, then the sequence of absolute values is also a convergent sequence. And the limit as n goes to infinity of the absolute values equals the absolute value of the limit. Now let me, again, let's try to think a little more deeply about this real quick. If I have a convergence sequence and the absolute values converge, does the converse hold? If the absolute values converge, does this imply that the original sequence converges? And the answer is, of course, no. Well, not of course. I didn't even give you a minute to think about it. But why is the converse not true? So let me make this a remark. Converse is not true because you could look at x sub n equals minus 1 to the n. Then the absolute values of these guys converges, but the original sequence does not converge. So this is a one-way street for convergence and the convergence of the absolute values. So before I prove this theorem, let me just prove a quick inequality which I think I said I was going to put on the assignment, and then forgot to put on the assignment. But you should know it. So this theorem is the reverse triangle inequality which states for all a, b real number, the absolute value of the difference in absolute values is less than or equal to the absolute value of the difference. OK, so the proof of this reverse triangle inequality just uses the original triangle inequality. So absolute value of a, this is equal to the absolute value of a minus b plus b. And this is less than or equal to the absolute value of a minus b plus absolute value of b. And thus the absolute value of a minus the absolute value of b is less than or equal to the absolute value of a minus b. Now, these are just two numbers, I mean two letters, reverse the letters. In this argument replace a with b and b with a. So then I get b minus the absolute value. The absolute value of b minus the absolute value of a is less than or equal to the absolute value of b minus a, which is the same as this. And therefore-- let me multiply through by minus 1, and that tells me-- OK, so I have this is less than or equal to the absolute value of a minus b. I also have it's bigger than or equal to minus the absolute value of a minus b. And therefore the absolute value of a minus the absolute value of b is less than or equal to absolute value of a minus b. So I it's taboo to write on the very back board, but I'm going to do it anyway. So that was the proof of the reverse triangle inequality. Here's the proof of the theorem before it. It just follows from the reverse triangle inequality and that combination of those two theorems over there. We have that the absolute value of x sub n minus the absolute value of x. This is less than or equal to, by the reverse triangle inequality, the absolute value of x sub n minus x. So this is by the reverse triangle inequality. 
And by assumption, this converges to 0 as n goes to infinity. So by the squeeze theorem, this goes to 0. And therefore the absolute value of x sub n converges to the absolute value of x. And that's it for that proof for this lecture and for this week.
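To close, here is a small numerical illustration (again my own sketch, not part of the lecture) of the reverse triangle inequality and of the facts that the absolute values and square roots of a convergent non-negative sequence converge to the absolute value and square root of the limit.

```python
# Sketch: spot-check the reverse triangle inequality and the facts that
# |x_n| -> |x| and sqrt(x_n) -> sqrt(x) when x_n -> x with x_n >= 0.
import math

pairs = [(3.2, -1.5), (-4.0, -7.25), (0.0, 2.0), (5.5, 5.5)]
for a, b in pairs:
    assert abs(abs(a) - abs(b)) <= abs(a - b)     # reverse triangle inequality

def x(n):
    return 4.0 + (-1) ** n / n                    # converges to 4, positive for n >= 1

for n in [10, 100, 1_000, 10_000]:
    print(n,
          abs(abs(x(n)) - abs(4.0)),              # |x_n| -> |x|
          abs(math.sqrt(x(n)) - math.sqrt(4.0)))  # sqrt(x_n) -> sqrt(x)
# Again, finitely many terms are only an illustration; the proofs above do the work.
```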